In newly published research, Microsoft and OpenAI say they have detected attempts by Russian, North Korean, Iranian, and Chinese state-backed groups to use tools like ChatGPT to research targets, improve scripts, and help build social engineering techniques.
“Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent,” says Microsoft in a blog post today.
…While the use of AI in cyberattacks appears limited right now, Microsoft warns of future use cases like voice impersonation. “AI-powered fraud is another critical concern. Voice synthesis is an example of this, where a three-second voice sample can train a model to sound like anyone,” says Microsoft. “Even something as innocuous as your voicemail greeting can be used to get a sufficient sampling.”
Read more:
Warren, T. (2024, February 14). Microsoft and OpenAI say hackers are using ChatGPT to improve cyberattacks. The Verge. https://www.theverge.com/2024/2/14/24072706/microsoft-openai-cyberattack-tools-ai-chatgpt