Microsoft, OpenAI Claim State-Sponsored Hackers Are Using AI to Boost Cyberattacks

Bill Hinton Photography/Getty Images

State-affiliated hacking groups are increasingly using AI to expand and improve their cyberattack capabilities, a new report from Microsoft and OpenAI claims.

Yahoo News reports that a joint study released on Wednesday by Microsoft and ChatGPT developer OpenAI reveals how hacking groups from China, Iran, North Korea, and Russia are probing the use of AI large language models (LLMs) to enhance their cyber warfare efforts.

The report identifies efforts by state-sponsored groups from four countries – Russia’s Forest Blizzard, North Korea’s Emerald Sleet, Iran’s Crimson Sandstorm, and two Chinese groups called Charcoal Typhoon and Salmon Typhoon. It documents how these groups are leveraging LLMs to learn about vulnerabilities in public software, improve social engineering tactics, develop malicious code that can bypass antivirus software, and more.

TV news at Seoul’s Yongsan Railway Station shows North Korean leader Kim Jong Un (KIM Jae-Hwan/SOPA Images/LightRocket via Getty)

For example, North Korea’s Emerald Sleet has allegedly used the technology to better understand software vulnerabilities for exploitation. The group has also allegedly used LLMs to improve phishing emails aimed at think tanks working on North Korea’s nuclear program. Meanwhile, Russia’s Forest Blizzard has tapped the AI systems to learn about satellite communications and radar technologies.

According to the report, “Cybercrime groups, nation-state threat actors, and other adversaries are exploring and testing different AI technologies as they emerge, in an attempt to understand potential value to their operations and the security controls they may need to circumvent.”

Microsoft says this trend represents an escalation in the cyber arms race, with state actors now leveraging cutting-edge AI to pursue their geopolitical goals and expand offensive capabilities. While AI may hold tremendous potential to benefit society, this development highlights the dual-use nature of the technology and the need to prevent misuse.

As AI systems grow more powerful thanks to advances in natural language processing, they could give threat actors an edge in probing system vulnerabilities, crafting convincing scams, and developing sophisticated malware. However, Microsoft and OpenAI emphasize that LLMs also present new opportunities to bolster cybersecurity defenses if harnessed responsibly.

Read more at Yahoo News here.

Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.
