Google Reports First Known Case of AI-Developed Zero-Day Exploit Used by Cybercriminals

Cybercriminals have been discovered using a zero-day security exploit that Google believes was developed with the assistance of AI, marking a significant milestone in the evolution of cyber threats.

Politico reports that Google announced on Monday that its researchers have identified what appears to be the first documented instance of hackers utilizing a zero-day exploit developed through artificial intelligence technology. The discovery was detailed in a report released by the Google Threat Intelligence Group and represents a concerning development in the cybersecurity landscape.

Zero-day exploits are among the most dangerous types of security vulnerabilities because they are unknown to security companies and lack available fixes or patches. These flaws can be exploited by attackers before developers have an opportunity to address them, making them particularly valuable to cybercriminals and state-sponsored hacking groups.

The report marks the first time Google has observed evidence suggesting that AI was used in the discovery and development of such vulnerabilities. This development comes at a time when major artificial intelligence companies, including Anthropic and OpenAI, have begun testing advanced models specifically designed to identify and exploit critical software vulnerabilities, with capabilities that surpass those of most human researchers.

According to Google’s analysis, the zero-day exploit was most likely not developed using Anthropic’s Claude Mythos model, despite that system’s impressive track record. The Mythos model has already identified thousands of vulnerabilities across all major operating systems and web browsers, demonstrating the potential power of AI in cybersecurity research.

Both the Mythos model and OpenAI’s recently announced GPT-5.5-Cyber model have attracted significant attention from the Trump administration. Government officials are currently holding ongoing meetings with industry groups to discuss potential regulatory frameworks and vetting procedures for these frontier AI models.

Before releasing its public report, Google notified the unnamed company affected by the vulnerability, allowing them to develop and release a patch to fix the security issue. This responsible disclosure approach is standard practice in the cybersecurity community.

John Hultquist, chief analyst at Google Threat Intelligence Group, emphasized the significance of the findings in a statement. “For every zero-day we can trace back to AI, there are probably many more out there,” Hultquist said. He added that the discovery makes clear that the race to use AI to find network vulnerabilities has “already begun.” Hultquist further noted that “threat actors are using AI to boost the speed, scale, and sophistication of their attacks.”

Breitbart News previously reported that Anthropic is investigating unauthorized access to its powerful Mythos AI:

The incident surfaces serious questions about whether Anthropic, valued at approximately $380 billion, can effectively safeguard its most powerful technologies from falling into the hands of malicious actors. The company had intentionally restricted the release of Claude Mythos Preview to a select group of technology firms, explicitly citing concerns that the model could be misused to launch cyber attacks at a scale and speed exceeding human capabilities.

One individual who obtained unauthorized access reportedly leveraged their permissions as a contractor for Anthropic to tap into Mythos. The company stated it had no evidence of activity extending beyond the “vendor environment,” the infrastructure that third parties use to access systems for model development. AI laboratories frequently employ third-party contractors for responsibilities such as model testing, though Anthropic did not identify which specific vendor was implicated in this incident.

The instant bestseller Code Red: The Left, the Right, China, and the Race to Control AI, written by Breitbart News social media director Wynton Hall, serves as a blueprint for conservatives to create effective AI policies not only for the nation but also for their families. This becomes even more important when the AI giants themselves are struggling to secure their AI models and cyber crooks are developing new attacks every day.

Read more at Politico here.

Lucas Nolan is a reporter for Breitbart News covering issues of AI, free speech, and online censorship.
