Google has disclosed the first known instance of a zero-day vulnerability exploited with the help of artificial intelligence. The discovery was made by the Google Threat Intelligence Group (GTIG), the company's cyber threat analysis unit. According to its AI Threat Tracker report, attackers employed an AI model to identify and exploit a flaw in a popular open-source web administration tool. The vulnerability made it possible to bypass two-factor authentication. Analysts say the operation was stopped while still in preparation for a major campaign, before any widespread attacks began. Google and the software's developers promptly patched the issue.
GTIG reports that the exploit was written in Python and was very likely generated by AI. Telltale signs included excessive inline comments, unusual code structure, detailed help menus, and even a fabricated CVSS score—something real malware authors typically don't include.
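The markers GTIG describes can be illustrated with a harmless, hypothetical snippet (this is not the actual exploit, and every name in it is invented): machine-generated tooling tends to narrate every step in comments, ship an elaborate help menu, and even self-assign a severity score.

```python
# Hypothetical proof-of-concept scaffolding, illustrating the stylistic
# traits GTIG cites as AI telltales -- NOT the real exploit code.
import argparse

# --- Vulnerability metadata (note the self-assigned score) ---
# CVSS v3.1 Base Score: 9.8 (Critical)  <- real malware authors rarely
# bother to score their own bugs; AI-generated tools often do.

def build_parser() -> argparse.ArgumentParser:
    # Create the argument parser for the proof-of-concept.
    parser = argparse.ArgumentParser(
        description="Proof-of-concept checker (illustrative placeholder).",
        epilog="Example: python poc.py --target http://127.0.0.1:8080",
    )
    # Add the target URL argument with an unusually verbose help string.
    parser.add_argument(
        "--target",
        required=True,
        help="Base URL of the host to check, e.g. http://127.0.0.1:8080",
    )
    return parser
```

Dense comment-per-line narration, a polished `--help` output, and embedded metadata like the CVSS line above are exactly the kind of overhead a human exploit author usually strips out.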
According to experts, the flaw was a logic error. The authentication system's developer introduced a contradiction in the access-checking logic, making it possible to circumvent the two-factor mechanism. Traditional security scanners missed it, but AI caught it by analyzing both the code and the intended application logic. The report specifically notes that the attackers did not use Google's Gemini models or Anthropic's solutions.
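The class of flaw described, a contradiction in access-checking logic rather than a memory-safety bug, can be sketched in a few lines. The example below is entirely hypothetical (the real tool and its code are not public): a "trusted session" shortcut, presumably meant only to avoid re-prompting, silently skips the second factor.

```python
# Minimal sketch of a 2FA logic flaw of the kind GTIG describes.
# All names are hypothetical; this does not reproduce the actual bug.

def is_authenticated(password_ok: bool, totp_ok: bool,
                     trusted_session: bool) -> bool:
    """Intended policy: password AND second factor are both required."""
    if trusted_session:
        # BUG: the shortcut returns early and never checks the
        # second factor, contradicting the intended policy.
        return password_ok
    return password_ok and totp_ok

# An attacker who can set the trusted-session flag (say, via a forged
# cookie) needs only a stolen password:
bypassed = is_authenticated(password_ok=True, totp_ok=False,
                            trusted_session=True)  # True
```

A signature-based scanner sees nothing wrong here: each branch is valid code. Spotting the bug requires comparing the code against the stated intent, which is precisely the kind of reasoning the report credits the model with.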
Experts see this incident as marking the start of a new phase in cyber threats. Where automated tools once focused on memory errors or access failures, modern language models can now analyze application architecture and uncover hidden logical contradictions that conventional defenses barely register.
GTIG chief analyst John Hultquist stated that the AI vulnerability race is not just inevitable but has already started. He added that for every zero-day vulnerability traceable to AI, there are probably many more that have gone undetected.
The report also highlights other AI-driven cyberattacks. North Korea's APT45 group leveraged AI for automated vulnerability analysis and exploit development, while China-linked attackers crafted special prompts to probe remote code execution flaws in TP-Link routers.
GTIG also documented AI being used to generate malicious code, create fake audio, and develop Android backdoors that interact with the Gemini API.
Special attention was given to an attack on the LiteLLM library, which integrates AI services. Attackers slipped malicious code into compromised PyPI packages, stealing AWS keys and GitHub tokens. Analysts note that such attacks are increasingly aimed not at the AI models themselves but at the surrounding infrastructure: APIs, connectors, and integration tools.
Cybercriminals are also exploiting the popularity of AI services as a lure. Earlier, researchers found fake Claude AI websites distributing malware via Google search ads.