A previously unknown security flaw that could bypass two-factor authentication was exploited in what Google says was an AI-assisted attack, now patched.

A security vulnerability that could bypass two-factor authentication in a widely used online service management tool was targeted in an exploitation attempt that likely leveraged artificial intelligence. Google disclosed this in a new report, stating that it discovered and closed the flaw in cooperation with the affected vendor, preventing the attempt from escalating into a large-scale attack wave.
The warning is based on findings from Google’s Threat Intelligence Group (GTIG), which documents a significant rise in the use of generative AI tools by cybercriminals and state-backed actors. These applications span malware development, vulnerability discovery, phishing campaigns, and automated attacks.
Google explains that the targeted flaw was not a typical coding error but a “semantic logic bug” at the system design level. Such vulnerabilities are harder to detect than standard technical mistakes, and the company believes modern AI models are increasingly capable of identifying them due to their understanding of software context.
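To illustrate the distinction, here is a deliberately simplified, entirely hypothetical sketch of a semantic logic bug in a two-factor check. Every name and detail below is invented for this example; Google has not published the actual flaw. The point is that each line is syntactically and technically correct, so no typical bug-hunting tool flags it, yet the design of the check is wrong.

```python
# Hypothetical illustration of a "semantic logic bug" -- all names and
# logic here are invented for the example, not taken from Google's report.

VALID_CODES = {}  # one-time codes issued per user: {username: code}

def issue_code(user: str, code: str) -> None:
    VALID_CODES[user] = code

def verify_2fa_buggy(session_user: str, submitted_user: str, code: str) -> bool:
    # Logic bug: the code is validated against whichever user the client
    # *claims* to be, not the user the session actually belongs to.
    # Every line "works", so this is invisible to syntax-level analysis;
    # the flaw lives in the design of the check.
    return VALID_CODES.get(submitted_user) == code

def verify_2fa_fixed(session_user: str, submitted_user: str, code: str) -> bool:
    # Correct design: the one-time code must belong to the session's user.
    return submitted_user == session_user and VALID_CODES.get(session_user) == code

issue_code("attacker", "111111")  # attacker enrolls their own 2FA device
# With stolen credentials, the attacker starts a session as "victim",
# then submits their *own* username and code at the 2FA step:
print(verify_2fa_buggy("victim", "attacker", "111111"))  # True -> bypass
print(verify_2fa_fixed("victim", "attacker", "111111"))  # False
```

Spotting this kind of flaw requires understanding what the check is *for*, not just whether the code runs, which is why the report argues AI models that grasp software context are increasingly suited to finding it.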
The report also notes several indicators in the exploit code suggesting it may have been generated using AI. These include unusual instructional documentation, misleading Common Vulnerability Scoring System (CVSS) assessments, and a structured coding style resembling data used to train machine learning models.
According to the report, the attacking group planned to use the vulnerability as part of a wider campaign after obtaining login credentials. The flaw would have allowed them to bypass two-factor authentication and gain unauthorized access to accounts.
GTIG stated that technical analysis strongly points to the use of an AI model in both discovering the vulnerability and developing the exploit, though it could not confirm whether Google’s own tools, such as Gemini, were involved.
The report warns of a more dangerous evolution toward autonomous malware. One example is an Android program called PROMPTSPY, believed to use AI interfaces to analyze a phone’s screen and execute commands like tapping, scrolling, and entering authentication codes in a semi-automated fashion.
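The semi-automated loop attributed to PROMPTSPY can be sketched roughly as follows. This is a speculative reconstruction, not the malware’s code: every function name here (`capture_screen`, `ask_model`, `perform`) is a stand-in invented for illustration.

```python
# Speculative sketch of the screen-analysis loop described for PROMPTSPY.
# All names (capture_screen, ask_model, perform) are hypothetical; the
# report does not publish the program's actual implementation.

def run_agent(capture_screen, ask_model, perform, max_steps=10):
    """Repeatedly show the current screen to an AI model and execute
    the action it suggests (tap, scroll, type) until it reports 'done'."""
    for _ in range(max_steps):
        screen = capture_screen()   # e.g. a screenshot or UI-tree dump
        action = ask_model(screen)  # model replies with an action string
        if action == "done":
            return True
        perform(action)             # tap / scroll / enter text
    return False

# Toy stand-ins so the sketch runs end to end:
screens = iter(["login_page", "otp_page", "home"])
log = []
ok = run_agent(
    capture_screen=lambda: next(screens),
    ask_model=lambda s: {"login_page": "tap:login",
                         "otp_page": "type:123456"}.get(s, "done"),
    perform=log.append,
)
print(ok, log)  # True ['tap:login', 'type:123456']
```

The danger the report highlights is exactly this structure: once the decision of *what to do next* is delegated to a model reading the screen, the malware no longer needs hard-coded logic for each target app.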
In response, Google says it is developing defensive AI tools such as Big Sleep and CodeMender. These aim to automatically detect and patch security vulnerabilities before they can be exploited, in an effort to keep pace with the acceleration of digital threats.