For years, the cybersecurity industry has warned that AI would eventually be weaponized by hackers. That theoretical future just became the present.
Google’s threat intelligence team has identified what it describes as likely the first documented case of cybercriminals using a large language model to discover and exploit a zero-day vulnerability in the wild. The target: a flaw in a widely used open-source system administration tool that allowed attackers to bypass two-factor authentication.
The vulnerability was found in a Python script within a popular open-source login platform. Attackers identified a flaw that, when exploited, could circumvent the 2FA protections that millions of users and organizations rely on as a critical second layer of security.
Here’s what sets this case apart from previous attacks: the exploit code itself appears to have been generated by an AI model. Google’s researchers found telltale signs of LLM output in the code, including unusually verbose inline comments and coding patterns characteristic of AI-generated text rather than human-written scripts.
Google coordinated with the affected vendor to patch the vulnerability before any confirmed damage occurred.
Zero-day vulnerabilities, by definition, are flaws that the software vendor doesn’t know about yet. Finding them has traditionally required deep technical expertise, patience, and significant time investment. That’s what made zero-days rare and expensive: a single zero-day exploit can sell for hundreds of thousands of dollars on underground markets precisely because such flaws are so hard to find.
Google’s researchers also report that state-linked actors in China and North Korea are using AI to probe for potential exploits at scale.
The specific vulnerability in this case involved bypassing two-factor authentication, which is one of the foundational security measures used across cryptocurrency exchanges, DeFi platforms, and wallet providers.
Exchanges and DeFi protocols commonly rely on open-source tools and libraries for authentication, access control, and transaction signing. If AI can systematically probe these codebases for vulnerabilities that human auditors have missed, the attack surface for the entire industry expands.
DeFi platforms face a related but distinct risk. Many decentralized protocols integrate with open-source components at various layers of their stack. Smart contract audits have become standard practice, but the security of surrounding infrastructure, including login systems, admin panels, and API gateways, doesn’t always receive the same scrutiny. AI-discovered vulnerabilities in those layers could provide attackers with indirect paths to funds that smart contract audits would never catch.
Projects and exchanges that rely heavily on open-source authentication tools should be conducting immediate reviews of their dependencies. The patch for this specific vulnerability was deployed before exploitation caused confirmed damage, but the next AI-discovered zero-day might not come with a warning from Google’s threat intelligence team.
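A dependency review starts with an accurate inventory of what is actually installed. A minimal sketch for a Python environment, assuming the output would then be fed to a vulnerability scanner such as pip-audit:

```python
# Enumerate every installed distribution and its version using the
# standard library; this inventory is the input to any audit step.
from importlib.metadata import distributions

def installed_packages() -> dict[str, str]:
    """Map each installed distribution name to its version string."""
    return {dist.metadata["Name"]: dist.version for dist in distributions()}

for name, version in sorted(installed_packages().items()):
    print(f"{name}=={version}")
```

The same principle applies to any stack: you cannot patch a vulnerable authentication library you don’t know you’re running.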