Google Thwarts First AI-Assisted Zero-Day Cyberattack Attempt


Google’s Threat Intelligence Group (GTIG) has revealed what it describes as the first confirmed case of hackers using artificial intelligence to help develop a zero-day exploit intended for a large-scale cyberattack. In a report published on Monday, the company said it has “high confidence” that threat actors relied on an AI model to identify and weaponise a previously unknown software vulnerability.

The flaw reportedly allowed attackers to bypass two-factor authentication, potentially giving them broader access to compromised systems. Google said the hackers planned to use the exploit in a “mass exploitation event” before the company detected the activity and intervened. The company added that its proactive discovery may have prevented the exploit from being deployed on a larger scale.


Attackers Allegedly Used Publicly Available AI

Google did not identify the threat actor involved, nor did it disclose the affected company or software. However, the company confirmed that it notified the targeted organisation, which has since patched the vulnerability.

Unsurprisingly, GTIG also warned that AI models are becoming increasingly capable of supporting malicious activity. These capabilities include autonomously identifying software weaknesses, analysing targets, generating malicious code, and assisting attackers with minimal human oversight.

The company also stressed that it does not believe its own AI model, Gemini, was involved in the incident. Instead, the attackers allegedly used other publicly available AI tools to discover and weaponise the vulnerability.


“A Taste Of What’s To Come”

The report highlighted growing concerns about how cybercriminals are incorporating AI into different stages of an attack. Google specifically pointed to tools such as OpenClaw, which attackers allegedly used to identify vulnerabilities, create malware, and assist with cyberattack development.

The company added that threat groups linked to China and North Korea have shown “significant interest” in using AI for vulnerability discovery and exploitation. In an interview with the New York Times, GTIG chief analyst John Hultquist described the discovery as “a taste of what’s to come” and “the tip of the iceberg”. He also called the incident the first “tangible evidence” of AI-assisted zero-day exploitation.


Fighting Fire With Fire

While the report focused heavily on the risks posed by AI-powered cyberattacks, Google noted that AI can also strengthen defensive cybersecurity efforts. Companies across the industry are increasingly using AI to automate vulnerability detection, monitor suspicious activity, and improve threat response systems.

Still, GTIG warned that this latest discovery may only mark the beginning of more sophisticated AI-assisted attacks in the future, as both cybercriminals and security firms continue to accelerate their use of advanced AI models.


Similar Concerns Elsewhere

GTIG’s findings also echo recent concerns raised by Anthropic regarding its own advanced AI systems. In April, the company reportedly delayed the rollout of its Mythos model due to fears that criminals could use it to uncover and exploit software vulnerabilities.

The decision reportedly sent shockwaves through the cybersecurity industry. In response, Anthropic introduced Project Glasswing, an initiative that uses its AI systems to help identify and defend against high-severity software vulnerabilities before attackers can exploit them. The company released the model to a limited group of testers that included Apple, CrowdStrike, Microsoft, and Palo Alto Networks.

(Source: GTIG, via Engadget)
