
Google Stops Zero-Day Attack For First Time After Hackers Used AI To Exploit Software Flaw

The company did not reveal the identity of the hacker group but noted that its own Gemini model was not involved.

A Google report showed that it stopped an attack from hackers who used AI.
Unsplash

Google's Threat Intelligence Group said on Monday that it had stopped an attack by hackers who used artificial intelligence to “plan a mass vulnerability exploitation operation.” In a report, the group said with high confidence that the hackers had employed an AI tool to identify a flaw that let them bypass two-factor authentication, and that they intended to exploit the flaw in a mass campaign. Google intervened by alerting the AI tool's developer, which likely prevented the tool's use and thwarted the attack.

“The criminal threat actor planned to use it in a mass exploitation event but our proactive counter discovery may have prevented its use,” the report from Google read. The company, however, did not reveal the identity of the hacker group.

This marks the first time Google's Threat Intelligence Group has spotted threat actors leveraging an AI model to discover and weaponise a zero-day vulnerability, a previously unknown software flaw for which no patch exists. The company noted that its own Gemini model was not involved.

AI Tools Like OpenClaw Being Used To Carry Out Cyberattacks

Google also cited in its report several examples of hackers using AI tools like OpenClaw to discover vulnerabilities, create malware, and carry out attacks. The report noted that China- and North Korea-linked cybercrime groups “demonstrated significant interest in capitalising on AI for vulnerability discovery.” These groups pose growing risks to organisations and governments, even as cybersecurity firms race to counter AI-powered attacks.

The report comes amid rising concern over the risks AI tools pose in the cybersecurity domain. In April, Anthropic's Claude Mythos model drew scrutiny from governments and financial organisations over its potential misuse by criminals seeking to exploit software flaws. The model has been released to a limited number of testers, including Apple, Microsoft, CrowdStrike, and Palo Alto Networks.

