Anthropic's Project Glasswing: AI Cybersecurity Initiative Involving Microsoft, Apple, AWS, Nvidia Explained

The most notable aspect of the AI cybersecurity model was its discovery of vulnerabilities that had gone undetected for close to 27 years after the affected systems launched.


Anthropic announced Project Glasswing on Wednesday, an AI cybersecurity initiative that brings together major names in IT and digital infrastructure to test Anthropic's Claude Mythos Preview, an unreleased AI model specialising in finding exploits in software systems that would otherwise go undetected.

Due to its potential for misuse, the AI model is unreleased and is being used and tested internally by its "launch partners". They include AWS (Amazon Web Services), Apple, Google, Microsoft, Broadcom, Cisco, CrowdStrike, JPMorganChase, the Linux Foundation, NVIDIA, and Palo Alto Networks.


"As part of Project Glasswing, the launch partners listed above will use Mythos Preview as part of their defensive security work; Anthropic will share what we learn so the whole industry can benefit," Anthropic said in its blog post.

Anthropic also expanded access to 40 more organisations that create and maintain critical software infrastructure so they can use the AI model to scan for and assess critical vulnerabilities in first-party and open-source systems.

"Anthropic is committing up to $100M in usage credits for Mythos Preview across these efforts, as well as $4M in direct donations to open-source security organizations," the blog post said.

ALSO READ: What Is Anthropic's Claude Mythos And Why Will It Bring A 'Wave Of AI-Driven Exploits'?


What Makes Claude Mythos Preview Stand Out

Claude Mythos Preview was able to find bugs and formulate exploits in software, particularly zero-day vulnerabilities. These are flaws that developers are not aware of and hence have no defence against at the time of their discovery. Most notably, it discovered vulnerabilities that had gone undetected for close to 27 years after the affected systems were launched.

This was the case with OpenBSD, which had an exploit through which a malicious actor could crash any machine running its operating system by merely connecting to it. OpenBSD is widely used in secure servers, firewalls, and network appliances. The model further found a 16-year-old flaw in FFmpeg, which is used to encode and decode videos, in a line of code that "automated testing tools had hit five million times without ever catching the problem".


It also found and grouped together several vulnerabilities in the Linux kernel, on which the majority of servers run. These vulnerabilities could give an attacker complete control of a machine starting from ordinary user access.

Why This Matters 

Anthropic's Claude Mythos Preview is designed to counter the disruptive potential of AI in augmenting cyberattacks and making them more effective, and to prevent situations where rogue organisations or countries use cyberattacks to target vital networks such as healthcare, transport, and energy, which could endanger lives.

The company claims that AI can also do the reverse: be used to strengthen cybersecurity.

"Although the risks from AI-augmented cyberattacks are serious, there is reason for optimism: the same capabilities that make AI models dangerous in the wrong hands make them invaluable for finding and fixing flaws in important software—and for producing new software with far fewer security bugs," the blog said.

Anthropic also said that it is in talks with US government officials regarding Claude Mythos Preview and its "offensive and defensive cyber capabilities."

ALSO READ: Anthropic Drops Claude Risk Report Days After Its AI Safety Chief Resigns
