
Whisper Leak Attack: Microsoft Warns About Unauthorised Access To Encrypted AI Chats By Hackers

Microsoft finds a flaw in AI chatbots that could let hackers infer what users are discussing, without breaking the encryption that protects those chats.

Microsoft reveals a major security flaw that puts your AI chats at risk. (Source: Freepik)

Microsoft has disclosed a vulnerability affecting most server-based AI chatbots, which could allow hackers to identify the conversation topics on platforms such as ChatGPT and Gemini.

Named Whisper Leak, the attack is a side channel targeting remote large language model (LLM)-based chatbots. Microsoft said the flaw does not break encryption. Instead, it exploits metadata in the network traffic, such as the size and timing of packets, which remains visible even when messages are protected by Transport Layer Security (TLS), the same encryption used in online banking.
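To see why metadata can leak despite encryption, consider a chatbot that streams its reply token by token, with each token sent in its own encrypted record. The sketch below is purely illustrative (it is not Microsoft's attack code, and the per-record overhead value is a made-up constant): an eavesdropper who sees only ciphertext lengths still obtains a sequence of sizes that differs between replies, which is the kind of fingerprint a classifier could learn.

```python
# Illustrative sketch, not Whisper Leak itself: per-record ciphertext
# sizes in a token-streaming chat leak structure even though the
# content is encrypted.

TLS_OVERHEAD = 29  # hypothetical constant per-record overhead, in bytes


def observed_sizes(tokens):
    """What a network eavesdropper sees: one record size per streamed token."""
    return [len(t.encode()) + TLS_OVERHEAD for t in tokens]


# Two simulated streamed replies on different topics.
reply_a = ["The", " capital", " of", " France", " is", " Paris", "."]
reply_b = ["Cryptocurrency", " tumbling", " services", " hide", " fund", " flows", "."]

sizes_a = observed_sizes(reply_a)
sizes_b = observed_sizes(reply_b)

# The eavesdropper never sees plaintext, yet the size sequences differ,
# giving a per-topic fingerprint.
print(sizes_a)
print(sizes_b)
```

The real attack, per Microsoft's description, trains machine-learning classifiers on such size-and-timing sequences rather than comparing them by hand, but the observable signal is the same.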

In a blog post, Microsoft said the new flaw could let ISPs, governments or anyone on the same Wi-Fi see what a user is discussing with an AI chatbot. The company stated that this vulnerability “poses real-world risks to users by oppressive governments where they may be targeting topics such as protesting, banned material, the election process, or journalism.”


Putting it in perspective, Microsoft said a government agency or internet service provider monitoring traffic to a popular AI chatbot could still identify users asking about sensitive topics, including money laundering, political dissent, or other monitored subjects, even though all the traffic is encrypted.

Microsoft researchers simulated an attacker who could see the encrypted traffic but not decrypt it. For many of the tested models, the attacker could flag target conversations with 100% precision, meaning every conversation it flagged was genuinely on the target topic, while still catching 5% to 50% of those conversations.

The tech giant has engaged in “responsible disclosures with affected vendors”. Among the platforms already deploying protective measures are OpenAI, Mistral, xAI and Microsoft Azure, the company said in the blog post.

Warning AI chatbot users, Microsoft added, “The cyberthreat could grow worse over time. Avoid discussing highly sensitive topics over AI chatbots when on untrusted networks.”

Microsoft advised users to take extra precautions when using AI chatbots: use a VPN for added protection, choose providers that have implemented security mitigations, opt for non-streaming modes of large language models, and stay informed about the security practices of their AI service providers.
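The non-streaming recommendation can be made concrete with a request sketch. The payload below follows the common OpenAI-style chat API shape; the field names and model name are assumptions for illustration, not a documented fix from any vendor.

```python
# Hedged sketch: requesting a complete (non-streamed) response means the
# reply arrives as one block instead of a token-by-token stream of
# observable record sizes. Field names follow the common OpenAI-style
# chat API shape; this is an assumption, not vendor guidance.

import json


def build_chat_request(prompt: str) -> dict:
    return {
        "model": "example-model",  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # disable streaming: no per-token packet pattern
    }


payload = build_chat_request("Hello")
print(json.dumps(payload))
```

Disabling streaming trades away the responsive typing effect for a coarser traffic pattern that reveals less per-token structure to an observer.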
