Whisper Leak Attack: Microsoft Warns About Unauthorised Access To Encrypted AI Chats By Hackers

Microsoft reveals a major security flaw that puts your AI chats at risk. (Source: Freepik)
  • Microsoft revealed that a vulnerability named Whisper Leak affects most server-based AI chatbots
  • The flaw exploits metadata visible in encrypted traffic, not breaking TLS encryption
  • Attackers can identify chatbot conversation topics on shared networks with high accuracy

Microsoft has disclosed a vulnerability affecting most server-based AI chatbots, which could allow hackers to identify the conversation topics on platforms such as ChatGPT and Gemini.

Named Whisper Leak, the flaw is a side-channel attack targeting remote large language model (LLM)-based chatbots. Microsoft said the flaw does not break encryption. Instead, it exploits metadata in network traffic, chiefly the size and timing of encrypted packets, which remains visible even when messages are protected by Transport Layer Security (TLS), the same encryption used in online banking.
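The core observation can be illustrated with a toy example (this is an illustrative sketch, not Microsoft's code or real TLS): stream-style encryption hides the content of each chunk but preserves its length, so a passive observer learns how big every streamed token is.

```python
import hashlib
import os

def keystream(key: bytes, n: int) -> bytes:
    """Toy SHA-256 counter-mode keystream (illustration only, not TLS)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # XOR with a keystream: the ciphertext is unreadable without the key,
    # but its length exactly matches the plaintext length, much as an
    # encrypted record tracks the size of the streamed token it carries.
    ks = keystream(key, len(plaintext))
    return bytes(p ^ k for p, k in zip(plaintext, ks))

key = os.urandom(32)
for token in ["The", " election", " process", " is"]:
    ct = encrypt(key, token.encode())
    print(len(ct))  # the observer learns each token's length without the key
```

Because the XOR construction is its own inverse, calling `encrypt` on the ciphertext with the same key recovers the plaintext; the point of the sketch is only that the lengths leak.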

In a blog post, Microsoft said the new flaw could let ISPs, governments or anyone on the same Wi-Fi see what a user is discussing with an AI chatbot. The company stated that this vulnerability “poses real-world risks to users by oppressive governments where they may be targeting topics such as protesting, banned material, the election process, or journalism.”

Putting it in perspective, Microsoft said a government agency or internet service provider monitoring traffic to a popular AI chatbot could still identify users asking about sensitive topics, including money laundering, political dissent, or other monitored subjects, even though all the traffic is encrypted.

Microsoft researchers simulated a situation where an attacker could observe the encrypted traffic but could not decrypt it. For many of the tested models, the attacker could flag target conversations with 100% precision, meaning no false alarms, while still catching 5% to 50% of those conversations.
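The shape of that experiment can be caricatured with a toy classifier. Everything below is hypothetical: the size distributions are invented, and the nearest-centroid rule is a stand-in for the trained machine-learning models the researchers actually used on real size-and-timing traces.

```python
import random

random.seed(0)

def stream_sizes(mean: float, n: int = 40) -> list:
    # Hypothetical per-chunk ciphertext sizes for one streamed reply;
    # the premise is that different topics yield different size profiles.
    return [max(1, int(random.gauss(mean, 2.0))) for _ in range(n)]

# Synthetic training traces for a "sensitive" topic vs everything else.
train = [(stream_sizes(12), "sensitive") for _ in range(20)]
train += [(stream_sizes(7), "other") for _ in range(20)]

# Nearest-centroid on mean chunk size: a deliberately crude stand-in
# for classifiers that learn from full packet-size and timing sequences.
centroids = {}
for label in ("sensitive", "other"):
    means = [sum(s) / len(s) for s, lab in train if lab == label]
    centroids[label] = sum(means) / len(means)

def classify(sizes: list) -> str:
    m = sum(sizes) / len(sizes)
    return min(centroids, key=lambda lab: abs(centroids[lab] - m))

print(classify(stream_sizes(12)))  # likely "sensitive"
```

Even this crude rule separates the two synthetic topics, which is the intuition behind the result: an eavesdropper never reads a word, yet the traffic's shape gives the topic away.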

The tech giant has engaged in “responsible disclosures with affected vendors”. Among the platforms already deploying protective measures are OpenAI, Mistral, xAI and Microsoft Azure, the company said in the blog post.

Warning AI chatbot users, Microsoft added, “The cyberthreat could grow worse over time. Avoid discussing highly sensitive topics over AI chatbots when on untrusted networks.”

Microsoft advised users to take extra precautions when using AI chatbots. It recommended using a VPN for added protection, choosing providers that have implemented security mitigations, opting for non-streaming modes of large language models, and staying informed about the security practices of their AI service providers.
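One style of provider-side mitigation is to mask chunk sizes with random filler before encryption, so observed record lengths no longer track token lengths. The helper below is a hypothetical illustration of that idea, not any vendor's actual code.

```python
import secrets

def pad_chunk(chunk: bytes, max_pad: int = 32) -> bytes:
    # Prepend a 1-byte pad length followed by random filler, so the size
    # of the (later encrypted) record no longer mirrors the token length.
    pad_len = secrets.randbelow(max_pad + 1)
    return bytes([pad_len]) + secrets.token_bytes(pad_len) + chunk

def unpad_chunk(padded: bytes) -> bytes:
    # The receiver reads the pad length and strips the filler.
    pad_len = padded[0]
    return padded[1 + pad_len:]

token = b" journalism"
padded = pad_chunk(token)
assert unpad_chunk(padded) == token
```

Padding trades a little bandwidth for resistance to size-based inference; batching or delaying chunks similarly blurs the timing side of the channel.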

