Microsoft Says Bug Led Copilot To Summarise Confidential Emails For Weeks

The bug had caused Microsoft's Copilot Chat to read and outline the contents of confidential emails since January.


Microsoft has confirmed that a bug allowed its AI assistant, Copilot, to summarise confidential emails for weeks without permission. 

The bug, first reported by BleepingComputer, caused Microsoft's Copilot Chat to read and outline the contents of confidential emails since January, even bypassing the data loss prevention (DLP) policies that organisations use to protect sensitive information.

Microsoft 365 Copilot Chat is the company's artificial intelligence-powered, content-aware chat that enables user interaction with AI agents. In September 2025, the firm started rolling out Copilot Chat to PowerPoint, Word, Excel, Outlook, and OneNote for paying Microsoft 365 business customers.


What Happened To Copilot Chat?

As per BleepingComputer, the bug, tracked internally as CW1226324, was first detected on January 21. It affects the AI assistant's "work tab" chat feature, causing it to incorrectly read and summarise emails stored in users' Sent Items and Drafts folders, including messages with confidentiality labels explicitly designed to restrict access by automated tools.

Microsoft has not revealed how many users or firms were affected by the bug. It only stated that the scope of impact may change as the company's probe continues.


What Microsoft Said On The Matter

The tech giant confirmed the issue and said, "Users' email messages with a confidential label applied are being incorrectly processed by Microsoft 365 Copilot chat," as per BleepingComputer.

Microsoft stated that an unspecified code error was responsible for the bug. It added that it had started rolling out a fix in early February. As of February 18, the company was monitoring the deployment of the fix and was reaching out to a subset of affected users to check that the solution was working.


The American tech giant has not provided a final timeline for the full resolution. 

The news comes as the European Parliament reportedly blocked lawmakers from using built-in AI tools on their work devices, citing cybersecurity and privacy concerns.

An email from the parliament's IT department, seen by Politico, said that the department could not guarantee the security of the AI tools' data.

The email read, "As these features continue to evolve and become available on more devices, the full extent of data shared with service providers is still being assessed. Until this is fully clarified, it is considered safer to keep such features disabled."

The European Parliament's move to switch off AI tools was focused on built-in features such as writing and summarising assistants, webpage summaries on both phones and tablets, and enhanced virtual assistants, an EU official told Politico. Apps, calendar, documents, email, and other day-to-day tools were not affected, the email to lawmakers stated.


