Copilot spills the beans, summarizing emails it's not supposed to read
Summary
Microsoft 365 Copilot Chat summarizes confidential emails, bypassing sensitivity labels and data loss prevention policies. Microsoft acknowledged the "code issue" and is working on a fix.
Microsoft confirms Copilot security failure
Microsoft 365 Copilot Chat is summarizing emails marked as confidential even when administrators have configured data loss prevention (DLP) policies to block such actions. Microsoft acknowledged the security flaw in a notice to Office admins tracked as CW1226324, which first appeared on the UK National Health Service support portal. Customers originally reported the vulnerability on January 21, 2026, after noticing the AI bot accessing restricted content.
The flaw specifically affects the Microsoft 365 Copilot "Work tab" within the chat interface. While organizations use sensitivity labels to prevent unauthorized data access, the AI tool currently bypasses these protections during active chat sessions. This failure allows the bot to process and display summaries of private communications that the system should ignore.
This security gap validates recent concerns from corporate boards regarding the rapid deployment of generative AI. Recent regulatory filings show that 72 percent of S&P 500 companies now list artificial intelligence as a material risk to their business operations. Copilot's ability to circumvent internal security controls highlights the difficulty of maintaining control over corporate data in an AI-driven environment.
Sensitivity labels fail in chat
Microsoft allows organizations to apply sensitivity labels manually or automatically to files and emails to comply with information security policies. These labels act as the primary defense against internal data leaks by restricting how different applications handle specific documents. However, Microsoft admitted that these labels do not function consistently across the entire 365 ecosystem.
Internal documentation confirms that while a sensitivity label might exclude content from Copilot in specific Office apps like Word or Excel, the content remains visible to the bot in other scenarios. This inconsistency creates a security loophole where data protected in a spreadsheet remains accessible via a Teams chat or the Copilot Work tab. The AI effectively pulls data from the Microsoft Graph, which may not always respect the same granular permissions as individual desktop applications.
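As a rough illustration of that exposure path, the sketch below uses Python and the Microsoft Graph mail API to list messages in a user's Sent Items folder. It assumes an app registration with an application token carrying the Mail.Read permission (the token string and user address are placeholders), and it reads only Outlook's legacy `sensitivity` field rather than Purview sensitivity labels, so it is a spot-check aid, not a reproduction of the flaw.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<application token with Mail.Read>"  # placeholder, not a real credential

def list_sent_items(user_id: str, top: int = 25) -> list:
    """Return basic metadata for messages in a user's Sent Items folder via Microsoft Graph."""
    url = f"{GRAPH}/users/{user_id}/mailFolders/sentitems/messages"
    params = {"$select": "id,subject,sensitivity", "$top": str(top)}
    resp = requests.get(url, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}, params=params)
    resp.raise_for_status()
    return resp.json()["value"]

if __name__ == "__main__":
    # Flag anything Graph reports above the default sensitivity; "sensitivity" here is
    # Outlook's legacy flag (normal/personal/private/confidential), not a Purview label.
    for msg in list_sent_items("legal-team@contoso.example"):
        if msg.get("sensitivity", "normal") != "normal":
            print(f"Reachable via Graph: {msg['subject']} ({msg['sensitivity']})")
```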
The following Microsoft 365 components are currently involved in the data processing failure:
- Microsoft 365 Copilot Chat (Work tab): The primary interface where confidential summaries appear.
- Microsoft Purview: The compliance engine meant to enforce DLP policies.
- Exchange Online: The email hosting service where the affected "Sent Items" and "Drafts" reside.
- Microsoft Teams: The collaboration platform where Copilot Chat often operates.
Microsoft Purview is designed to monitor and protect against oversharing across enterprise applications and endpoint devices. It targets locations like SharePoint and Exchange to ensure that sensitive data does not leave the organization. In this instance, the policy engine failed to communicate the restriction to Copilot's large language model (LLM) during the indexing process.
Technical flaws in Sent Items and Drafts
Microsoft identified the root cause of the leak as a code issue within the Copilot indexing service. This bug specifically allows items in the Sent Items and Drafts folders to be processed by the AI regardless of their sensitivity labels. Because many users store sensitive negotiations or legal strategies in their drafts, this vulnerability poses a significant risk for corporate espionage or accidental internal leaks.
The failure to respect DLP policies in these folders means that any employee with a Copilot license could potentially query the bot for information contained in restricted drafts. If an executive drafts a confidential layoff notice or a merger agreement, the bot might include those details in a summary provided to a lower-level staffer. This bypasses the least-privilege access model that most IT departments strive to maintain.
Microsoft 365 Copilot relies on a semantic index to map relationships between different pieces of corporate data. This index creates a sophisticated map of an organization's internal knowledge, but it also creates a single point of failure if the security filters do not work correctly. The current code issue suggests that the semantic indexer is prioritizing data retrieval over the policy constraints set by Microsoft Purview.
Corporate data remains at risk
The lack of a consistent security posture across the Microsoft 365 suite complicates the job of IT administrators. While Microsoft markets Copilot as an enterprise-grade tool with built-in protections, the reality of the CW1226324 notice suggests those protections are still under construction. Administrators must now manually verify whether their DLP policies are actually preventing AI data scraping.
Organizations often rely on automatic labeling to manage thousands of emails and documents generated daily. If the AI ignores these labels, the entire automated compliance framework collapses. This forces companies to choose between the productivity gains of AI and the legal necessity of data privacy. Many firms may choose to disable the "Work tab" entirely until Microsoft provides a verified fix.
The current situation highlights several key risks for enterprise users:
- Unauthorized Data Aggregation: AI can connect disparate pieces of confidential data that a human would never find.
- Policy Circumvention: Established DLP rules for Exchange do not automatically translate to AI chat interfaces.
- Compliance Violations: Regulated industries like healthcare and finance may face fines if AI summarizes protected health information (PHI).
- Inconsistent Enforcement: Security settings applied in the Outlook desktop app do not reliably carry over to the Copilot web interface.
Microsoft has not yet provided a definitive timeline for a full remediation of the code issue. The company stated it is currently contacting affected customers to verify the effectiveness of temporary patches. Until a permanent fix arrives, the "confidential" label provides no guarantee that Copilot will keep a secret.
Microsoft works on a fix
The company is currently in the process of remediating the code issue to ensure that Sent Items and Drafts folders respect sensitivity labels. Microsoft engineers are reportedly testing a fix that forces the Copilot chat interface to re-verify DLP permissions before generating a response. This would prevent the bot from pulling data from any source marked with a restrictive policy.
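Microsoft has not described the fix's internals, but the behavior it outlines resembles a policy gate applied to retrieved content before the model sees it. The following Python fragment is a purely conceptual sketch of that pattern, with invented names and a hard-coded label list, not Copilot's actual code:

```python
from dataclasses import dataclass

# Hypothetical set of sensitivity labels a DLP policy blocks from AI processing.
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

@dataclass
class IndexedItem:
    source: str             # e.g. "SentItems" or "Drafts"
    sensitivity_label: str  # label stamped on the original message
    text: str

def policy_gate(items: list) -> list:
    """Drop any retrieved item whose label is blocked before it reaches the model."""
    return [item for item in items if item.sensitivity_label not in BLOCKED_LABELS]

def summarize(items: list) -> str:
    allowed = policy_gate(items)
    if not allowed:
        return "No summary available: every retrieved item is restricted by policy."
    # A real assistant would pass `allowed` to the language model; this just echoes snippets.
    return " / ".join(item.text[:60] for item in allowed)
```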
IT admins can track the progress of this fix through the Microsoft 365 Admin Center. The company has encouraged users to report any further instances where confidential data appears in chat summaries. Microsoft did not respond to requests for comment regarding why these labels were not functioning as intended at launch.
For now, the burden of security falls back on the administrators. Organizations should consider auditing their Microsoft Purview settings and potentially restricting Copilot access for users who handle the most sensitive data. The incident serves as a reminder that AI integration often moves faster than the security frameworks designed to contain it.
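One practical first step for such an audit is to enumerate who currently holds a Copilot license and cross-check the list against roles that handle sensitive material. The sketch below queries Microsoft Graph for users assigned a given SKU; the access token and the SKU GUID are placeholders, since the Copilot SKU ID depends on the tenant's licensing agreement.

```python
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
ACCESS_TOKEN = "<application token with User.Read.All>"    # placeholder, not a real credential
COPILOT_SKU_ID = "00000000-0000-0000-0000-000000000000"    # placeholder; use the tenant's actual SKU GUID

def users_with_sku(sku_id: str) -> list:
    """List users whose assigned licenses include the given SKU, via Microsoft Graph."""
    params = {
        "$filter": f"assignedLicenses/any(s:s/skuId eq {sku_id})",
        "$select": "displayName,userPrincipalName,department",
    }
    resp = requests.get(
        f"{GRAPH}/users",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        params=params,
    )
    resp.raise_for_status()
    return resp.json()["value"]

if __name__ == "__main__":
    for user in users_with_sku(COPILOT_SKU_ID):
        print(user["userPrincipalName"], "-", user.get("department", "unknown department"))
```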
The CW1226324 incident will likely prompt a broader review of how Microsoft handles cross-app data permissions. As the company pushes Copilot into every corner of the Office suite, the potential for similar "code issues" remains high. Enterprise customers will likely demand more transparency regarding how the semantic index interacts with sensitivity labels moving forward.
