Microsoft says Office bug exposed customers’ confidential emails to Copilot AI
Summary
Since January, a Microsoft Copilot bug has been summarizing confidential customer emails without permission, bypassing data loss prevention policies. A fix is now rolling out.
Microsoft Copilot ignored confidential email labels
Microsoft confirmed that a bug allowed its Copilot AI to summarize confidential customer emails without authorization for several weeks. The flaw bypassed security protocols designed to keep sensitive corporate data out of large language models.
The company tracked the issue as bug CW1226324. It specifically affected draft and sent messages that users had marked with "confidential" labels. Copilot Chat incorrectly processed these messages despite active Data Loss Prevention (DLP) policies.
The privacy failure began in January and persisted until Microsoft started deploying a fix in early February. Microsoft has not disclosed exactly how many enterprise customers had their private data ingested by the AI during this window.
How the email bug worked
Microsoft 365 Copilot functions by "grounding" its responses in a user’s internal data, including files, calendar invites, and emails. Enterprise customers pay a premium for this service specifically because Microsoft promises that internal security labels will prevent sensitive data from leaking into the AI’s chat interface.
The bug broke this promise by ignoring the metadata attached to sensitive emails. When a user asked Copilot to summarize their recent communications, the AI pulled content from messages it was supposed to ignore, including both drafts and sent items that carried restrictive sensitivity labels.
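To make the failed check concrete, here is a minimal, hypothetical sketch of the kind of pre-grounding filter that is supposed to sit between a mailbox and the model. The class, field, and function names below are illustrative assumptions, not Microsoft's actual schema or implementation.

```python
from dataclasses import dataclass
from typing import Optional

# Illustrative message record; field names are assumptions, not Microsoft's schema.
@dataclass
class Message:
    subject: str
    body: str
    sensitivity_label: Optional[str]  # e.g. "Confidential", "General", or None

BLOCKED_LABELS = {"confidential", "highly confidential"}

def grounding_candidates(messages: list[Message]) -> list[Message]:
    """Return only the messages a DLP-style gate would allow into the AI's context."""
    allowed = []
    for msg in messages:
        label = (msg.sensitivity_label or "").lower()
        if label in BLOCKED_LABELS:
            continue  # labeled content should never reach the model
        allowed.append(msg)
    return allowed

if __name__ == "__main__":
    inbox = [
        Message("Q3 acquisition plan", "...", "Confidential"),
        Message("Lunch on Friday?", "...", None),
    ]
    for msg in grounding_candidates(inbox):
        print(msg.subject)  # prints only "Lunch on Friday?"
```

In effect, the behavior Microsoft describes for CW1226324 looks like this filtering step being skipped or mis-evaluated, so labeled drafts and sent items flowed into Copilot's summaries anyway.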
This failure undermines the core value proposition of Microsoft’s $30 per user, per month Copilot for Microsoft 365 subscription. Companies rely on DLP policies to ensure that trade secrets, legal documents, and private employee information remain restricted to specific departments.
Security policies failed to stop AI
Data Loss Prevention policies serve as the primary defensive layer for modern corporations. These rules typically prevent users from sharing sensitive files outside the company or uploading them to unauthorized cloud services. Microsoft’s marketing materials claim that Copilot respects these boundaries by design.
The CW1226324 bug proved that these boundaries are more porous than Microsoft previously suggested. By allowing the AI to read content labeled "confidential," Microsoft effectively bypassed the very security controls it sells to IT administrators. This creates a significant liability for firms in highly regulated sectors like finance and healthcare.
Administrators use the Microsoft Purview suite to manage these labels. Under normal circumstances, Purview should act as a hard gate for any generative AI tool. The fact that a bug could override these permissions suggests a fundamental gap in how Copilot interacts with the Microsoft Graph.
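As an illustration of that relationship, the short sketch below lists recent messages through the Microsoft Graph messages endpoint and flags the ones marked confidential, roughly the kind of spot audit an administrator might run. It assumes a valid Graph access token with Mail.Read permission (acquiring one, for example via MSAL, is omitted) and reads the classic Outlook sensitivity field as a stand-in; Purview sensitivity labels, the mechanism at issue here, are managed through separate Purview and Graph information-protection APIs.

```python
import requests

# Assumption: GRAPH_TOKEN holds a valid Microsoft Graph access token with Mail.Read.
GRAPH_TOKEN = "<access-token>"

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me/messages",
    headers={"Authorization": f"Bearer {GRAPH_TOKEN}"},
    params={"$select": "subject,sensitivity", "$top": "50"},
    timeout=30,
)
resp.raise_for_status()

# Flag messages whose classic Outlook sensitivity is set to "confidential".
for msg in resp.json().get("value", []):
    if msg.get("sensitivity") == "confidential":
        print("Marked confidential:", msg.get("subject"))
```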
- Affected Services: Microsoft Outlook, Copilot Chat, and Microsoft 365 web apps.
- Vulnerable Data: Draft emails, sent messages, and attachments with sensitivity labels.
- Duration: Approximately four to five weeks starting in January.
- Resolution: A server-side fix rolled out in early February.
Lawmakers block Microsoft AI tools
The European Parliament’s IT department has already taken action following concerns over AI data handling. Lawmakers recently received notice that built-in AI features are now blocked on their work-issued devices. The IT department cited the risk of confidential correspondence being uploaded to the cloud without oversight.
This ban reflects a growing distrust of how "black box" AI models handle sovereign data. While Microsoft offers "commercial data protection," the existence of this bug confirms that software errors can still expose data to the underlying model. Lawmakers are concerned that once information enters a chat history, it becomes much harder to purge.
The European Parliament is not alone in its skepticism. Several global banks and technology firms, including Samsung and Apple, have previously restricted the use of generative AI tools. These companies fear that employees might inadvertently paste proprietary code or strategic plans into prompts.
The high cost of privacy errors
Microsoft is currently pushing to integrate Copilot into every facet of the enterprise workflow. The company recently rebranded its "Bing Chat" as Copilot and launched a dedicated Copilot Pro tier for individuals. However, the enterprise version is where the real revenue lies, and that revenue depends entirely on trust.
Security researchers have warned that "prompt injection" and data leakage are the primary risks of the current AI boom. If an attacker or an unauthorized employee can trick the AI into summarizing a file they shouldn't see, the company's entire security architecture fails. This bug represents a passive version of that risk, where the AI simply ignored existing rules.
Microsoft's lack of transparency regarding the number of affected users is typical for the industry but frustrating for IT managers. Without a specific list of compromised accounts, companies are forced to conduct their own forensic audits to see if sensitive intellectual property was exposed to the AI's training or inference logs.
Microsoft faces a credibility crisis
This incident follows a string of high-profile security lapses at Microsoft. Last year, the Storm-0558 hack allowed Chinese attackers to access the email accounts of senior U.S. government officials. The Cyber Safety Review Board later issued a scathing report calling Microsoft’s security culture "inadequate."
CEO Satya Nadella has since announced the Secure Future Initiative (SFI). This program supposedly prioritizes security over new features. Yet, the rapid rollout of Copilot features across Word, Excel, and PowerPoint seems to conflict with that goal. The pressure to compete with Google Gemini and OpenAI is driving a release cycle that clearly leaves room for critical bugs.
For now, Microsoft says the email summarization bug is resolved. But for IT administrators, the damage to the "zero trust" model is significant. They must now decide if the productivity gains of AI summarization are worth the risk of a future bug exposing the company's most sensitive secrets.
Microsoft's current AI roadmap includes the following integrations:
- Outlook: Summarizing long email threads and drafting replies.
- Teams: Generating meeting minutes and action items in real-time.
- Excel: Analyzing complex data sets and generating formulas via natural language.
- Word: Writing entire reports based on internal company documents.
Each of these integrations provides a new vector for data leakage if security labels are not perfectly enforced. As Microsoft continues its AI-first pivot, the CW1226324 bug serves as a reminder that even the most expensive enterprise software is not immune to basic permission errors.
Related Articles
Open-source benchmark EVMbench tests how well AI agents handle smart contract exploits
EVMbench is an open-source benchmark from OpenAI and Paradigm that tests AI agents on detecting, patching, and exploiting real smart contract vulnerabilities. It uses 120 curated flaws to provide automated, repeatable evaluations of AI security analysis capabilities.
Poland bans camera-packing cars made in China from military bases
Poland banned Chinese cars and those with recording tech from military facilities due to data security risks. A vetting process for carmakers is planned.