Microsoft has confirmed that a software bug in Microsoft 365 allowed its Copilot AI to summarize customers’ confidential emails for weeks, despite data protection settings intended to block that processing. The issue, flagged to administrators under advisory ID CW1226324, affected Copilot Chat across Office apps and prompted an accelerated fix rollout.
What Microsoft Says Happened During the Copilot Email Bug
According to Microsoft’s advisory, draft and sent messages labeled as confidential were “incorrectly processed by Microsoft 365 Copilot chat,” enabling the AI assistant to read and outline content that should have been off-limits. The company began deploying a fix and says remediation is in progress across affected tenants. Microsoft has not disclosed how many customers were impacted.
The exposure came to light after administrators noticed Copilot returning summaries of protected emails, a behavior first reported by BleepingComputer. While Copilot for Microsoft 365 is designed to respect sensitivity labels and data loss prevention (DLP) rules, this bug appears to have bypassed those controls for certain labeled messages.
How the bug slipped past data protections and labels
In normal operation, Microsoft Purview Information Protection and DLP policies gate what Copilot can retrieve when grounded in a user’s Microsoft Graph data. Labels like “Confidential” or “Highly Confidential” typically restrict processing to prevent exactly the kind of summarization that occurred here. Microsoft’s notice indicates the misbehavior centered on how labeled email content was evaluated before being passed to Copilot Chat, resulting in unauthorized processing rather than a misconfiguration by customers.
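To make that control concrete, the following is a minimal Python sketch of the kind of pre-retrieval label gate an enforcement layer is expected to apply before mail content is grounded into an assistant. It is purely illustrative and not Microsoft's internal pipeline; the Message structure, the label names, and the blocked-label policy are assumptions for the example.

```python
from dataclasses import dataclass

# Conceptual sketch only: a pre-retrieval label gate, not Microsoft's
# actual Copilot grounding pipeline. Names and labels are assumed.

@dataclass
class Message:
    subject: str
    body: str
    sensitivity_label: str | None  # e.g. "Confidential" (assumed label names)

# Labels that should never reach the assistant (assumed policy).
BLOCKED_LABELS = {"Confidential", "Highly Confidential"}

def filter_for_grounding(messages: list[Message]) -> list[Message]:
    """Return only messages whose labels permit AI processing.

    The reported bug is equivalent to a check like this being skipped or
    evaluating labels incorrectly, so labeled mail reached the model.
    """
    allowed = []
    for msg in messages:
        if msg.sensitivity_label in BLOCKED_LABELS:
            continue  # drop protected content before grounding
        allowed.append(msg)
    return allowed

if __name__ == "__main__":
    inbox = [
        Message("Q3 forecast", "...", "Confidential"),
        Message("Lunch plans", "...", None),
    ]
    print([m.subject for m in filter_for_grounding(inbox)])  # ['Lunch plans']
```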
It is important to distinguish between processing and disclosure. Current evidence points to Copilot presenting summaries back to the querying user within the same tenant, not broadcasting contents across organizations. Even so, the action breached policy boundaries that many enterprises rely on for regulatory compliance and internal governance.
Who was affected by the bug and how long it lasted
The bug impacted paying Microsoft 365 customers using Copilot Chat in Office apps such as Outlook, Word, Excel, and PowerPoint. Administrators reported the behavior persisting for several weeks before Microsoft initiated its fix. There is no public indication of cross-tenant leakage, but organizations with shared mailboxes, delegated access, or role-based mailbox viewing rights may have faced broader internal exposure via AI-generated summaries.
Microsoft emphasizes that Copilot for Microsoft 365 does not use customer data to train foundation models, a safeguard that reduces the risk of persistent data retention outside a tenant boundary. Nonetheless, the incident reinforces that enforcement points around labeling and DLP must work flawlessly to prevent unintended processing.
Enterprise and regulatory fallout from the incident
The timing aligns with rising institutional caution around embedded AI. The European Parliament’s IT department recently disabled built-in AI features on lawmakers’ devices over concerns that sensitive correspondence could be uploaded and processed in the cloud. Incidents like this will likely intensify scrutiny from data protection officers and regulators, especially under regimes such as the GDPR where processing beyond stated purposes can trigger notification, assessment, or enforcement obligations.
For heavily regulated sectors—financial services, healthcare, public sector—the episode will fuel board-level questions about AI guardrails, auditability, and the reliability of sensitivity labels as a control layer. It also underscores the need to validate vendor assurances with hands-on testing and continuous monitoring.
Risk mitigation steps for customers and next actions
Security leaders should verify the status of advisory CW1226324 in the Microsoft 365 admin center and confirm that remediation has reached their tenant. Where feasible, temporarily restricting Copilot Chat for high-risk groups or highly sensitive mailboxes can reduce exposure while the fix is validated.
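For teams that prefer a programmatic check over clicking through the admin center, the Microsoft Graph service communications API can be polled for an issue's status. The sketch below assumes the advisory is exposed as a service health issue retrievable by its ID and that an app registration with the ServiceHealth.Read.All permission already supplies an access token via an environment variable; if the advisory surfaces only in the Message center or the admin center UI, check there instead.

```python
import os
import requests

# Minimal sketch: poll Microsoft Graph for the status of a service health
# issue. Assumes a token with ServiceHealth.Read.All in GRAPH_TOKEN and
# that the advisory is exposed as a service health issue with this ID
# (an assumption).
ADVISORY_ID = "CW1226324"
TOKEN = os.environ["GRAPH_TOKEN"]

url = f"https://graph.microsoft.com/v1.0/admin/serviceAnnouncement/issues/{ADVISORY_ID}"
resp = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}, timeout=30)
resp.raise_for_status()
issue = resp.json()

# 'status' and 'lastModifiedDateTime' indicate whether and when Microsoft
# last updated the advisory for your tenant.
print(issue.get("title"))
print(issue.get("status"), issue.get("lastModifiedDateTime"))
```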
Auditors should review Microsoft Purview audit logs for anomalous Copilot Chat interactions involving labeled content and re-run DLP policy match reports to identify messages that may have been summarized. Reassessing sensitivity label scoping, enforcing conditional access for Copilot features, and tightening privileges around shared or delegated mailboxes can further minimize risk.
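One practical way to triage at scale is to export the relevant audit search results from the Purview portal and filter them offline. The sketch below assumes a CSV export whose AuditData column holds one JSON record per row, and that Copilot events carry an Operation value of CopilotInteraction with accessed-resource entries that include a sensitivity label ID; those field names are assumptions and should be verified against an actual export from your tenant.

```python
import csv
import json

# Sketch: filter an exported Purview audit CSV for Copilot interactions that
# touched labeled resources. Column and field names ("AuditData", "Operation",
# "CopilotEventData", "AccessedResources", "SensitivityLabelId") are assumed
# from a typical export and should be checked against your tenant's data.
EXPORT_FILE = "audit_export.csv"  # exported from Purview audit search (assumed name)

suspect_records = []
with open(EXPORT_FILE, newline="", encoding="utf-8-sig") as fh:
    for row in csv.DictReader(fh):
        try:
            data = json.loads(row.get("AuditData", "{}"))
        except json.JSONDecodeError:
            continue
        if data.get("Operation") != "CopilotInteraction":
            continue
        resources = data.get("CopilotEventData", {}).get("AccessedResources", [])
        labeled = [r for r in resources if r.get("SensitivityLabelId")]
        if labeled:
            suspect_records.append(
                {
                    "user": data.get("UserId"),
                    "time": data.get("CreationTime"),
                    "labeled_resources": [r.get("Name") for r in labeled],
                }
            )

print(f"{len(suspect_records)} Copilot interactions touched labeled content")
for rec in suspect_records[:20]:
    print(rec)
```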
Finally, communicate clearly with employees: remind users not to query AI with information beyond their role, and establish a rapid channel for reporting unexpected Copilot behavior. If confidential material may have been surfaced, consider targeted reclassification, revocation of shared access, and, where applicable, legal or regulatory notifications guided by counsel.
Why this Copilot incident matters for AI in the office
Generative AI’s value inside productivity suites depends on invisible policy checks that run before a model ever sees user data. When those checks fail, even briefly, organizations face real governance exposure. The Copilot incident is a reminder that AI adoption must go hand in hand with rigorous testing of label enforcement, layered controls beyond labels, and continuous validation that vendor fixes actually work in production.
Enterprises will keep deploying AI because the productivity upside is substantial. But the path forward demands robust AI governance—clear data boundaries, least-privilege access for assistants, and a feedback loop between security teams and line-of-business users. Microsoft’s fix may close this specific flaw; the larger lesson is to treat AI safeguards as critical infrastructure, not optional settings.