Microsoft AI Reads Executives’ Confidential Emails for a Month, Bypassing Its Own Security Controls
30 days. That is how long Microsoft’s Copilot accessed confidential executive emails despite Microsoft Information Protection labels, according to a recent report.
Key Facts
- Key company: Microsoft
Microsoft’s internal investigation traced the breach to a specific bug—identified as CW1226324—in the Copilot Chat data‑retrieval pipeline. The defect caused the AI to ignore Microsoft Information Protection (MIP) sensitivity labels on emails stored in the Sent Items and Drafts folders, allowing any message marked “Confidential,” “Highly Confidential,” or “Internal Only” to be indexed, summarized, and potentially displayed in Copilot responses (Moth, Mar 8). Because the labels remained technically present, the failure was not a lapse in the labeling system itself but a mismatch between the DLP enforcement layer and Copilot’s access logic. The company has not disclosed how many tenants or users were exposed, nor whether any confidential content actually surfaced to unauthorized recipients during the month the bug was active. Microsoft’s fix is still being monitored for completeness, and the firm has not confirmed any exfiltration of data (Moth, Mar 8).
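To make the reported failure mode concrete, the following is a minimal, purely hypothetical Python sketch of the kind of label gate that, per the report, was skipped for Sent Items and Drafts. The class, the label strings, and the function names are illustrative assumptions, not Microsoft’s actual implementation.

```python
# Hypothetical sketch of a label-enforcement gate in an AI retrieval pipeline.
# Everything here is illustrative; it is not Microsoft's actual code.

from dataclasses import dataclass

# Labels that, under a typical MIP-style policy, should block AI indexing.
BLOCKED_LABELS = {"Confidential", "Highly Confidential", "Internal Only"}

@dataclass
class EmailItem:
    folder: str             # e.g. "Inbox", "Sent Items", "Drafts"
    sensitivity_label: str  # MIP-style label attached to the message
    body: str

def eligible_for_ai_index(item: EmailItem) -> bool:
    """Return True only if the item's label permits AI retrieval.

    The reported bug amounts to a check like this being bypassed for
    Sent Items and Drafts, even though the label itself stayed attached.
    """
    return item.sensitivity_label not in BLOCKED_LABELS

def build_index(items: list[EmailItem]) -> list[EmailItem]:
    # Enforcement has to apply uniformly across folders; exempting some
    # folders reproduces the mismatch described in the report.
    return [item for item in items if eligible_for_ai_index(item)]

mailbox = [
    EmailItem("Inbox", "General", "Quarterly all-hands agenda"),
    EmailItem("Sent Items", "Highly Confidential", "Board compensation memo"),
    EmailItem("Drafts", "Confidential", "Draft merger term sheet"),
]
print([item.body for item in build_index(mailbox)])
# -> ['Quarterly all-hands agenda']
```

Note that in this sketch the labels are still present on every item; only the gate that consults them fails, which matches the report’s description of an enforcement mismatch rather than a labeling lapse.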
The incident underscores a structural risk that differs from traditional zero‑day exploits: an enterprise AI product inadvertently bypassing the vendor’s own security controls. For Fortune 500 customers, the value proposition of Microsoft’s compliance stack hinges on the guarantee that sensitivity‑labeled documents remain invisible to AI assistants. When that guarantee fails silently, the incident does not fit the classic definition of a data breach, since no external actor stole files; instead it creates an “unintentional insider” scenario in which Copilot could surface fragments of board‑level compensation memos, merger discussions, or regulatory filings to users who are authorized to use Copilot but not to view those specific documents (Moth, Mar 8). The risk is therefore internal leakage through AI‑generated summaries rather than outright theft.
Industry analysts note that this is the third high‑profile case of enterprise AI tools sidestepping their own access controls. Palo Alto Networks’ Unit 42 2026 Global Incident Response Report, which examined more than 750 incidents, found that 99% of cloud identities held excessive permissions and that the average breach‑to‑exfiltration window had shrunk to 72 minutes, down from 285 minutes a year earlier, partly because AI accelerates data movement (Moth, Mar 8). The report also highlighted that identity misconfigurations contributed to nearly 90% of incidents, suggesting that AI assistants with broader access than the humans they serve amplify every prompt into a potential escalation (Moth, Mar 8). While 62% of surveyed firms reported deep‑fake‑related attacks in the past year, the Copilot bug represents a distinct failure mode: the vendor’s own product unintentionally violating the data‑segmentation policies it was sold to enforce.
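One structural mitigation for that escalation pattern is on‑behalf‑of scoping: an assistant’s effective access is the intersection of its own grants and those of the requesting user, so a prompt can never reach more than the human behind it could. The sketch below illustrates the idea with assumed scope names; it does not reflect Microsoft’s or any vendor’s actual authorization model.

```python
# Hypothetical on-behalf-of scoping: the assistant's effective access is the
# intersection of its own grants and the requesting user's. Scope names are
# assumptions for illustration, not real Microsoft Graph permissions.

def effective_scopes(assistant_scopes: set[str], user_scopes: set[str]) -> set[str]:
    """An AI request should carry no more scope than the human behind it."""
    return assistant_scopes & user_scopes

assistant = {"mail.read", "files.read", "calendar.read"}
user = {"mail.read", "calendar.read"}  # licensed for Copilot, no file access

print(sorted(effective_scopes(assistant, user)))
# -> ['calendar.read', 'mail.read']
```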
The BBC’s coverage confirmed that Microsoft publicly acknowledged the error, noting that the AI tool had “exposed confidential emails” and that the company was working to remediate the issue (BBC). Wired’s reporting on related Microsoft AI mishaps, such as the Recall tool inadvertently ingesting credit‑card data, illustrates a broader pattern of overreach by Microsoft’s generative‑AI features across its ecosystem (Wired). Together, these accounts suggest that Microsoft’s rapid rollout of Copilot may have outpaced the maturation of its governance mechanisms, leaving large enterprises exposed to compliance and legal liabilities despite paying premium prices for the MIP suite.
For customers, the episode raises immediate questions about contractual assurances and auditability. If sensitivity labels cannot be trusted to block AI indexing, organizations may need supplemental controls, such as manually segregating high‑risk content or temporarily disabling Copilot for certain mailboxes, until Microsoft can demonstrate that the fix restores label enforcement across all folder types. The incident also pressures Microsoft to publish transparent metrics on the scope of exposure and to accelerate independent security reviews of its AI pipelines, lest the trust that underpins its enterprise revenue stream erode further.
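Until such metrics exist, a customer‑side safeguard is to audit whatever the AI layer has indexed against the labels that should have excluded it. The sketch below assumes a hypothetical export of indexed items with their label metadata attached; the field names are illustrative, not a real Microsoft API or export format.

```python
# Hypothetical customer-side audit: flag indexed items whose sensitivity
# labels should have excluded them from AI retrieval. Field names are
# illustrative assumptions, not a real export format or API.

BLOCKED_LABELS = {"Confidential", "Highly Confidential", "Internal Only"}

def audit_index(indexed_items: list[dict]) -> list[dict]:
    """Return indexed items that a correct enforcement layer would have blocked."""
    return [item for item in indexed_items if item.get("label") in BLOCKED_LABELS]

violations = audit_index([
    {"id": "m1", "folder": "Inbox", "label": "General"},
    {"id": "m2", "folder": "Sent Items", "label": "Highly Confidential"},
    {"id": "m3", "folder": "Drafts", "label": "Confidential"},
])
for v in violations:
    # In practice: quarantine the item and alert the compliance team.
    print(f"Label violation in index: {v['id']} ({v['folder']}, {v['label']})")
```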
Sources
No primary source found (coverage-based)
- Dev.to AI Tag
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.