Microsoft bug exposes confidential emails to Copilot AI
Photo by Maxim Hopman on Unsplash
Microsoft pitched Copilot as your trusted AI assistant, a guardian of productivity. But according to TechCrunch AI, a bug betrayed that promise, exposing customers’ confidential emails for weeks and letting the AI freely summarize private information it was explicitly told to leave alone.
Key Facts
- Key company: Microsoft
- The bug, tracked as CW1226324, let Microsoft 365 Copilot chat read and summarize emails labeled "confidential," bypassing data loss prevention (DLP) policies
- The flaw was reportedly active since January; Microsoft began rolling out a fix in February
- Microsoft has not said how many customers were affected
The flaw, which Microsoft has tracked as CW1226324, specifically bypassed data loss prevention (DLP) policies that organizations had set up to keep sensitive information out of the AI's reach. According to TechCrunch AI, this meant that draft and sent emails marked with a "confidential" label were incorrectly processed by Microsoft 365 Copilot chat, a feature available to paying enterprise customers. The AI assistant, integrated into Office applications like Word and Excel, was then able to read and summarize the contents of these private communications upon user request.
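To make the failure concrete, here is a minimal Python sketch of the kind of gate a DLP policy is supposed to put between labeled content and an AI assistant. Everything in it is an assumption for illustration: the Email type, the label values, and the function names are invented for this example, not Microsoft's actual implementation. The reported bug behaved as if this check passed, or never ran, for confidential-labeled messages.

```python
from dataclasses import dataclass
from typing import Callable, Optional

# Hypothetical label values, loosely modeled on sensitivity labels in
# Microsoft Purview. The names here are illustrative, not Microsoft's.
BLOCKED_LABELS = {"confidential", "highly confidential"}

@dataclass
class Email:
    subject: str
    body: str
    sensitivity_label: Optional[str]  # None if the message is unlabeled

def dlp_allows_ai_processing(email: Email) -> bool:
    """Return True only if policy permits the assistant to read this email.

    The reported bug was equivalent to this gate answering True (or never
    running at all) for messages carrying a "confidential" label.
    """
    if email.sensitivity_label is None:
        return True
    return email.sensitivity_label.lower() not in BLOCKED_LABELS

def summarize_for_user(email: Email, summarize: Callable[[str], str]) -> str:
    # The guardrail must run before the content ever reaches the model.
    if not dlp_allows_ai_processing(email):
        return "This item is protected by your organization's policies."
    return summarize(email.body)
```

Under a model like this, a user asking Copilot to summarize a confidential draft would get the refusal message, not a summary; the bug inverted that outcome.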
Microsoft confirmed the existence of the bug and said it began rolling out a fix earlier in February. The company's acknowledgment, however, came with notable silence on the scale of the incident. As reported by TechCrunch AI, a Microsoft spokesperson did not respond to requests for comment on how many customers may have had their data exposed or exactly how long the vulnerability was active. The bug's reported activity since January suggests a window of at least several weeks during which confidential information was potentially accessible.
This incident arrives at a moment of heightened scrutiny for AI tools in enterprise environments. Just this week, the European Parliament’s IT department made a decisive move, blocking built-in AI features on work-issued devices for lawmakers. Their concern, as noted in the TechCrunch report, was the potential for AI tools to upload confidential correspondence to cloud servers without proper oversight—a fear that Microsoft’s bug has now substantiated.
The technical breakdown highlights a critical tension in the rush to integrate generative AI into every facet of productivity software. These systems are designed to be helpful, to digest and summarize information to save time. But that core function is a direct threat to data governance when safeguards fail. A feature meant to empower employees inadvertently became a potential data exfiltration tool, turning Copilot from a guardian of productivity into a liability.
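One way to read that tension is as a question of failure posture. Continuing the hypothetical gate sketched above, a fail-closed design denies the AI access whenever the policy check itself breaks, while a fail-open design quietly keeps processing. This is a design illustration, not a claim about how Copilot's pipeline is actually built.

```python
import logging

logger = logging.getLogger("dlp-gate")

def ai_may_read(item, policy_check) -> bool:
    """Fail-closed wrapper around a DLP policy check.

    If the check itself breaks (misconfiguration, a backend outage, or a
    bug like the one described above), the safe answer is "no": the model
    never sees the content. A fail-open design defaults to processing
    instead, which is how a policy bug turns into a data leak.
    """
    try:
        return bool(policy_check(item))
    except Exception:
        logger.exception("DLP check failed; denying AI access by default")
        return False
```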
This is not an isolated concern in the burgeoning field of corporate AI. Additional coverage from Ars Technica and ZDNet points to a broader pattern of security challenges. ZDNet’s report on “exploitable agents” from Microsoft and ServiceNow describes a growing, and preventable, AI security crisis, warning that once deployed on corporate networks, these powerful tools can become a threat actor’s fantasy if their privileges are not strictly limited.
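The "strictly limited privileges" that warning points to usually means an explicit allowlist. A deliberately simple sketch, with hypothetical resource and action names, of what that looks like in practice; neither Microsoft's nor ServiceNow's agents are claimed to be configured this way:

```python
# Hypothetical least-privilege grant for a deployed agent: every action it
# may take on the corporate network is enumerated up front, and anything
# outside that grant is refused.
ALLOWED_ACTIONS = {
    ("calendar", "read"),
    ("tickets", "create"),
    # Deliberately absent: ("mail", "read"), ("files", "delete"), ...
}

def invoke(resource: str, action: str, handler, *args):
    """Run handler only if (resource, action) is explicitly granted."""
    if (resource, action) not in ALLOWED_ACTIONS:
        raise PermissionError(f"agent has no grant for {action} on {resource}")
    return handler(*args)
```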
Microsoft’s challenge now is one of trust. The company has heavily marketed Copilot as an indispensable, intelligent partner for business, a system that understands the boundaries of your work. A bug that allows it to ignore the most fundamental of those boundaries—the "confidential" label—strikes at the very value proposition it is selling. For IT administrators who spent time carefully configuring DLP policies to comply with industry regulations, the revelation that a backend error could nullify their work is a nightmare scenario.
The quiet rollout of a fix is a technical solution, but the broader question of how such a flaw slipped through testing and remained active for weeks remains unanswered. Without clarity on the scope of the impact, customers are left to wonder if their most sensitive emails were briefly, and silently, made available to an AI that was supposed to protect them. In the high-stakes game of enterprise security, trust is the most valuable feature, and it’s one that is much harder to patch.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.