Anthropic’s Official MCP Servers Fail Audit, Exposing a Major Compliance Gap
18,000+ MCP servers power Claude Code, Cursor and other AI tools, yet a recent audit found Anthropic’s official servers fail compliance checks, exposing a major regulatory gap.
Key Facts
- Key company: Anthropic
The audit, conducted with the open-source mcp-security-audit tool, examined Anthropic's official Model Context Protocol (MCP) servers, part of an ecosystem of more than 18,000 MCP servers that power Claude Code, Cursor, Windsurf and other developer-focused AI products. According to the report posted by researcher manja316 on March 5, six of the servers returned clean scores, but two failed compliance checks and one, designated "server-filesystem", scored a disastrous 7 out of 100, earning an F grade (manja316). The server-filesystem endpoint exposes fourteen filesystem tools, including read_file, write_file, delete_file and move_file, yet the audit uncovered seven critical findings that directly contravene multiple provisions of the forthcoming EU AI Act.
First, thirteen of the fourteen tools lack any descriptive metadata. Tool descriptions are the primary mechanism by which a large language model (LLM) interprets a tool’s purpose and safe usage; without them the model operates on guesswork. The report flags this omission as a violation of Article 13 (Transparency), which requires that users be able to assess a system’s capabilities and limitations. Second, the audit identified 28 string parameters across the filesystem tools that have no input constraints—no regex patterns, length limits, or enumerated whitelists. Unconstrained path strings enable the LLM to construct arbitrary file paths, including directory‑traversal attacks such as ../../etc/passwd. This breaches Article 15 (Accuracy, Robustness, Cybersecurity), which mandates input validation controls to protect against malformed or adversarial inputs.
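To see why unconstrained path strings matter, consider a minimal sketch of the validation the audit found missing. Everything here is illustrative: the sandbox root, the character whitelist and the function name are assumptions, not part of Anthropic's server-filesystem implementation.

```python
import os
import re

# Hypothetical sandbox root; an MCP filesystem server would confine all
# file operations beneath a directory like this.
ALLOWED_ROOT = "/srv/mcp-workspace"

# Character whitelist, one of the input constraints the audit says the
# 28 string parameters lack.
SAFE_PATH = re.compile(r"^[\w./-]+$")

def validate_path(user_path: str) -> str:
    """Reject traversal attempts and confine access to ALLOWED_ROOT."""
    if not SAFE_PATH.match(user_path):
        raise ValueError(f"disallowed characters in path: {user_path!r}")
    # Resolve relative to the sandbox root, then verify the result did
    # not escape it via ../ segments or an absolute path.
    resolved = os.path.realpath(os.path.join(ALLOWED_ROOT, user_path))
    if not resolved.startswith(ALLOWED_ROOT + os.sep):
        raise ValueError(f"path escapes sandbox: {user_path!r}")
    return resolved
```

With a check like this, a traversal payload such as `../../etc/passwd` passes the character whitelist but is caught after resolution, because the normalized result no longer sits under the sandbox root.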
Third, the destructive tools delete_file, write_file and move_file are completely undocumented: they have no descriptions, no warnings about irreversibility, and no confirmation mechanisms. Article 9 (Risk Management) obliges providers to identify and mitigate foreseeable risks; the absence of safeguards for file‑deletion operations represents a clear oversight. Fourth, the server does not expose any version string or other identification metadata, making it impossible for auditors or customers to verify which software revision is in use. This omission runs afoul of Article 11 (Technical Documentation), which requires clear system identification for compliance tracking.
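One conventional mitigation for irreversible operations is a two-step confirm flow: the tool call records intent and returns a token, and deletion happens only when the token is presented back. The sketch below is an assumption about what such a safeguard could look like; the function names and token scheme are invented for illustration and are not Anthropic's API.

```python
import os
import secrets

# Pending destructive operations, keyed by one-time confirmation token.
pending_deletes: dict[str, str] = {}

def request_delete(path: str) -> str:
    """Step 1: record the intent to delete and return a one-time token.

    Nothing is removed yet, so a misfired LLM tool call is harmless.
    """
    token = secrets.token_hex(8)
    pending_deletes[token] = path
    return token

def confirm_delete(token: str) -> None:
    """Step 2: only a valid token actually removes the file (irreversible)."""
    path = pending_deletes.pop(token, None)
    if path is None:
        raise PermissionError("no pending delete for this token")
    os.remove(path)
```

The same pattern would apply to write_file and move_file when they overwrite existing data; the point is that the model must make two deliberate, distinct calls before anything unrecoverable happens.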
The EU AI Act, slated to become enforceable in August 2026, applies to any AI system—including the MCP servers that Anthropic’s products connect to—when deployed in the European Economic Area. Specifically, Article 9(2)(a) demands that providers identify known and foreseeable risks; the audit’s findings demonstrate that unrestricted filesystem access without validation is a foreseeable risk. Article 13(3)(b) calls for comprehensive instructions for use, which the missing tool descriptions violate. Article 15(1) requires high‑risk AI systems to achieve robust cybersecurity, yet the 28 unconstrained parameters erode that robustness. Finally, Article 17(1)(d) mandates data‑governance practices, and the lack of parameter constraints signals a gap in Anthropic’s data‑quality management.
Anthropic’s public messaging has emphasized the transformative impact of Claude Code on software development, with VentureBeat noting the company’s push toward broader enterprise adoption through “Claude Cowork” (VentureBeat). However, the compliance shortfall uncovered by the audit casts doubt on the readiness of Anthropic’s infrastructure for the regulatory environment that will soon govern AI deployments in Europe. If the company does not remediate the filesystem server’s deficiencies—by adding tool documentation, enforcing input validation, implementing safeguards for destructive operations, and publishing version metadata—it risks facing enforcement actions, fines, or forced service restrictions under the EU AI Act. Stakeholders, from enterprise customers to regulators, will be watching closely as Anthropic either patches the identified gaps or confronts the legal ramifications of non‑compliance.
Sources
No primary source found (coverage-based)
- Dev.to AI Tag
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.