Anthropic's Own MCP Server Exposes Three New Vulnerabilities, Researchers Find
Photo by Kevin Ku on Unsplash
While developers expected Anthropic’s Model Context Protocol server to be a secure foundation, a recent report reveals it harbors a path‑traversal flaw, an argument‑injection flaw, and a repository‑restriction bypass, three defects that can be chained into remote code execution triggered by prompt injection alone.
Quick Summary
- While developers expected Anthropic’s Model Context Protocol server to be a secure foundation, a recent report reveals it harbors a path‑traversal flaw, an argument‑injection flaw, and a repository‑restriction bypass that can be chained into remote code execution via prompt injection.
- Key company: Anthropic
Anthropic’s reference implementation of the Model Context Protocol (MCP) server—intended as the canonical Git‑based backend for developers—has been shown to contain three high‑severity flaws that can be chained into a remote‑code‑execution (RCE) attack without any direct system access, according to a report by Cyata researcher Yarden Porat posted on the security‑focused blog kai_security_ai on February 24. The vulnerabilities, catalogued as CVE‑2025‑68143 (CVSS 8.8), CVE‑2025‑68144 (CVSS 8.1) and CVE‑2025‑68145 (CVSS 7.1), respectively involve an unchecked path‑traversal in the git_init command, unsanitized argument injection in git_diff and git_checkout, and a bypass of the --repository flag intended to restrict operations to a specific directory. Each defect is serious on its own, but Porat demonstrated that they form a kill chain when triggered via prompt injection—e.g., a malicious README or poisoned issue description that an AI assistant processes—allowing the assistant to execute a payload on the host system without ever touching a terminal.
The attack sequence begins with git_init, which, per the report, accepts arbitrary filesystem paths and can turn any writable directory into a Git repository (CVE‑2025‑68143). An attacker can then plant a malicious .gitattributes file that defines a clean‑filter invoking a shell script. Because the git_add operation later runs the clean‑filter, the payload is executed automatically. The chain is completed by exploiting the argument‑injection flaw in git_diff and git_checkout (CVE‑2025‑68144), where user‑controlled strings are passed directly to the Git CLI without any allow‑list or escaping, mirroring the “exec()” class of CVEs that have proliferated across the MCP ecosystem. Finally, the --repository flag, which should constrain Git commands to a known path, fails to enforce this restriction (CVE‑2025‑68145), allowing the malicious repository to be accessed from anywhere on the host. Porat’s write‑up emphasizes that the entire exploit can be launched solely by influencing the text an AI model reads, sidestepping traditional perimeter defenses.
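To make the clean‑filter step concrete, the two attacker‑planted pieces look roughly like this. This is an illustrative sketch: the filter name and payload path are hypothetical placeholders, but the mechanism, Git running the `filter.<name>.clean` command whenever a matching file is staged, is standard documented Git behavior.

```python
# Sketch of the files an attacker could plant once git_init has turned a
# writable directory into a repository (CVE-2025-68143). The filter name
# "payload" and the script path are hypothetical placeholders.

# .gitattributes: route every file through the "payload" clean filter.
gitattributes = "* filter=payload\n"

# .git/config fragment: Git runs this command each time a matching file
# is staged (e.g. by the server's git_add tool), executing the script.
git_config = '[filter "payload"]\n\tclean = sh /tmp/payload.sh %f\n'
```

Because staging happens as a routine part of the server's own workflow, the payload runs without the attacker ever invoking it directly.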
Cyata’s CEO Shahar Tal warned that the reference implementation’s failures are a “signal that the entire MCP ecosystem needs deeper scrutiny,” noting that the same insecure patterns have appeared in dozens of third‑party MCP tools. The Register has reported that Anthropic quietly patched the flaws in a December 2025 release, removing the git_init tool entirely and adding validation checks for the affected commands. However, the broader context remains troubling: VentureBeat’s earlier coverage of MCP highlighted that many deployments ship without any authentication layer, and Cyata’s own scan of 560 publicly accessible MCP servers found that 210 (38%) lack authentication altogether. This insecure baseline, combined with the recurring “exec()” injection bug family that now accounts for nine of the 22 MCP‑related CVEs documented by Cyata, suggests systemic gaps in secure coding practices across the protocol’s tooling.
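The “exec()” injection class has a well-understood defense: never hand user-controlled strings to a CLI where they could be parsed as options. A minimal sketch of that pattern for a Git-wrapping tool follows; the function names are illustrative, not MCP’s actual API.

```python
import subprocess


def validate_ref(ref: str) -> bool:
    """Reject empty or option-like values that Git would parse as flags
    (e.g. "--output=/tmp/x"), the injection class behind CVE-2025-68144."""
    return bool(ref) and not ref.startswith("-")


def safe_git_diff(repo_path: str, ref: str) -> str:
    """Diff the working tree against a user-supplied ref without letting
    the ref smuggle in Git options. Illustrative sketch only.
    """
    if not validate_ref(ref):
        raise ValueError(f"refusing option-like ref: {ref!r}")
    # List form avoids the shell entirely; the trailing "--" tells Git
    # that everything after it is a pathspec, never an option.
    proc = subprocess.run(
        ["git", "-C", repo_path, "diff", ref, "--"],
        capture_output=True, text=True, check=True,
    )
    return proc.stdout
```

Passing arguments as a list (no shell) plus an explicit `--` separator closes both the shell-metacharacter and the option-injection variants of this bug family.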
The implications for enterprises adopting Anthropic’s MCP stack are immediate. Organizations that have integrated the reference Git server into their AI‑augmented workflows may be exposed to silent compromise if an attacker can inject malicious content into prompts—something that can happen through seemingly benign channels such as documentation, issue trackers, or web‑scraped data. Because the exploit does not require direct shell access, traditional intrusion‑detection systems may miss the activity until the payload executes. Security teams are therefore urged to verify that they are running the patched version (2025.12.18 or later), enforce strict filesystem permissions on any directories used by MCP services, and consider deploying additional runtime hardening such as mandatory access controls or sandboxing for Git‑related subprocesses.
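One such hardening check is straightforward to sketch: resolve every requested path and refuse anything that escapes an approved root, which is the guarantee the report says the --repository flag failed to provide. The function below is an illustrative sketch, not Anthropic’s patched code.

```python
import os


def confine_path(requested: str, allowed_root: str) -> str:
    """Resolve `requested` (following symlinks and '..' components) and
    return the absolute path only if it stays inside `allowed_root`;
    raise PermissionError otherwise."""
    root = os.path.realpath(allowed_root)
    resolved = os.path.realpath(os.path.join(root, requested))
    # commonpath guards against prefix tricks such as /srv/repo-evil
    # matching a naive startswith("/srv/repo") check.
    if os.path.commonpath([root, resolved]) != root:
        raise PermissionError(f"path escapes {root}: {requested!r}")
    return resolved
```

Applying a check like this before any Git subprocess is spawned would have blocked both the git_init traversal and the repository-flag bypass described above.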
Anthropic’s response—quietly releasing a fix without a public advisory—has drawn criticism from the security community, which argues that transparent disclosure is essential for downstream users to assess risk. The Register’s coverage of the patch underscores the lack of a coordinated vulnerability‑management process around MCP, contrasting with the more open handling of other high‑profile AI‑related bugs. As the Model Context Protocol matures and sees broader adoption across AI‑driven products, the industry will likely demand clearer security guidelines and more rigorous third‑party audits to prevent repeat incidents of this nature.
Sources
No primary source found (coverage-based)
- Dev.to AI Tag
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.