Mythos Unites Apple, Google, Microsoft in Anthropic’s Project Glasswing to Safeguard Critical Software
Apple, Google, and Microsoft have joined Anthropic’s Project Glasswing, using its unreleased Mythos model to hunt thousands of software vulnerabilities, ZDNet reports, as twelve tech rivals collaborate on a “Manhattan Project”‑style AI defense.
Key Facts
- Key model: Mythos (Anthropic)
- Companies involved: Anthropic, Apple, Google, Microsoft
Anthropic’s Mythos model, still under wraps, is being repurposed as a large‑scale vulnerability scanner. According to ZDNet, the consortium will feed the model a curated corpus of open‑source code, binary artifacts, and software bills of materials (SBOMs) so that Mythos can generate “synthetic attack vectors” and rank findings by exploitability. The approach mirrors recent research that uses transformer‑based code‑understanding models to predict security‑relevant patterns, but Anthropic is scaling it to a throughput of thousands of vulnerabilities that would overwhelm conventional static‑analysis pipelines.
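Ranking findings by exploitability could work roughly along these lines. The sketch below is purely illustrative: the `Finding` fields, the weighted‑product score, and all values are assumptions for this article, not Glasswing’s actual schema or scoring method.

```python
from dataclasses import dataclass

# Hypothetical finding record; field names are illustrative assumptions.
@dataclass
class Finding:
    cwe_hint: str            # pattern class the model matched
    reachability: float      # 0-1: can attacker-controlled input reach the sink?
    impact: float            # 0-1: severity if exploited
    model_confidence: float  # 0-1: the model's confidence in the finding

def exploitability(f: Finding) -> float:
    """Combine signals into a single ranking score (simple weighted product)."""
    return f.reachability * f.impact * f.model_confidence

def rank_findings(findings: list[Finding]) -> list[Finding]:
    """Highest-exploitability findings first, for analyst triage."""
    return sorted(findings, key=exploitability, reverse=True)

findings = [
    Finding("CWE-787 out-of-bounds write", 0.9, 0.95, 0.7),
    Finding("CWE-327 weak crypto", 0.4, 0.6, 0.9),
    Finding("CWE-78 command injection", 0.8, 0.9, 0.85),
]
for f in rank_findings(findings):
    # Prints the highest-risk finding first.
    print(f"{exploitability(f):.3f}  {f.cwe_hint}")
```

A multiplicative score like this pushes a finding to the bottom if any one signal (reachability, impact, or confidence) is near zero, which matches the triage intuition that all three must hold for a bug to be worth an analyst’s time.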
The three platform giants are contributing distinct resources to the effort. Apple is providing access to its internal code‑signing infrastructure and a set of proprietary iOS and macOS binaries, enabling Mythos to learn the idiosyncrasies of Apple’s hardened runtime environment. Google is contributing its internal “Borg” workload scheduler logs and a snapshot of the Chrome open‑source repository, which will let the model map privilege‑escalation pathways across containerized workloads. Microsoft is supplying telemetry from its Defender for Cloud suite and a curated set of Windows kernel symbols, giving Mythos visibility into low‑level privilege‑grant mechanisms on the most widely deployed desktop OS. ZDNet notes that the collaboration is structured as a “Manhattan Project‑style” joint venture, with each company retaining ownership of its data while sharing the aggregated findings through a secure, multi‑party compute enclave.
Anthropic plans to run Mythos in a federated inference mode, meaning the model’s weights remain on Anthropic’s own hardware while the code samples stay within each partner’s perimeter. This design mitigates the risk of exposing proprietary source to external parties and complies with the strict data‑handling policies that govern each firm’s internal security programs. The output, ranked vulnerability tickets, will be fed into the partners’ existing bug‑bounty pipelines, where human analysts can verify exploit feasibility before issuing patches. ZDNet emphasizes that the goal is not to replace manual code review but to augment it with a “pre‑screening” layer that can surface obscure bugs that traditional linters miss.
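The key property of such a federated split is that raw source never crosses the partner’s perimeter, only derived features do. The sketch below illustrates that boundary under an invented scheme (hashed token bigrams on the partner side, a trivial scorer standing in for model inference on the host side); nothing here reflects Mythos’s real interface.

```python
import hashlib

def extract_features(source: str, dims: int = 16) -> list[int]:
    """Partner-side: reduce code to a fixed-size feature vector locally,
    so the original text never leaves the partner's perimeter."""
    vec = [0] * dims
    tokens = source.split()
    for a, b in zip(tokens, tokens[1:]):
        h = int(hashlib.sha256(f"{a} {b}".encode()).hexdigest(), 16)
        vec[h % dims] += 1
    return vec

def model_host_score(features: list[int]) -> float:
    """Host-side stand-in for model inference: sees only the feature
    vector, never the source code it was derived from."""
    return min(1.0, sum(features) / 50)

snippet = "strcpy ( dst , src ) ; // no bounds check"
features = extract_features(snippet)
# Only integers cross the boundary; no token of the source survives.
assert "strcpy" not in str(features)
print(model_host_score(features))  # prints 0.2
```

In a real deployment the host side would be the model itself and the exchanged representation would be far richer, but the division of responsibility (local extraction, remote scoring) is the same.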
Beyond the immediate security payoff, the project is intended to generate a shared knowledge base of exploit patterns that can be retrofitted into future AI‑assisted development tools. Anthropic’s engineers aim to extract “attack signatures” from Mythos’s internal attention maps, a technique that could eventually allow compilers to flag risky constructs at compile time. The consortium also hopes to publish anonymized metrics on detection rates and false‑positive ratios, providing the broader security community with empirical data on the efficacy of large‑language‑model‑driven vulnerability discovery. While the initiative is still in its early testing phase, ZDNet reports that the partners have already identified several high‑severity CVEs in legacy libraries that had evaded detection for years.
The collaboration underscores a shift in how the industry tackles software risk: rather than competing on isolated bug‑bounty programs, the biggest players are pooling AI capabilities to stay ahead of increasingly automated threat actors. As Anthropic’s Mythos matures, its ability to synthesize novel exploit chains could become a critical defensive asset, especially for the “most critical software” that underpins global communications, finance, and infrastructure. The project’s success will hinge on balancing the model’s predictive power with rigorous validation, a challenge the partners appear prepared to meet through their combined expertise and shared commitment to secure software supply chains.
Sources
Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.