OpenAI Teams with Google Employees to Back Anthropic in DOD Case, Highlighting SAST Blind Spots
Photo by Zac Wolff (unsplash.com/@zacwolff) on Unsplash
Fourteen days after Anthropic’s Claude Code Security debut, OpenAI rolled out Codex Security, and both LLM‑driven scanners demonstrated that traditional SAST tools miss entire vulnerability classes, VentureBeat reports.
Key Facts
- Key company: OpenAI
- Also mentioned: Anthropic, Google
OpenAI’s filing of an amicus brief alongside a contingent of Google engineers in the Department of Defense’s litigation over Anthropic’s Claude Code Security underscores a rare convergence of two rival AI powerhouses on a shared technical grievance. According to the brief posted on Microsoft’s legal portal, the coalition argues that the Pentagon’s reliance on traditional static application security testing (SAST) tools violates antitrust principles because those tools are “structurally blind” to entire classes of vulnerabilities that modern LLM‑based scanners can uncover (OpenAI and Google Employees File Brief Supporting Anthropic in DOD Case). The brief is the latest public manifestation of a broader industry shift: both OpenAI and Anthropic have released free, reasoning‑driven vulnerability scanners—Codex Security on March 6 and Claude Code Security fourteen days earlier—that demonstrably outperform pattern‑matching SAST solutions.
VentureBeat’s coverage details the technical breakthrough that fuels the legal argument. Anthropic’s research, released on February 5 alongside Claude Opus 4.6, documented more than 500 high‑severity bugs discovered in mature open‑source codebases that had survived “decades of expert review and millions of hours of fuzzing” (VentureBeat). One notable example was a heap buffer overflow in the CGIF library, identified by Claude through logical reasoning about the LZW compression algorithm—a flaw that even 100% coverage‑guided fuzzing failed to expose. OpenAI’s Codex Security reproduced the same class of findings with a different LLM architecture, confirming that the blind spot lies not in any single model but in the pattern‑matching SAST paradigm itself.
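To see why this kind of flaw evades pattern matching, consider a simplified, hypothetical sketch (not the actual CGIF code) of an LZW decoder’s dictionary growth. A grep‑style SAST rule sees a bounds check before the write and moves on; spotting the bug requires reasoning about the algorithm’s invariant that LZW codes in GIF are capped at 12 bits, so the table holds at most 4096 entries:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of an LZW dictionary, loosely modeled on GIF's
 * 12-bit code limit (4096 entries). Names and structure are illustrative
 * assumptions, not the real CGIF implementation. */
#define MAX_CODES 4096

typedef struct {
    uint16_t prefix[MAX_CODES];
    uint8_t  suffix[MAX_CODES];
    int      next_code;
} lzw_table;

/* Vulnerable variant: the bounds check is tied to the current code width,
 * not the table's real capacity. A pattern-matching rule sees "index is
 * checked before the write" and passes it -- but if a crafted stream drives
 * code_width above 12, next_code can exceed MAX_CODES and the writes below
 * become a heap/stack buffer overflow. Catching this requires reasoning
 * about the LZW width-growth invariant, not scanning for a missing check. */
static int add_entry_vuln(lzw_table *t, int code_width,
                          uint16_t prefix, uint8_t suffix) {
    if (t->next_code >= (1 << code_width))  /* looks safe to a grep rule */
        return -1;
    t->prefix[t->next_code] = prefix;       /* out of bounds when code_width > 12 */
    t->suffix[t->next_code] = suffix;
    return t->next_code++;
}

/* Fixed variant: also clamp against the actual allocation size. */
static int add_entry_fixed(lzw_table *t, int code_width,
                           uint16_t prefix, uint8_t suffix) {
    if (t->next_code >= (1 << code_width) || t->next_code >= MAX_CODES)
        return -1;
    t->prefix[t->next_code] = prefix;
    t->suffix[t->next_code] = suffix;
    return t->next_code++;
}
```

The point of the sketch is that both variants contain a bounds check, so a signature‑based scanner rates them identically; only reasoning about what values `code_width` can legally take distinguishes the safe version from the exploitable one.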
The legal filing also highlights the competitive pressure that the two labs, whose combined private‑market valuation exceeds $1.1 trillion, are exerting on the broader security ecosystem. VentureBeat notes that the simultaneous release of free scanners forces enterprise buyers to rethink procurement math, as “neither Claude Code Security nor Codex Security replaces your existing stack” but they do “change procurement math permanently” (VentureBeat). By jointly challenging the Pentagon’s procurement standards, OpenAI and Google are signaling that the market will rapidly adopt LLM‑based reasoning tools, accelerating improvements that no single vendor could achieve alone.
Beyond the courtroom, the brief has strategic implications for the U.S. defense establishment’s AI policy. The Verge has previously reported on OpenAI’s willingness to accommodate Pentagon demands on surveillance tools, suggesting a pattern of cooperation when regulatory or contractual pressure mounts (The Verge). In this instance, however, the collaboration is not about compliance but about contesting a procurement framework that the signatories deem anticompetitive. Wired’s coverage of Google’s withdrawal from a controversial Pentagon AI project adds context: internal dissent within major tech firms over military contracts is growing, and the amicus brief may represent a calibrated effort to influence policy without direct involvement in the contested program (Wired).
Analysts will watch how the Department of Defense responds to the brief, particularly whether it will adjust its security‑tool requirements to accommodate LLM‑based scanners. If the DOD adopts reasoning‑driven testing, the ripple effect could reshape the entire enterprise security market, compelling legacy SAST vendors to integrate AI reasoning or risk obsolescence. For now, the brief serves as a concrete illustration of how two rival AI labs can align on a technical front, leveraging their combined market clout to challenge entrenched procurement practices and accelerate the transition toward next‑generation application security.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.