OpenAI bows to Pentagon, allowing AI-powered surveillance under new agreement
Photo by Zac Wolff (unsplash.com/@zacwolff) on Unsplash
Just weeks after touting a “responsible AI” stance, OpenAI has signed a deal letting the Pentagon tap its models for surveillance, The Verge reports.
Key Facts
- Key company: OpenAI
OpenAI’s new contract with the Pentagon, disclosed in a brief statement Friday evening, hinges on a single, legally vague clause: the government may employ OpenAI models for “any lawful use.” According to a source familiar with the negotiations, that phrasing is the only substantive concession OpenAI made, allowing the Defense Department to tap the company’s generative‑AI tools for bulk data collection and analysis without breaching the firm’s publicly‑stated safety principles (The Verge). The source added that the wording mirrors decades‑old interpretations of U.S. law that have historically permitted expansive surveillance programs, effectively giving the military a backdoor to apply AI to mass‑monitoring tasks that Anthropic had refused to support.
The deal emerged amid a high‑profile standoff between the Department of Defense and Anthropic, which was recently blacklisted after refusing to allow its models to be used for “mass surveillance of Americans” and “lethal autonomous weapons” (The Verge). OpenAI’s CEO Sam Altman attempted to frame the agreement as a win for both sides, tweeting that the Department of War “agrees with our two most important safety principles” and that the contract “reflects them in law and policy” (The Verge). Critics, however, quickly pointed out the inconsistency: the same “law and policy” that the Pentagon has historically stretched to justify sweeping surveillance now appears to be the basis for OpenAI’s concession. Miles Brundage, OpenAI’s former head of policy research, warned on X that the language suggests the company “caved + framed it as not caving, and screwed Anthropic while framing it as helping them” (The Verge).
OpenAI’s official spokesperson, Kate Waters, pushed back against the alarmist narrative, insisting that the agreement does not grant the military the authority to collect or analyze Americans’ data “in a bulk, open‑ended, or generalized way” (The Verge). She emphasized that the contract limits the use of AI to specific, mission‑focused applications and that any deployment would still be subject to existing legal safeguards. Nevertheless, the underlying clause—“any lawful use”—means that if a future statute or executive order expands the definition of lawful surveillance, the Pentagon could immediately leverage OpenAI’s models without renegotiating terms.
Industry observers note that the deal underscores a broader shift in how AI firms are navigating government contracts. TechCrunch reported that OpenAI’s announcement was accompanied by a more detailed follow‑up, in which the company outlined potential use cases such as “geolocation data layering” and “pattern‑recognition across large datasets” to aid military intelligence (TechCrunch). These capabilities, while technically permissible under current law, raise concerns about the erosion of the “human responsibility for the use of force” principle that Altman highlighted in his initial statement (The Verge). If the Pentagon can employ OpenAI’s models to sift through massive streams of civilian data, the practical effect may be a de‑facto expansion of domestic surveillance under the guise of national security.
The controversy also spotlights the competitive dynamics among AI vendors vying for defense contracts. Anthropic’s refusal to compromise on its red lines has positioned it as a principled outlier, but the company now faces the prospect of being sidelined from lucrative government work. OpenAI’s willingness to accept the “any lawful use” language may secure short‑term revenue and cement its role as the primary AI supplier to the DoD, yet it risks alienating a growing segment of the AI community wary of unchecked military applications. As Altman’s tweet suggested, the company believes it can “keep those same limits” while still enabling the Pentagon’s objectives; multiple sources counter that the legal wording effectively sidesteps the very limits Anthropic fought to preserve.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.