Anthropic’s AI uncovers 12 hidden OpenSSL bugs as CEO rebuffs Pentagon demand
Anthropic’s Claude Code Security AI independently discovered twelve zero‑day OpenSSL bugs—some dating back to 1998—and bluntly warned the Pentagon to stop using the vulnerable library, reports indicate.
Quick Summary
- Claude Code Security independently discovered twelve zero‑day OpenSSL bugs, some dating back to 1998, and CEO Dario Amodei rebuffed a Pentagon demand for unrestricted AI access.
- Key company: Anthropic
- Also mentioned: OpenSSL
Claude Code Security flagged twelve zero‑day flaws in OpenSSL ahead of the library’s January 2026 patch, according to an AI‑security report posted on Feb 28 by zecheng. The most severe, CVE‑2025‑15467, is a stack buffer overflow in CMS message parsing that NIST rates CVSS 9.8 and that may be exploitable without a valid key. Three of the bugs date to 1998‑2000 and survived years of fuzzing by firms including Google, the report notes.
Anthropic’s AI also generated patches for five of the vulnerabilities, which were incorporated directly into the official OpenSSL release, zecheng adds. The company announced that Claude Code Security has already uncovered more than 500 flaws across open‑source codebases, a claim echoed by VentureBeat’s coverage of the tool’s launch.
In a separate development, Anthropic CEO Dario Amodei rebuffed a Pentagon demand for unrestricted AI access, according to Defragzone. The defense secretary had given Amodei until 5:01 p.m. Friday to hand over “unlimited” Claude capabilities, but Amodei held to the company’s red lines of no AI‑controlled autonomous weapons and no mass domestic surveillance, calling the demand incompatible with Anthropic’s ethical stance.
The clash underscores a growing tension between government demand for powerful AI tools and the industry’s self‑imposed safeguards. Bruce Schneier, cited by zecheng, warned that AI‑driven vulnerability discovery is reshaping cybersecurity faster than expected, highlighting the dual‑use risk of the same technology that can both expose and protect code.
Developers are now urged to treat AI‑assisted security reviews as essential, not optional, zecheng writes. With Claude already identifying flaws that eluded decades of human testing, the message is clear: AI can find what humans miss, and ignoring it may leave critical software exposed.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.