US drafts strict AI rules as Anthropic clash fuels regulatory overhaul
According to reporting by the Financial Times, the United States is drafting stringent AI regulations after a high‑profile dispute with Anthropic, prompting a sweeping overhaul of the nation’s AI oversight framework.
Key Facts
- Key company: Anthropic
The draft rules, unveiled by the White House Office of Science and Technology Policy, would impose a “risk‑based” licensing regime on advanced foundation models that exceed a certain parameter threshold, according to the Financial Times. The framework calls for mandatory impact assessments, third‑party audits, and a “pre‑deployment safety review” for any system that could generate disinformation, manipulate public opinion, or be used in critical infrastructure. Developers would also be required to embed “traceability logs” that record model architecture, training data provenance, and any fine‑tuning steps, enabling regulators to reconstruct the model’s decision pipeline if needed. The policy memo, obtained by the FT, notes that non‑compliance could trigger civil penalties of up to 5 percent of annual revenue, mirroring the EU’s AI Act enforcement model.
The catalyst for the accelerated timeline was a public dispute between Anthropic and the U.S. administration over the company’s request for an exemption from the nascent “high‑risk” classification. Anthropic, which recently secured a $4 billion investment from Amazon, argued that its Claude‑3 model, while powerful, incorporated “robust alignment safeguards” that rendered the generic risk thresholds overly broad, the FT reports. The administration, however, contended that the exemption could set a precedent that undermines the uniformity of the oversight regime, prompting a “clash” that forced policymakers to tighten language around exemptions and to require explicit, case‑by‑case justification from any firm seeking relief.
The Verge adds that the draft also earmarks a new inter‑agency AI Safety Board, composed of representatives from the Department of Commerce, the National Institute of Standards and Technology, and the Federal Trade Commission, to coordinate enforcement and to publish quarterly “model risk scores.” Those scores would be derived from a standardized set of metrics—including robustness to adversarial attacks, propensity for biased outputs, and energy consumption—allowing the government to flag models that cross predefined risk thresholds. The board would have the authority to issue “stop‑gap” orders, temporarily halting the deployment of a model while a deeper technical review is conducted, a mechanism reminiscent of the FDA’s emergency use authorizations for medical devices.
Industry reaction, as captured by The Verge, has been mixed. Large AI labs such as OpenAI and Google have signaled willingness to cooperate, noting that the “traceability” and “audit” provisions align with internal governance practices they already employ. Conversely, smaller startups fear that the compliance burden—particularly the requirement for third‑party audits and the maintenance of extensive provenance logs—could divert scarce engineering resources and stifle innovation. The Verge points out that the draft includes a “sandbox” provision, permitting limited‑scale testing of high‑risk models under controlled conditions, but the criteria for sandbox eligibility remain vague, leaving many firms uncertain about how to qualify.
If enacted, the regulations would represent the most comprehensive federal AI oversight to date, superseding the patchwork of sector‑specific guidelines that have governed AI use in finance, healthcare, and defense. According to the Financial Times, the administration plans to release a final rulebook by the end of the calendar year, after a public comment period that is expected to generate “substantial feedback” from both industry and civil‑society groups. The outcome of this process will determine whether the U.S. adopts a prescriptive, enforcement‑heavy model akin to the EU, or a more flexible, innovation‑friendly approach that relies on voluntary compliance and market incentives.
Sources
- Financial Times
- The Verge
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.