404 Media Says Anthropic's AI Job‑Loss Study Overlooks the Technology's Role in Crippling the Internet
Photo by Team Kiesel (unsplash.com/@team_kiesel) on Unsplash
While recent studies tout tidy, task‑by‑task AI job‑displacement forecasts, 404 Media argues that Anthropic's numbers ignore AI's biggest disruption: its role in flooding the internet with porn and "slop."
Key Facts
- Key company: Anthropic
Anthropic’s own “Labor market impacts of AI” paper, released earlier this month, attempts to pair Claude’s capabilities with existing job tasks, producing a viral chart that has been dissected by tech journalist Christopher Mims and featured in Philip Bump’s newsletter, according to 404 Media. Mims argues that the “theoretical capability” metric is largely speculative, noting that the study’s inputs are guesswork. Anthropic counters that it has introduced an “observed exposure” measure, which blends theoretical large‑language‑model (LLM) ability with real‑world usage data, weighting automated, work‑related uses more heavily. The company says this metric draws on its “Anthropic Economic Index,” an effort launched in January to catalog high‑value, work‑centric AI applications such as drafting professional correspondence, debugging code, and completing academic assignments.
However, 404 Media points out that the index omits two of the most prevalent ways people actually employ generative AI: creating AI‑powered porn and producing “AI slop,” the spammy, low‑quality content that floods social platforms. The outlet argues that these uses are “destroying discoverability on the internet” and inflicting “cascading societal and economic harms” on creators, adult performers, journalists, musicians, small‑business owners and others. Emanuel Maiberg, the author of 404 Media’s first generative‑AI market analysis, notes that many of the most trafficked AI sites explicitly market AI‑generated porn and non‑consensual deepfake content, yet Anthropic’s research sidesteps these categories entirely.
The omission is not merely an academic oversight. According to 404 Media, the proliferation of AI‑generated porn and spam is eroding the value of legitimate online content, driving down ad revenue and raising moderation costs for platforms that must police non‑consensual imagery and low‑quality spam. The outlet suggests that by ignoring these high‑volume, high‑impact applications, Anthropic’s displacement‑risk model underestimates the true economic damage AI can cause. Mims reinforces this view, stating that the chart’s “theoretical capability” numbers are “totally made up” because they fail to account for the ways AI is actually being used to undermine existing digital ecosystems.
Anthropic’s broader strategy, as outlined in its recent $3.5 billion fundraising round reported by TechCrunch, is to position Claude as a “good‑use” AI that fuels enterprise productivity and academic research. The company’s marketing emphasizes applications that showcase responsible AI deployment, while critics like 404 Media argue that this narrative glosses over the darker side of generative models. The tension mirrors a pattern observed across the industry, where firms highlight beneficial use‑cases to appease investors and regulators, yet the same models power illicit porn generation and mass spam, as highlighted in Wired’s commentary on the consolidation of AI power among a few dominant players.
The debate over AI’s labor impact therefore hinges on what counts as “work‑related” usage. If the metric excludes high‑volume, revenue‑draining activities such as AI‑porn creation and spam generation, the resulting displacement risk appears modest. Conversely, incorporating those activities would likely push the observed exposure scores higher, suggesting a more severe threat to both digital economies and the broader labor market. Anthropic’s own paper acknowledges the difficulty of measuring real‑world usage, but 404 Media contends that the company’s reluctance to grapple with “the world‑destroying applications that people actually use it for” undermines the credibility of its findings.
Ultimately, the conversation reflects a broader industry challenge: quantifying AI’s impact when the most visible, profitable, and harmful applications sit in a gray zone between legitimate productivity tools and illicit content farms. As Anthropic continues to expand its Claude platform under the financial backing highlighted by TechCrunch, the pressure will mount for more transparent methodologies that capture the full spectrum of AI usage—both the “good” and the “slop”—to inform policymakers, investors, and the public about the true scale of AI‑driven disruption.
Sources
Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.