Artificial Intelligence Threatens Elections and Relationships, New Report Shows
While many expected AI’s rapid uptake to benefit society, there is a darker flip side: Stanford researchers report that 53 % of people now use AI, yet harmful incidents have surged, and experts warn the technology will damage elections and personal relationships, according to The Register.
Key Facts
- Companies mentioned: Google, Anthropic, ByteDance
The Stanford AI Index Report, released this week, paints a stark picture of a technology that has leapt from niche labs to everyday life in a fraction of the time it took personal computers to become household staples. In just three years, AI tools are now used by 53 % of the global population, and 88 % of enterprises have deployed some form of machine‑learning system, according to the Institute for Human‑Centered Artificial Intelligence (HAI) at Stanford. That rapid diffusion, however, has outpaced the development of safety standards: the report notes that “responsible AI is not keeping pace with AI capability, with safety benchmarks lagging and incidents rising sharply” (Stanford AI Index, 2026).
The numbers back up the warning signs. The AI Incident Database logged 362 documented harms or near‑harms in 2025, a 55 % jump from the 233 cases recorded the year before. Those incidents range from bogus legal citations generated by attorneys—caught by the U.S. Sixth Circuit Court of Appeals for fabricating more than two dozen references—to AI‑driven disinformation campaigns that could sway voter sentiment. The report flags elections and personal relationships as the two domains where both experts and the American public converge on a pessimistic outlook. “AI experts and the US public disagree on nearly everything about AI’s future, except that it will hurt elections and personal relationships,” the Stanford team writes.
The election threat is not abstract. Researchers point to a surge in AI‑generated deepfakes and synthetic text that can be weaponized at scale. While the report does not quantify the exact number of election‑related incidents, the upward trajectory of overall harms suggests a growing capacity for manipulation. Coupled with the fact that 64 % of Americans already expect AI to shrink the job market over the next two decades, the political fallout could be compounded by economic anxiety—a perfect storm for misinformation to take root.
Personal relationships are also under siege. The same data showing that 80 % of university students use AI tools daily hints at a cultural shift: conversations, dating apps, and even therapy sessions are increasingly mediated by algorithms. The HAI report warns that “AI lags behind people when it comes to telling time,” noting GPT‑5.4 High’s 50.6 % success rate on the ClockBench benchmark versus roughly 90 % for humans. While the anecdote about clocks may seem trivial, it underscores a broader point: hallucination rates ranging from 22 % to 94 % across 26 models on the AA‑Omniscient Index can erode trust in any context where factual accuracy matters, from a partner’s birthday reminder to a political debate.
The 423‑page report, co‑authored by human researchers with assistance from ChatGPT and Claude and funded by industry heavyweights including Google and OpenAI, offers no silver bullet. It documents the widening gap between capability and governance. As the HAI team concludes, “the scarcity of responsible AI” is now a systemic risk that will shape the next wave of policy, corporate practice, and everyday interaction. If the trend continues, the very tools that promise efficiency and insight may become vectors of societal discord, turning the promise of AI into a double‑edged sword for both democracy and intimacy.
Sources
Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.