
Gemini AI fuels debate over viral photo of Iran’s bombed schoolgirl graveyard

Published by
SectorHQ Editorial
Photo by Maxim Hopman on Unsplash

While the haunting photo of Minab’s bombed schoolgirl graveyard was hailed worldwide as stark proof of civilian deaths, Google’s Gemini AI insists it is not real, The Guardian reports.

Key Facts

  • Key company: Gemini

Gemini’s misidentification of the Minab burial site underscores a growing blind spot in AI‑driven fact‑checking. When users queried Google’s Gemini about the viral aerial image of freshly dug graves in Iran’s coastal town of Minab, the system responded that the photo was “from a mass burial site in Kahramanmaraş, Turkey” after the 7.8‑magnitude earthquake of February 2023. A second AI assistant, X’s Grok, offered a different provenance, claiming the picture originated from “Rorotan Cemetery in Jakarta, Indonesia, a July 2021 stock photo of Covid‑era mass burials.” Both answers were presented with confidence and even supplied “source” links, yet the references led either to dead ends or to non‑existent articles, according to The Guardian. The AI outputs were therefore not only inaccurate but also untraceable, exposing a flaw in how large language models retrieve and cite evidence.

Independent verification quickly disproved the AI claims. Researchers cross‑referenced the Minab image with high‑resolution satellite data, confirming that the coordinates match the Iranian cemetery rather than any site in Turkey, Indonesia, or elsewhere. Dozens of additional photographs taken from slightly different angles, as well as video footage of the same burial operation, corroborate the visual details, such as the distinctive layout of twenty‑row graves and the presence of diggers poised to continue work. No signs of digital manipulation were detected in any of the media, and the timing aligns with reports of Iranian missile strikes that killed over 100 schoolgirls in the town, a figure that has become a focal point of international condemnation of the US‑Israeli campaign against Iran.

The Gemini and Grok errors are part of a broader wave of AI‑generated misinformation that has already complicated coverage of the Iran‑Israel conflict. Fact‑checkers have been inundated with fabricated visuals, including a purported satellite image of a US radar installation destroyed in Qatar that turned out to be a composite of older Google Earth screenshots. Another widely shared picture, allegedly showing the body of Iran’s Supreme Leader Ayatollah Khamenei being pulled from rubble, contained duplicated limbs, a classic sign of AI‑generated tampering. Shayan Sardarizadeh, a senior journalist with the BBC Verify team, highlighted a particularly egregious fake depicting a senior Iranian commander disguised as a woman on Tehran’s streets; forensic analysis revealed inconsistencies in the background architecture and street layout that betrayed its synthetic origin.

These incidents illustrate how reliance on AI for rapid verification can backfire, wasting investigative resources and potentially enabling the denial of atrocities. As The Guardian notes, the “tidal wave of AI‑generated slop” forces journalists to spend additional time debunking false claims rather than focusing on original reporting. Moreover, the confidence with which models like Gemini and Grok present erroneous information may lead non‑technical audiences to accept false narratives, amplifying the risk of misinformation spreading unchecked across social platforms. The episode serves as a cautionary reminder that, while large language models excel at pattern recognition, they remain prone to hallucination when tasked with sourcing real‑world evidence.

Sources

Primary source

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.
