OpenAI Reveals Chinese Agent Leveraging ChatGPT to Run Smear Operations Worldwide
OpenAI says a user tied to Chinese law enforcement tried to harness ChatGPT to orchestrate smear campaigns against Japan’s prime minister and other critics of the CCP, a case detailed by The Register.
Quick Summary
- OpenAI says a user tied to Chinese law enforcement tried to harness ChatGPT to orchestrate smear campaigns against Japan’s prime minister and other critics of the CCP, a case detailed by The Register.
- Key company: OpenAI
OpenAI’s internal threat‑intelligence team flagged a user with documented ties to Chinese law‑enforcement agencies who, in mid‑October 2025, attempted to weaponize ChatGPT for a coordinated smear campaign against Japan’s newly elected prime minister, Sanae Takaichi. According to OpenAI’s public “malicious‑use” report, the user prompted the model to draft and amplify negative commentary on social platforms, fabricate foreign‑resident email accounts to lodge complaints with Japanese lawmakers, and construct a multi‑pronged narrative linking Takaichi to alleged “immigration problems,” “poor living conditions,” “far‑right affiliations,” and “unfair tariffs” (The Register). When ChatGPT refused to comply, the operator switched to rival AI services to execute the plan, but later returned to the OpenAI platform to request “status‑report” edits for what the team labeled “cyber special operations” (The Register).
The subsequent status‑report requests revealed a structured operational playbook. The user submitted to ChatGPT a draft that mirrored the original smear outline, specifying five thematic attack vectors and even naming a hashtag—#右翼 共生者 (“right‑wing symbiont”)—intended to rally Japanese influencers. OpenAI’s monitoring tools detected the hashtag’s appearance in low‑volume posts across X, the Japanese art community Pixiv, and Blogspot beginning in late October 2025, but the reach was minimal: YouTube videos garnered only single‑digit views, Pixiv posts typically recorded zero engagements, and the most‑viewed meme attracted just 108 views (The Register).
Beyond digital propaganda, the user’s diary of “cyber special operations” documented a broader harassment strategy aimed at silencing dissidents both inside and outside China. Tactics included psychological pressure on families, livestream hijacking, and mass reporting of activists’ accounts for fabricated policy violations. One entry described the creation of a fake obituary and gravestone images for dissident Jie Lijian, which were then mass‑posted to suggest his death. Another operation targeted activist Hui Bo (@huikezhen), filing thousands of false reports to X and generating dozens of counterfeit accounts bearing his likeness to overwhelm platform moderation (The Register).
OpenAI’s response was swift: the offending account was permanently banned, and the incident was added to a growing list of malicious uses linked to Chinese actors. Reuters has reported that OpenAI continues to uncover additional Chinese groups leveraging its models for illicit purposes, underscoring a pattern of state‑aligned actors exploiting generative AI for influence operations (Reuters). The company’s Intelligence and Investigations team, led by principal investigator Ben Nimmo, emphasized that while the specific Takaichi campaign failed to gain traction, the documented methodology—combining AI‑generated propaganda, coordinated hashtag deployment, and off‑platform harassment—mirrors tactics used in broader transnational repression campaigns (The Register).
The episode raises pressing questions about AI governance and the limits of content‑moderation safeguards. OpenAI’s policy now mandates real‑time monitoring of high‑risk prompts and automatic escalation to its threat‑intel unit when users attempt to solicit disinformation or harassment assistance. However, the user’s pivot to alternative models after an initial refusal illustrates a “model‑hopping” risk that current detection frameworks struggle to contain. Analysts at The Verge have noted that OpenAI’s findings also intersect with concerns about its technology inadvertently training rival Chinese AI firms such as DeepSeek, suggesting a feedback loop where defensive data may be repurposed for offensive capabilities (The Verge).
In the short term, the limited diffusion of the #右翼 共生者 hashtag indicates that OpenAI’s defensive measures, combined with platform moderation on X and Pixiv, can blunt low‑scale influence attempts. Yet the documented playbook—spanning fabricated narratives, coordinated hashtag campaigns, and targeted harassment—provides a template that could be refined and amplified by more resource‑rich actors. As OpenAI and other AI developers grapple with the dual imperative of open innovation and abuse prevention, the Takaichi case serves as a cautionary benchmark for the emerging frontier of AI‑enabled state‑aligned information warfare.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.