Anthropic Deploys Prompt That Forces ChatGPT to Disclose All Data It Holds on Users
Photo by A. C. (unsplash.com/@3tnik) on Unsplash
According to The‑Decoder, Anthropic has introduced a prompt that compels ChatGPT to disclose every piece of user data it retains, positioning Claude as a privacy‑focused alternative amid growing backlash over OpenAI’s military contract.
Key Facts
- Key company: Anthropic
Anthropic’s new “memory export” prompt is already being used as a lever in the growing feud over OpenAI’s defense‑industry deal. According to The‑Decoder, the prompt forces ChatGPT to dump every piece of user‑specific context it has stored—preferences, personal details, instruction tweaks, and even the exact phrasing of past interactions—into a single code block that can be copied and pasted into Claude’s memory settings. The export function is currently limited to paying Claude customers, but the company has rolled it out quickly, positioning the feature as a “one‑click migration” tool for users who want to abandon ChatGPT in favor of a privacy‑first alternative (The‑Decoder).
The mechanics are deliberately blunt. The prompt tells ChatGPT, “List every memory you have stored about me… Output everything in a single code block… Do not summarize, group, or omit any entries.” Anthropic’s documentation specifies the required format—date, memory content, and a post‑code‑block confirmation of completeness—so that the exported data can be ingested by Claude without manual re‑formatting (The‑Decoder). In practice, the transfer is imperfect: custom GPTs, “Gems,” and user‑defined instructions are not captured, and the underlying memory architectures of the two platforms differ enough that gaps and errors are expected (The‑Decoder). Nevertheless, the move underscores Anthropic’s strategy of leveraging OpenAI’s reputational risk to attract users who value data sovereignty.
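To illustrate how a structured export like this could be re-ingested, here is a minimal parser sketch. The entry layout assumed here (an ISO date followed by the memory text, one entry per line) is an illustration based on the format described above, not Anthropic's published schema, and the function name is hypothetical.

```python
import re

# Hypothetical sketch: parse a memory export of the form
# "YYYY-MM-DD: memory text", one entry per line. The exact layout
# Anthropic specifies in its documentation may differ.
ENTRY = re.compile(r"^(\d{4}-\d{2}-\d{2}):\s*(.+)$")

def parse_memory_export(dump: str) -> list[dict]:
    """Return structured entries from the exported code block,
    skipping any lines that do not match the expected format."""
    entries = []
    for line in dump.splitlines():
        m = ENTRY.match(line.strip())
        if m:
            entries.append({"date": m.group(1), "memory": m.group(2)})
    return entries

sample = """2024-03-12: Prefers answers in bullet points.
2024-06-01: Works as a data engineer in Berlin."""

print(parse_memory_export(sample))
```

A parser this strict also makes the article's caveat concrete: any entry that ChatGPT summarizes, groups, or reformats away from the expected pattern is silently dropped rather than migrated.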
The timing aligns with heightened scrutiny of OpenAI’s recent $2 billion contract with the U.S. Department of Defense, a deal that Anthropic publicly declined on ethical grounds (The‑Decoder). Industry observers have noted that the contract has sparked a wave of user backlash, with many questioning whether their conversational histories are being stored on servers that could be accessed for military applications (The‑Decoder). Anthropic’s export prompt therefore serves a dual purpose: it offers a concrete tool for data extraction while framing Claude as a safer haven for privacy‑concerned customers.
Anthropic’s broader product roadmap hints at a deeper commitment to user control. The company’s latest Claude Opus 4.6 release, highlighted by VentureBeat, expands context windows to one million tokens and introduces “agent teams” that can coordinate tasks across longer stretches of conversation (VentureBeat). While the memory export feature is not directly tied to these technical upgrades, the emphasis on larger context windows suggests Anthropic is building a platform where users can retain richer, self‑hosted histories without relying on OpenAI’s proprietary memory stores.
Critics caution that the export prompt may give a false sense of security. Memory systems across major AI providers are notoriously opaque, and the prompt’s requirement to “preserve my words verbatim where possible” can be undermined by internal tokenization, summarization, or pruning algorithms that are not publicly disclosed (The‑Decoder). For users whose accounts span multiple topics—from personal journaling to professional code assistance—Anthropic recommends disabling memory altogether rather than attempting a piecemeal migration (The‑Decoder). The trade‑off remains: convenience and personalized responses versus the risk of leaving a detailed personal dossier on a competitor’s servers.
In the short term, the prompt is likely to fuel a wave of data‑export requests that could expose the scale of information OpenAI retains on individual users. If the exported dumps reveal extensive profiling, regulators and privacy advocates may intensify calls for clearer data‑handling policies. Anthropic’s gamble is that the visibility of such exports will drive users toward Claude, reinforcing the company’s narrative of ethical AI stewardship while simultaneously challenging OpenAI to confront the privacy implications of its growing enterprise contracts.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.