Claude Code lets you run coding agent sessions from any device, while quietly logging hundreds of them on your machine
While most users assume their AI coding assistants run fleetingly in the cloud, a recent report shows Claude Code and Codex CLI silently amass 775 full agentic sessions—over 3 GB of logs—on personal Macs.
Claude Code’s local logging architecture has revealed a trove of agentic execution data that most developers never see. A self‑audit of the author’s machines uncovered 775 distinct coding sessions stored in hidden directories: roughly 3 GB of JSON‑formatted logs that capture every prompt, model reasoning step, tool invocation, environment response, and error‑retry cycle. The Mac Mini alone held 574 sessions (3.1 GB, 1,103 files), while a MacBook stored 99 Claude Code sessions (652 MB, 316 files) and 79 Codex CLI sessions (2.4 GB, 3,530 files), together accounting for 41 million tokens of real‑world code‑generation activity. According to the original report, the logs are “complete (state → action → reward → next state) tuples,” the exact data format that reinforcement‑learning researchers prize for training next‑generation agents.
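The per‑machine tallies above come straight from the filesystem and are easy to reproduce. Below is a minimal audit sketch in Python; the log locations (`~/.claude/projects` for Claude Code, `~/.codex/sessions` for Codex CLI) are assumptions based on the report's description of hidden home‑directory folders and may differ across tool versions:

```python
from pathlib import Path


def audit_sessions(root: Path, pattern: str = "*.jsonl") -> tuple[int, int]:
    """Count session log files under `root` and sum their sizes in bytes.

    Returns (0, 0) if the directory does not exist, so it is safe to run
    against machines that have only one of the two tools installed.
    """
    files = list(root.rglob(pattern)) if root.exists() else []
    return len(files), sum(f.stat().st_size for f in files)


if __name__ == "__main__":
    # Assumed default log locations; adjust for your installation.
    locations = [
        ("Claude Code", Path.home() / ".claude" / "projects"),
        ("Codex CLI", Path.home() / ".codex" / "sessions"),
    ]
    for name, root in locations:
        count, size = audit_sessions(root)
        print(f"{name}: {count} session files, {size / 1e9:.2f} GB")
```

Running it on each machine and summing the counts would reproduce the kind of totals the report cites, though the exact file layout inside those directories is version‑dependent.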
Anthropic’s recent “Remote Control” preview extends Claude Code beyond the terminal, letting users attach a smartphone, tablet, or web browser to a session that continues to run on the local machine. The connection is brokered through claude.ai/code or the Claude iOS/Android apps, and the session persists as long as the host computer remains online; if the network drops, the session attempts to reconnect automatically but is terminated after roughly ten minutes offline. The capability is currently limited to Max‑tier subscribers, with Pro users slated to receive access next, and it differs from the cloud‑based Claude Code offering that has been running tasks in Anthropic’s data centers since last year. By keeping the execution environment on the user’s hardware, the remote‑control feature preserves access to local files, servers, and project configurations while avoiding any data transfer to Anthropic’s cloud, a point the company emphasized in its research‑preview announcement.
The sheer volume of locally stored agentic logs raises a strategic question for the AI‑coding market: how much of this “missing training signal” is being silently discarded? The report notes that Claude Code automatically purges logs after 30 days, but users can extend the retention window by editing ~/.claude/settings.json and setting cleanupPeriodDays to a large value (e.g., 36,500 days). Extrapolated to thousands of developers, the 775 sessions observed would translate into “hundreds of billions of tokens of real agentic trajectory data,” a dataset with no public equivalent in the way The Pile serves for raw web text. Anthropic’s internal use of this data for model improvement is hinted at (“Big labs use this data internally”), suggesting a competitive advantage for the company if it can leverage these high‑fidelity interaction traces without exposing them to external scrutiny.
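The retention override described above amounts to a one‑line JSON edit. The sketch below automates it while preserving any other keys in the file; the settings path and the cleanupPeriodDays key come from the report, but the helper name and its default value are illustrative:

```python
import json
from pathlib import Path


def set_retention(settings_path: Path, days: int = 36_500) -> dict:
    """Set cleanupPeriodDays in Claude Code's settings.json.

    Existing settings are loaded and preserved; only the retention key
    is added or overwritten. Returns the resulting settings dict.
    """
    settings = json.loads(settings_path.read_text()) if settings_path.exists() else {}
    settings["cleanupPeriodDays"] = days
    settings_path.write_text(json.dumps(settings, indent=2))
    return settings


if __name__ == "__main__":
    path = Path.home() / ".claude" / "settings.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    print(set_retention(path))
```

Whether very large values are honored indefinitely is not specified in the report, so treat this as extending, not guaranteeing, local log retention.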
User‑experience reports also underscore the importance of Claude Code’s “Plan Mode,” a workflow that forces the model to read the codebase, ask clarifying questions, and present a step‑by‑step plan before making any changes. One developer tracked more than 30 sessions and found that without Plan Mode, roughly 40% of attempts ended in a full undo and restart, often after the model introduced breaking changes across multiple files. By contrast, enabling Plan Mode reduced the redo rate to near zero, as the model’s preliminary analysis prevented mistaken assumptions about project structure. The workflow, triggered by pressing Shift+Tab twice or issuing the /plan command, has become the author’s default for any non‑trivial task, a practical mitigation for the brittleness that can arise in autonomous code generation.
Anthropic’s broader product push includes automated code reviews and tighter GitHub integration, signaling an ambition to make Claude Code a full‑stack development assistant. The company is simultaneously raising a $10 billion round at a $350 billion valuation, according to the remote‑control announcement, positioning itself to scale both the service and the underlying data pipeline. As the ecosystem watches whether Claude Code can sustain its recent momentum—documented in coverage by ZDNet, The Verge, and VentureBeat—the hidden logs on developers’ machines may become a pivotal asset, offering a rare glimpse into the long‑horizon planning and error‑recovery signals that next‑generation coding agents will need to master.
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.