
OpenAI scientist trusts AI for experiments but says it can't design complex systems

Published by
SectorHQ Editorial
Once writing every line himself, OpenAI chief scientist Jakub Pachocki now lets AI compress week-long experiments into a single weekend, yet he still doubts AI can design complex systems, The Decoder reports.

Key Facts

  • Key company: OpenAI

OpenAI’s chief scientist Jakub Pachocki says the newest generation of coding models has already cut the turnaround time for routine experiments from a full work week to a single weekend, a shift he described to MIT Technology Review as “hard to argue with.” He now relies on tools such as Codex to scaffold boilerplate code, run hyperparameter sweeps, and generate test harnesses, allowing him to focus on hypothesis formulation and result interpretation. The productivity boost, however, is bounded by what Pachocki calls “the level where I would just let it take the reins and design the whole thing,” indicating that, in his view, the models still lack the creative reasoning required for end-to-end system architecture (The Decoder).

The distinction matters because OpenAI is betting on a longer-term vision of autonomous research agents. In a rollout announced last fall, the company introduced an “AI research intern” that can be delegated tasks that would otherwise take a person several days; the intern is slated to ship in September, according to The Decoder. By March 2028, OpenAI plans to field a full “AI Researcher,” a multi-agent system capable of tackling problems across mathematics, physics, biology, chemistry, and even economics and politics. Pachocki stresses that humans will still set goals and supervise outcomes, but he envisions a future “where you kind of have a whole research lab in a data center” (The Decoder).

Pachocki warns that scaling such systems could concentrate unprecedented power. If a handful of engineers can command data-center-scale research that replaces massive human organizations, the balance of influence between tech firms and broader society could shift dramatically. He frames this as a double-edged sword: the efficiency gains are clear, yet the governance challenges are “extremely concentrated” and “in some ways unprecedented” (The Decoder). The implication for competitors is stark: any firm that can harness autonomous agents may outpace rivals in both speed of discovery and cost efficiency.

While Pachocki’s optimism about the research intern is tempered by his skepticism toward fully autonomous design, the broader OpenAI product roadmap reflects a parallel push toward user‑facing integration. Recent reports from Reuters and The Verge note that OpenAI is developing a desktop “superapp” that would bundle ChatGPT, Codex, and other AI services into a single interface, streamlining workflows for developers and non‑technical users alike. Although the superapp is not directly tied to the research intern, both initiatives underscore the company’s strategy to embed AI deeper into everyday tooling, reducing friction between idea generation and implementation.

The practical takeaway for enterprises is that AI-assisted coding is already reliable enough for repetitive, well-defined tasks, but the technology is not yet a substitute for human ingenuity in system design. As Pachocki puts it, “AI tools aren’t a silver bullet for every developer; how useful they are depends on the person and the task” (The Decoder). Companies that adopt these tools now can expect faster prototyping cycles, yet they should retain human oversight for architecture decisions until the promised multi-agent researcher materializes in the next few years.

