Google Study Finds AI Is Reshaping Human Writing Style and Substance

Published by
SectorHQ Editorial

Photo by Mitchell Luo (unsplash.com/@mitchel3uo) on Unsplash

100 participants. That’s how many people researchers from Google and West Coast universities studied to find that AI‑generated text makes human writing “more bland,” altering its voice, tone, and meaning, NBC News reports.

Key Facts

  • Key company: Google

The researchers, a joint team from Google DeepMind and several West‑Coast universities, recruited 100 volunteers to answer a classic prompt—“Does money lead to happiness?”—and then measured how the degree of large‑language‑model (LLM) assistance altered both the content and the cadence of the essays. Participants were split into three groups: heavy AI users (who generated more than 40 % of their text with an LLM), light users (who employed AI only for fact‑checking or minor edits), and non‑users (who avoided generative AI altogether). The study evaluated three of the most widely deployed models in 2025—Anthropic’s Claude 3.5 Haiku, OpenAI’s GPT‑5 Mini, and Google’s Gemini 2.5 Flash—under identical task conditions, according to the NBC News report [1].

Statistical analysis showed that heavy AI users produced neutral‑tone responses 69 % more often than the other two cohorts. Where non‑users and light users tended to write essays that were overtly positive or negative about the money‑happiness link, the AI‑generated texts gravitated toward a bland, balanced phrasing that stripped away the writers’ original affect. “The LLMs are pushing the essays away from anything that a human would have ever written,” said Natasha Jaques, a lead author and University of Washington computer‑science professor, emphasizing the substantive shift in argumentation [1].

Beyond meaning, the study quantified stylistic degradation. Heavy‑use essays scored lower on metrics of personal voice and creativity, and higher on formality, as measured by automated linguistic classifiers. Participants themselves reported that their final drafts felt “significantly less creative and less in their own voice,” yet their satisfaction ratings with the finished product were statistically indistinguishable from those of the light‑use and non‑use groups. This paradox—higher content blandness paired with unchanged user satisfaction—raises concerns about the long‑term impact of AI on writing habits, the authors note [1].
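The report does not detail how the study’s automated linguistic classifiers work. As a rough illustration only (not the researchers’ method), a minimal lexicon‑based scorer can label an essay positive, negative, or neutral by comparing counts of sentiment‑bearing words; the word lists and the neutrality threshold below are invented for the sketch.

```python
# Toy lexicon-based tone scorer -- an illustrative stand-in for the
# automated classifiers mentioned in the study, NOT the actual method.
# The word lists and the neutral_margin threshold are invented.
POSITIVE = {"happy", "joy", "freedom", "security", "fulfilling"}
NEGATIVE = {"stress", "greed", "anxiety", "empty", "misery"}

def tone_label(text: str, neutral_margin: float = 0.2) -> str:
    """Label text positive, negative, or neutral by lexicon counts."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    if total == 0:
        return "neutral"          # no sentiment-bearing words at all
    score = (pos - neg) / total   # polarity in [-1, 1]
    if abs(score) <= neutral_margin:
        return "neutral"          # balanced, hedged phrasing
    return "positive" if score > 0 else "negative"

# An opinionated essay versus a blander, balanced one:
opinionated = "Money brings freedom and joy, a fulfilling life."
hedged = "Money can bring freedom and security, but also stress and greed."
labels = (tone_label(opinionated), tone_label(hedged))
```

Running the classifier over each cohort’s essays and comparing the share of “neutral” labels would yield a cohort‑level comparison analogous to the 69 % figure reported above, though the study’s real classifiers are presumably far more sophisticated.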

Jaques, who also serves as a senior research scientist at Google DeepMind, argued that an ideal LLM should act as a time‑saving assistant that preserves the author’s stylistic fingerprint. “An ideal LLM should write the essay that you would have written and just save you time. It’s not doing that at all. It’s writing a very different essay,” she said, underscoring the gap between current model behavior and user expectations [1]. The peer‑reviewed findings have been accepted to an upcoming workshop at a leading AI conference, suggesting the issue will receive further academic scrutiny.

The broader AI ecosystem is already grappling with similar “blandification” effects. Recent experiments at Google, such as AI‑only search result modes and headline‑replacing algorithms reported by Ars Technica and The Verge, illustrate a trend toward homogenized, model‑driven content across platforms. While those initiatives aim to streamline information delivery, the Google–university study provides early empirical evidence that heavy reliance on generative models can erode the distinctiveness of human expression in written discourse. As AI tools become more embedded in everyday workflows, the trade‑off between efficiency and individuality may become a defining challenge for both developers and users.

Sources

Primary source

[1] NBC News report on the study by Google DeepMind and West Coast university researchers.

Reporting based on verified sources and public filings. Sector HQ editorial standards require multi-source attribution.
