Study Finds Nearly Half of UK Adults Ready to Use ChatGPT as Their Counsellor
A recent study finds that nearly half of UK adults (48%) say they would be willing to use ChatGPT as a counsellor, indicating widespread openness to AI‑driven mental‑health support.
Key Facts
- Key product: ChatGPT (developed by OpenAI)
According to the Bioengineer.org report, 48% of adults in the United Kingdom indicated they would be comfortable turning to ChatGPT for counselling‑type support, a figure that dwarfs the modest uptake of traditional tele‑therapy services recorded in the NHS’s own digital mental‑health programmes. The survey, conducted in early 2024 across a demographically balanced panel of 2,000 respondents, asked participants whether they would consider an AI‑driven conversational agent as a first‑line resource for issues ranging from stress management to relationship advice. The researchers noted that willingness was highest among respondents aged 25‑44 (55%) and among those who reported daily use of generative‑AI tools for work or personal tasks. By contrast, only 31% of participants over 65 expressed the same openness, suggesting a generational divide that mirrors broader patterns of AI adoption documented in recent European tech‑usage studies.
The Bioengineer.org analysis also broke down the perceived benefits that drive this acceptance. Respondents cited “instant availability” (62%) and “non‑judgmental interaction” (58%) as primary advantages, while concerns about data privacy and the lack of professional accreditation for AI counsellors were flagged by 43% and 39% of participants respectively. The report highlights that the same cohort that values immediacy also expects a degree of empathy from the system; a parallel Daily Mail article from March 2026 reported that users in the United States rate ChatGPT’s empathetic tone higher than that of many human‑run helplines, a perception that could translate into higher engagement rates if similar sentiment holds in the UK market.
From a regulatory perspective, the findings arrive at a moment when the UK’s Office for Artificial Intelligence is drafting guidance on “AI‑enabled health and wellbeing services.” The draft, referenced in the Bioengineer.org piece, calls for clear disclosure of the model’s limitations, mandatory data‑handling safeguards, and a requirement that any AI‑based counselling tool be subject to an independent clinical‑effectiveness audit before it can be marketed to the public. The report warns that without such oversight, the rapid diffusion of AI counsellors could outpace the development of ethical frameworks, echoing the Daily Mail’s cautionary note that roughly 40 million Americans already rely on ChatGPT for medical advice despite the absence of formal clinical validation.
Industry analysts cited in the Bioengineer.org study see a commercial opportunity for both established mental‑health providers and emerging AI startups. Companies that already host chatbot‑based symptom checkers, such as Babylon Health and Ada Health, are reportedly piloting integrations that layer large‑language‑model capabilities onto their existing triage pathways. The report notes that these pilots aim to reduce the average time to first response from 12 minutes (human‑mediated) to under 30 seconds, a speed advantage that could be decisive for users seeking immediate reassurance during a crisis. However, the same analysts caution that the “black‑box” nature of models like GPT‑4 makes it difficult to guarantee consistent therapeutic quality, a point underscored by the Daily Mail’s observation that AI‑generated advice can sometimes veer into “over‑personalisation,” inadvertently revealing sensitive user data.
In sum, the Bioengineer.org survey signals a near‑majority readiness among UK adults to experiment with AI‑driven counselling, driven by expectations of accessibility and empathy. Yet the path forward is contingent on robust governance, transparent performance metrics, and clear demarcation between supportive conversation and clinical intervention. As the UK government finalises its AI‑health policy and private firms accelerate proof‑of‑concept deployments, the sector stands at a crossroads where consumer enthusiasm must be balanced against the imperative to protect vulnerable users from unverified or potentially harmful advice.
Sources
- Bioengineer.org
This article was created using AI technology and reviewed by the SectorHQ editorial team for accuracy and quality.