Over one in three UK adults using AI chatbots for mental health support, as charity calls for urgent safeguards

Artificial intelligence is already being widely consulted for mental health support in the UK, with more than one in three adults (37%) [1] saying they’ve used an AI chatbot to support their mental health or wellbeing. The surprisingly rapid pace of adoption has prompted experts to call for safeguards to ensure people receive accurate, safe and appropriate information.

New polling commissioned by Mental Health UK and conducted by Censuswide provides an early snapshot of the issue. It reveals that chatbots are increasingly filling gaps left by overstretched services. Usage peaks at 64% among 25–34-year-olds, and even 15% of those aged 55 and over report having turned to AI chatbots for help [1].

The findings also reveal notable differences in who is turning to AI for support. Men (42%) were more likely to use chatbots than women (33%) [1]. Given that men have traditionally been less likely to seek help for their mental health, this suggests AI could be opening up new ways to reach groups who may otherwise go without support. However, almost two in five UK adults (37%) say they wouldn’t consider using AI to support their mental health in future, showing that trust and safety remain key barriers.

AI chatbots filling gaps and offering connection

The research shows that people are turning to AI tools for both accessibility and anonymity.

Reasons given by those who had used AI chatbots included:

  • Ease of access (41%)
  • Long waiting times for mental health support (24%)
  • Discomfort discussing mental health with friends or family (24%)

Among those who had used chatbots:

  • Two-thirds (66%) found them beneficial [2]
  • Over one in four (27%) said they felt less alone
  • 24% said the chatbot helped them manage difficult feelings
  • 20% said it helped them avoid a potential mental health crisis
  • 21% said chatbots provided useful information around managing suicidal thoughts, including signposting to helplines

Most people reported using general-purpose chatbots such as ChatGPT, Claude or Meta AI (66%), rather than mental health-specific platforms like Wysa or Woebot (29%). This raises questions about whether vulnerable users are receiving safe, evidence-based support.

Risks that must be tackled urgently

While many users found AI tools helpful, the polling also uncovered serious risks. Among those who had used chatbots for mental health support:

  • 11% said chatbot use triggered or worsened symptoms of psychosis, such as hallucinations or delusions
  • 11% reported receiving harmful information around suicide
  • 9% said chatbot use had triggered self-harm or suicidal thoughts
  • 11% said it made them feel more anxious or depressed

Common concerns included:

  • Lack of human emotional connection (40%)
  • Inaccurate or harmful advice (29%)
  • Data privacy worries (29%)
  • Inability to understand complex mental health needs (27%)

Mental Health UK calls for action on safe and ethical AI

In response, Mental Health UK has published a new set of six guiding principles for the responsible use of technology in mental health and wellbeing. The charity is calling for urgent collaboration between developers, policymakers and regulators to ensure AI tools are safe, ethical and effective.

“This data reveals the huge extent to which people are turning to AI to help manage their mental health, often because services are overstretched.

“AI could soon be a lifeline for many people, but with general-purpose chatbots being used far more than those designed specifically for mental health, we risk exposing vulnerable people to serious harm.

“The pace of change has been phenomenal, but we must move just as fast to put safeguards in place to ensure AI supports people’s wellbeing. If we avoid the mistakes of the past and develop technology that does no harm, the advancement of AI could be a game-changer, but we must not make things worse. A practical example of this is ensuring AI systems draw information only from reputable sources, such as the NHS and trusted mental health charities.

“As we’ve seen tragically in some well-documented cases, there is a crucial difference between someone seeking support from a reputable website during a potential mental health crisis and interacting with a chatbot that may be drawing on information from an unreliable source or even encouraging the user to take harmful action. In such cases, AI can act as a kind of quasi-therapist, offering the user validation but without the appropriate safeguards in place.

“That’s why we’ve launched our initial Principles for the Responsible Use of Technology in Mental Health and Wellbeing, to help guide innovation that puts people’s safety first. And we hope these are a starting point for a much-needed public debate about how technology can be used responsibly to support mental health.

“We’re urging policymakers, developers and regulators to establish safety standards, ethical oversight and better integration of AI tools into the mental health system so people can trust they have somewhere safe to turn. And we must never lose sight of the human connection that’s at the heart of good mental health care.

“Doing so will not only protect people but also build trust in AI, helping to break down the barriers that still prevent some from using it. This is crucial because, as this poll indicates, AI has the potential to be a transformational tool in providing support to people who have traditionally found it harder to reach out for help when they need it.”

– Brian Dow, Chief Executive of Mental Health UK.

Starting principles for the responsible use of technology in mental health and wellbeing

Technology has the power to transform mental health and wellbeing – expanding access to support, advancing research and treatment, and helping services work more effectively. But these innovations must always serve people first. They must be developed and used with care, guided by principles that protect safety, build trust, and strengthen human connection.

1. Be shaped by lived experience

Lasting innovation begins with the voices of those it aims to support. People with lived experience must shape every stage of design, testing and delivery. Their insights, hopes and concerns must remain at the heart of development, ensuring technology reflects real needs and priorities.

2. Prioritise safety and effectiveness

Every new tool must be independently tested and proven to be safe and effective before being introduced into practice. Technology should only be adopted when there is clear evidence of its impact, an understanding of who it works best for, and confidence that it will do no harm. Where technology has played a part in causing harm, its owners and developers must take full part in reviewing what went wrong and use what they learn to help prevent future harm.

3. Coproduce clear guidance for safe use

People using mental health support should have clear, supportive information that helps them use technology safely and with confidence. This depends on technology providers, mental health professionals and people with lived experience of mental illness working together as equal partners to design and shape that support. Co-produced information, grounded in lived experience and professional expertise, will help people make informed choices, reduce risk and ensure technology is used in ways that genuinely support wellbeing.

4. Earn and maintain trust through transparency

Trust is essential to the responsible use of technology. People must be confident that their data is secure, private and handled ethically. Clear, accessible information about how data is collected, stored and shared should always be provided, enabling people to make informed choices.

5. Champion equity, access and inclusion

Technology should break down barriers, not build new ones. Everyone must have fair access to support, regardless of their confidence or ability to use digital tools. Accessibility, training and alternative options must be built in from the start so that no one is left behind.

6. Enhance, not replace, human connection

Digital tools should complement, not substitute, the relationships at the heart of mental health care. People must always have the option to speak with a professional and receive care grounded in empathy, understanding and personal connection. Technology should enable more meaningful support, never reduce it.

These principles were developed in consultation with Experts by Experience and refined with the support of ChatGPT, an AI language model by OpenAI, to ensure clarity and precision.

About the polling

  • Polling conducted by Censuswide among a nationally representative sample of 2,000 UK adults (aged 16+) between 27 October and 3 November 2025.
  • Censuswide abides by and employs members of the Market Research Society (MRS), follows the ESOMAR principles, and is a member of the British Polling Council.

[1] Combining responses ‘Yes, regularly’ and ‘Yes, occasionally’
[2] Combining responses ‘Very beneficial’ and ‘Somewhat beneficial’

For more information or interview requests, please contact the Mental Health UK press office via:

[email protected] or call 020 7840 3128
