Artificial Intelligence and mental health

AI chatbot services such as ChatGPT and Gemini have the potential to bring access to mental health advice to more people than ever before, but they aren’t without risks. On this page, we explore the benefits and risks for mental health and wellbeing, alongside how to use AI safely.

Evolving landscape of AI in mental health support

Poor mental health is on the rise, with the cost-of-living crisis, long waiting lists for support, and media overload all contributing to a worsening picture in the United Kingdom. Following the introduction of Artificial Intelligence (AI) chatbots such as ChatGPT, recent findings by Mental Health UK show that more than one in three UK adults (37%) are turning to AI chatbots to support their mental health or wellbeing.

Generative AI models, such as large language models (LLMs), are increasingly being used in all walks of life, including UK healthcare. For example, the 10 Year Health Plan for England aims to “make the NHS the most AI-enabled care system in the world”, with AI already being used for low-risk tasks such as administration. However, this technology is still early in its development and isn’t flawless.

The term “artificial intelligence” is misleading, as these systems are not truly “intelligent” or capable of independent thought. Generative AI models work through a complex series of mathematical calculations, predicting the most likely word to occur next in a sentence. They are trained on large amounts of data, and because of this they can be misleading, inaccurate, or even reinforce stigmas or biases present in the data used to train them.
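
To illustrate this in miniature, here is a toy Python sketch of next-word prediction (not a real LLM, and enormously simplified): it picks the next word purely from counts in its “training” text, with no understanding of what the words mean.

    # A toy next-word predictor: count which word follows which in some
    # "training" text, then always predict the most frequent follower.
    # Real LLMs work at a vastly greater scale, but the principle is the
    # same: output reflects patterns (and biases) in the training data.
    from collections import Counter, defaultdict

    training_text = "i feel anxious today . i feel low today . i feel better now ."
    words = training_text.split()

    following = defaultdict(Counter)
    for current_word, next_word in zip(words, words[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        counts = following.get(word)
        return counts.most_common(1)[0][0] if counts else "."

    print(predict_next("i"))     # -> "feel" (it follows "i" every time)
    print(predict_next("feel"))  # -> "anxious" (ties broken by first occurrence)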

This means that, for complex and sensitive subjects such as mental health, AI chatbots can be dangerous tools capable of misinformation or direct harm.

However, there are some potential benefits to AI usage in mental health care, provided you keep certain things in mind.

AI terms explained

  • Generative AI Models

    Generative AI models are a type of artificial intelligence that is able to create new content (such as text, images, and video) by learning patterns from massive amounts of data.

  • Large Language Models (LLMs)

    Large Language Models are a type of generative AI that specialises in language. They learn by analysing large amounts of text and can generate humanlike responses.

  • AI Tools and Apps

    AI tools and apps are services built on LLMs that can generate text, images, or video, or act as virtual assistants. Some are specialised, while others can generate multiple types of output.

  • AI Chatbots

    AI chatbots are a common type of AI tool. They simulate human conversation by using LLMs to analyse and respond to queries.

Can AI tools help with mental health?

While AI chatbots are still in their infancy and there is limited evidence of how effective and safe these tools are long-term, some promising early findings show that, when used regularly, they can help to lower symptoms of mild anxiety and depression. They can also improve access to support for people who might otherwise face barriers.

Benefits may include:

Access to support

The main benefit of using AI tools for mental health is their low barrier to entry. With people sometimes waiting 18 months for mental health treatment, AI tools offer immediate, 24/7 access without barriers such as wait times, cost (on their free tiers), language, and location.

Comfort

Our research also found that they are being used by people who experience discomfort discussing mental health with friends or family, with men (42%) more likely to use chatbots than women (33%). This is significant given that men are traditionally less likely to seek help for their mental health.

Coaching

AI tools might be able to coach you in developing resilience and skills such as self-confidence and emotional regulation, or help you plan how to discuss your mental health experiences with a professional. They can also break down complex concepts or provide clear information on how to take effective steps to manage your wellbeing.

Monitoring moods

AI tools can be used to monitor moods, symptoms, and trends over time, providing helpful insights into your mental wellbeing which can then be used in sessions with a therapist. For example, AI could help you to produce a simple mood tracker, as sketched below.
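
For instance, here is a minimal sketch (in Python) of the kind of simple mood tracker an AI chatbot could help you put together. The file name and the 1–10 scale are illustrative assumptions, not a clinical tool.

    # A minimal daily mood tracker: log a 1-10 mood score with a note,
    # then summarise the trend. Illustrative sketch only.
    import csv
    from datetime import date
    from pathlib import Path

    LOG_FILE = Path("mood_log.csv")  # hypothetical file name

    def log_mood(score, note=""):
        """Append today's mood (1 = very low, 10 = very good) to the log."""
        with LOG_FILE.open("a", newline="") as f:
            csv.writer(f).writerow([date.today().isoformat(), score, note])

    def average_mood():
        """Return the average logged mood score, or None if nothing is logged."""
        if not LOG_FILE.exists():
            return None
        with LOG_FILE.open() as f:
            scores = [int(row[1]) for row in csv.reader(f) if row]
        return sum(scores) / len(scores) if scores else None

    log_mood(6, "slept badly, but a walk helped")
    print(f"Average mood so far: {average_mood():.1f}")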

Suggested reading

When used correctly, AI can act like a search engine, helping you find suggested reading on topics from a wide range of reputable sources, such as the NHS or mental health charities.

What are the risks in using AI for mental health support?

While there are some clear benefits to using AI chatbots for mental health support, there are also risks. AI remains in its infancy, and safeguards need to be implemented in this rapidly evolving landscape.

  • AI can be inaccurate: Because AI is designed to always provide an answer, it can sometimes make things up, a behaviour known as “hallucinating”. It might provide an answer from a less than reputable source, or tell us what we want to hear, reinforcing unhealthy thoughts and behaviours. This makes it difficult to differentiate between helpful and unhelpful, or even harmful, content.
  • AI isn’t regulated or confidential: Standard AI chatbots such as ChatGPT aren’t currently regulated, meaning they aren’t subject to the same safeguarding as licensed therapists. Among other things, this means that anything you say to AI may be recorded and shared by the AI company and even released into the public domain.
  • AI is not a replacement for therapy: While AI can be helpful in supplementing your wellbeing by acting as a coach, it is not a replacement for seeking professional help, and it cannot tailor support to your individual history and needs in the way a therapist can.
  • AI cannot respond to crises: In some documented cases, AI has provided harmful information around self-harm or suicide, and has worsened conditions. Our polling found that, among those who said they used AI for mental health support, 11% said AI triggered or worsened symptoms of psychosis, 11% reported receiving harmful information around suicide, 9% reported AI triggering self-harm or suicidal thoughts, and 11% said it made them feel more anxious or depressed. For information on crisis options for mental health, see ‘What should I do if AI says something triggering?’ on this page.
  • AI can make you dependent: AI chatbots are commercial tools, often offering a free trial or free tier with usage limits, but locking advanced features and full usage behind a paywall. This means that, unlike therapy, AI tools are designed to foster dependency and keep you returning or investing further in them. Furthermore, by leaning into the immediacy of AI support, you may be inclined to avoid human contact, which could be detrimental to your overall wellbeing, as peer support is important.
  • AI has shortcomings in accessibility and inclusivity: While AI services are available in many languages, these translations may miss subtle differences in meaning. They also haven’t been designed for communities who may need connected (also known as intersectional) approaches to care, such as people from LGBTQIA+, minority ethnic, or religious backgrounds.
  • AI is not an authority, expert, or advisor: AI is not an expert in mental health, despite its tendency to present information as factual. AI chatbots also prioritise flattering users rather than challenging unhelpful thoughts and ideas as a therapist would, and they are unable to consider individual differences or what makes us unique in how we see the world. There is currently no professional oversight, clinical governance, or legal obligation for AI companies to ensure the safety of their advice. However, the NHS is launching a plan to improve this in the future, and the UK Parliament has published recommendations on ethical and regulatory considerations for using AI in healthcare.
  • AI excludes people without digital access: As a digital service, AI is only available to those with access to a device and an internet connection. There are also usability barriers for those who struggle with technology.

How can I use AI safely for mental health support?

AI is here to stay, and more people than ever are using chatbots for their mental health and wellbeing. Therefore, it’s important to understand how to use AI chatbots effectively and safely. Below, we explore some of the key ways you can use AI safely.

  • Evaluate all information: Think critically about everything AI tells you, and always check its claims against reputable sources such as the NHS, other healthcare websites, and mental health charities.
  • Use dedicated mental health AI chatbots: Dedicated mental health chatbots are generally safer. They are usually trained on mental health research and data and are recommended by psychologists. While not a replacement for therapy, these services offer an experience closer to professional mental health support. For more information, see ‘Are there dedicated AI chatbots for mental health?’ on this page.
  • Use to supplement, not replace, mental health care: AI can help you think through your problems and can provide some guidance, but it’s more useful as a coach than a therapist. While AI might seem like a person, it’s important to remember that it is designed to simulate or approximate real conversation.
  • Be clear in your prompts: AI works best when you give it clear, concise prompts, including limits on the sources it should draw from. For example, you can say: “Provide information on how to support someone experiencing mild anxiety, using only information from reputable sources such as the NHS and UK mental health charities. Include links to the sources used in your response.” You can also include rules such as “don’t encourage dangerous behaviours” – see the sketch after this list.
  • Don’t give personal information: Remember that AI chatbots do not work under the same confidentiality rules as therapists. Your data may be used to improve the service or even shared in the future. Therefore, you should never give personal information.
  • Don’t rely on AI alone: Don’t use AI instead of seeking help from a professional, don’t use it to self-diagnose a condition, and don’t rely on it for crisis support. AI cannot accurately assess or respond to worsening mental health, crisis situations, or suicidal thoughts – reach out to your GP first.
  • Contact your GP if you feel you are becoming dependent: Pay attention to your thoughts and feelings when you use AI. For example, how do you feel if you haven’t used it in a few days? Do you feel anxious, on edge, or lonely? If you think you might be becoming dependent on AI, speak to someone about your concerns, such as your GP.
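
To make the prompting advice above concrete, here is a sketch of how the same rules could be supplied programmatically, assuming the OpenAI Python client; the model name and the wording of the rules are illustrative examples, not recommendations.

    # Illustrative sketch: applying the prompt rules above via an API.
    # Assumes the OpenAI Python client and an example model name.
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # State sources, limits, and safety rules up front, as suggested above.
    rules = (
        "Use only information from reputable sources such as the NHS and "
        "UK mental health charities, and include links to the sources used. "
        "Don't encourage dangerous behaviours. If the question involves a "
        "crisis, direct the user to professional help instead of answering."
    )

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # example model name
        messages=[
            {"role": "system", "content": rules},
            {"role": "user", "content": "How can I support someone experiencing mild anxiety?"},
        ],
    )
    print(response.choices[0].message.content)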

Should I use AI for mental health support?

This is a personal decision.

If you think AI chatbots might be helpful in supporting your mental health and wellbeing, you should consider the advice above. AI can be helpful when used alongside professional help and the support of people you trust, but it should not be used on its own.

If you’re using AI to look for mental health information, make sure the information it provides is based on reputable sources. Reputable sources are trustworthy, accurate and evidence-based organisations, such as the NHS and established mental health charities including Mental Health UK and Rethink Mental Illness.

Frequently Asked Questions

Are there dedicated AI chatbots for mental health?

Yes. There are dedicated AI chatbots for mental health that often offer safer, evidence-based support, but they should still be used with caution.

Please note that this list is provided for information purposes only, and does not constitute an endorsement, recommendation, or advice. Please take the necessary precautions when using mental health apps, and check that their use of AI has been clinically validated.

Dedicated mental health chatbots include:

  • Wysa: Adopted by the NHS, Wysa is a conversational AI that combines evidence-based techniques such as cognitive behavioural therapy (CBT) and mindfulness. It is considered highly secure, anonymous, and has built-in safety measures to detect crises and redirect to human help.
  • Noah AI: Noah AI was designed by psychologists to offer 24/7 support, and allows users to export therapy summaries.
  • Woebot: Developed by psychologists, Woebot offers CBT exercises and uses short, daily, structured conversations to help manage anxiety and depression.

Remember that many AI services, including some of the above, offer free and paid tiers and include restrictions on usage unless you pay for a monthly subscription.

You should only subscribe if it is within your financial means to do so and always weigh up the benefit against the cost.

Can AI diagnose a mental health condition?

While AI can be useful for researching diagnostic criteria, it should not be used as a replacement for professional help. AI has been found to be prone to significant overdiagnosis of mental health conditions.

Only a qualified professional, such as a psychiatrist or, in some cases, a psychologist, can diagnose mental health conditions. If you think you might have a mental health condition, you can read our conversation guide on talking to your GP about mental health.

How accurate are AI chatbots?

AI chatbots often contain significant inaccuracies. For example, a BBC study found that 51% of AI answers about its news stories contained significant issues, and 19% included factual errors. AI chatbots have also been found to provide harmful information on self-harm and suicide.

What should I do if an AI chatbot gives advice that feels wrong?

Always evaluate AI information critically and double-check it against reputable sources. If the information provided by AI feels wrong, stop using it immediately and talk to someone you trust, such as a parent or guardian, a friend, or a colleague.

What should I do if AI says something triggering?

If AI says something triggering or provides information that could be harmful for your mental health, such as detailed information on self-harm or suicide, stop using it immediately and reach out to someone you trust, including family, friends, or crisis lines.

In England, Scotland, and Wales, you can get urgent help from 111 online or call 111 and select the mental health option. In Northern Ireland, you can get urgent help via Lifeline on 0808 808 8000. You can also call 116 123 to talk to Samaritans, or email: jo@samaritans.org for a reply within 24 hours.

More information on various crisis lines can be found on our Get Urgent Help page.

How many people use AI for mental health?

Our recent research shows that chatbots are increasingly filling gaps left by overstretched services. Usage peaks at 64% among 25–34-year-olds, and even 15% of those aged 55 and over report having turned to AI chatbots for help. However, almost 2 in 5 UK adults (37%) say they wouldn’t consider using AI to support their mental health in the future. More information on our findings can be found in our article.

Are AI chatbots filling the growing gap in mental health support?

Our research has shown that more than one in three adults are using AI chatbots for mental health support. Find out why we are calling for urgent safeguards.