Evolving landscape of AI in mental health support
Poor mental health is on the rise in the United Kingdom, with the cost-of-living crisis, long waiting lists for support, and media overload all contributing to a worsening picture. With the arrival of Artificial Intelligence (AI) chatbots such as ChatGPT, recent findings by Mental Health UK show that more than one in three adults (37%) in the UK are turning to AI chatbots to support their mental health or wellbeing.
Generative AI models, the best known of which are large language models (LLMs), are increasingly being used in all walks of life, including UK healthcare. For example, the 10 Year Health Plan for England aims to “make the NHS the most AI-enabled care system in the world”, with AI already being used for low-risk tasks such as administration. However, this technology is still early in its development and is far from flawless.
The term “artificial intelligence” is misleading, as these systems are not truly “intelligent” or capable of independent thought. Generative AI models work through a complex series of mathematical calculations, predicting the most likely word to come next in a sentence. Because they are trained on large amounts of data, they can reproduce whatever that data contains, which means they can be misleading, inaccurate, or even reinforce the stigmas and biases present in their training material.
This means that, for complex and sensitive subjects such as mental health, AI chatbots can be dangerous tools capable of misinformation or direct harm.
However, there are some potential benefits to using AI in mental health care, provided you keep certain things in mind.