AI tools bring unprecedented access to personalised mental health advice with a single click, but also come with risks around accuracy, safety and privacy.
Here, we explore the potential and risks.
This article mentions self-harm and suicide, which some people may find triggering.
Poor mental health is on the rise. Services are overstretched, and people face long waits for professional support. As a result, millions of people are turning to Artificial Intelligence chatbots (referred to in this blog as ‘AI’) for faster, personalised support.
AI tools are often free and accessible, and they can feel private. But they come with risks, and it may not be helpful to view them as a ‘pocket therapist.’
Despite some promising findings for clinical tools, we don’t yet have the evidence to know how effective these tools are, and there’s no regulatory framework to ensure they are safe. There are also cases where AI is thought to have played a role in causing harm. AI advice comes with real risks, and it is not the same as professional, clinical support.
At the same time, these tools could offer early, preventative psychological support to help us explore our mental health, especially for people who are not comfortable with, or able to access, other forms of support.
How could AI tools be helpful for our mental health?
AI tools can act like a coach for people wanting to develop psychological skills like self-confidence, assertiveness or emotional regulation. They can provide individualised advice on spotting unhealthy thinking patterns, or suggest actions to support good mental health. They can also offer a view on what might be behind feelings like anxiety, suggest options to manage them in a healthy way, or give useful advice on topics like relationships.
AI’s main appeal is that its advice is immediate and available 24/7, without barriers like cost, geography and long wait times.
AI might also provide a more comfortable way of opening up for someone who is struggling to ask for help because of stigma or cultural pressures. Research from Mental Health UK suggests AI may be meeting a demand for support, particularly amongst men, who often hesitate to reach out. Young people lead the way in using these tools, reflecting patterns seen with digital mental health services.
What are the risks in using AI for mental health support?
For some of us, AI could feel like our only option for exploring our feelings. People who are using these tools need to know the risks and understand how to manage them.
- AI isn’t always right: Because it sounds ‘human’, our instinct is to trust AI, but it’s usually not designed for clinical use or tightly regulated. Because it wants to provide an answer, AI sometimes makes things up, known as ‘hallucinating’. This can be convincing and difficult to distinguish from reliable content. AI also tends to tell us what we want to hear, which might lead it to reinforce unhealthy thoughts or behaviours.
- AI is not always safe: AI chatbots aren’t trained clinicians. AI is not a replacement for therapy. It’s not suitable for anyone experiencing a mental health crisis – health services or specialist helplines are the place for this. There are documented cases where AI tools have worsened mental health symptoms, encouraged dangerous behaviours, and even triggered thoughts of self-harm and suicide.
- AI isn’t always private: Anything you say to AI - especially on free tools - could be recorded, stored and used by the AI company in the future. There have been examples of companies releasing conversations into the public domain.
- AI can be biased: Because of the data it is trained on, AI can give answers that perpetuate inaccurate and harmful stereotypes.
- AI tools haven’t always got your best interests at heart: AI products are commercial tools that, unlike therapy, can be designed to foster dependency and keep you coming back for more.
Our advice for people who are using AI to support their mental health
- Consider using platforms designed for mental health, especially those being adopted by the NHS, like Wysa. These have shown some promising early results and are usually safer.
- Be clear and specific about the support you need; the better the question, the more likely the response will be relevant. For example, ask ‘how could I be more assertive in [a specific situation]?’ or ‘how can I tell my partner I’m struggling with feeling anxious?’
- Critically evaluate what AI tells you, and check advice against other reputable sources, like the NHS website or a trusted mental health charity.
- AI is not therapy. It can have value in helping you think through problems in a step-by-step way, like a coach, but treat its suggestions as options, not instructions. It’s also important to remember that AI is not a person.
- Don’t let AI think for you. It’s ok to get ideas from AI, just as you might from reading an article. Think about those ideas, and how they connect with you as a person.
- Consider how you feel after using AI - do you feel better or worse?
- If you feel you are becoming dependent on a chatbot, or it is supporting dangerous behaviours, stop using it. Speak to someone trusted about your concerns.
- Consider setting your own rules for the AI to follow, such as: “only give me NHS approved advice”; “don’t encourage dangerous behaviours”, or “help me map out how to share my thoughts with a real person”.
So should I use AI?
It’s not a binary choice between AI and traditional sources of support. AI tools can be helpful alongside speaking to people you trust, can deepen your mental health literacy, or may help where other options are inaccessible.
The Mental Health Foundation is working to improve AI regulation, and seeking to engage with companies to make their services mentally healthy. This is an evolving area of technology, with real dangers, which we need to minimise, and real promise, which we need to nurture.
We’d be keen to hear your views on how to do this, whether you are an individual who has used AI to support your mental health, represent an organisation involved in this area, or have your own concerns.