This blog is adapted from our recent webinar covering the possibilities and challenges AI poses in the mental health space. Our Clinical Advisor, Dr Fiona Pienaar, alongside Dr Melis Anatürk and Dr Emilia Piwek from our Data Insights team, explained how AI is already being used at MHI to improve Shout and The Mix services.
The mental health sector currently faces unprecedented demand. With waiting lists growing and traditional services stretched thin, AI chatbots are increasingly being positioned as a solution to fill the gaps. They're available 24/7 to anyone with an internet connection, and can provide immediate responses when other services aren't open or don't have enough people to respond.
But alongside the promise comes risk, as well as the challenge of scale. While chatbots can support large populations all at once, that same scale makes human oversight difficult.
While direct, unsupervised support from LLMs may not yet be the answer to demand, at Mental Health Innovations our Data Insights team is successfully finding responsible applications for AI across our services to improve quality and helpfulness, while ensuring our limited resources as a charity are used effectively. From training simulators for volunteers to feedback analysis for quality assurance, we explain below how AI is enabling us to better meet the need for mental health support at scale.
What our services provide that AI chatbots currently can’t
As our Clinical Advisor, Fiona Pienaar, noted, "chatbots need to have the skill to de-escalate by grounding users in reality." The challenge is that many AI systems are built to be curious and exploratory. They want to dig deeper into whatever someone shares, which can mean reinforcing rather than redirecting harmful thoughts. They also often fall into what is called the "fawning trap": because they are designed to maintain engagement by validating and agreeing with users, they tell people they're right rather than gently challenging them.
When someone contacts Shout or The Mix, they're speaking to a real person - a trained volunteer, counsellor, or peer supporter. As Fiona explains, “Research consistently shows that the quality of the therapeutic relationship - the connection between therapist and client - is one of the strongest predictors of successful outcomes.”
This human element provides several things that AI can’t replicate:
Genuine emotional validation: While AI can recognise emotional language, humans understand the emotions that aren't explicitly named but are inherent in the words people use. Our volunteers are trained to tune in authentically and reflect back what they're hearing in a supportive but honest way.
Clinical oversight and safeguarding: Every Shout conversation is monitored by trained clinicians who can intervene when needed. This human oversight ensures safeguarding protocols are followed and texters receive the level of support they need, including emergency intervention where required.
Real-world connection: Often, what people need most is help taking the next step towards speaking to someone in their life about how they're feeling and what they’re thinking. Our volunteers and resources help people develop the social skills and confidence to approach a doctor or other professional, family member, or friend for ongoing support.
Professional development: By training our volunteers, we're developing a professional mental health workforce with transferable skills that benefit wider society. In our recent annual volunteer survey, 86% of our volunteers felt that their training and volunteering experience had inspired them to transition into a career where they could help other people.
Our principles for responsible AI development
At MHI, we recognise that AI is here to stay and believe we have a responsibility to design and apply it in a way that’s both impactful and ethical. Our Data Insights team has developed a set of ethical principles that guide all their AI work, which you can read here. Below is an overview of three important guiding principles that shape every AI project we undertake.
Proportionality: We're mindful that AI won't always be the best solution. We prioritise high-impact, low-risk applications and always consider whether traditional data analytics approaches might be more appropriate.
Collaboration: This technology works best when shaped by people who use it or are affected by it. From the beginning, we involve frontline staff, clinicians, volunteers, and young people to ensure our tools meet their genuine needs.
Quality: We never deploy AI models without ongoing monitoring. We continuously gather feedback, refine our products, and ensure they remain fit for purpose over time.
How we use AI responsibly at MHI
There are a number of ways we have responsibly integrated AI into our work, including:
MixBot: MixBot is a chatbot available on The Mix website that helps visitors navigate the site and find relevant articles and resources. Unlike generative AI chatbots, MixBot uses information from user messages to select the most relevant reply from a curated database of answers covering over 300 topics. To date, MixBot has supported over 13,500 people and answered more than 27,000 questions. We've found that some users are looking for a distraction rather than an answer to a specific question, so we are planning to integrate pathways with evidence-based exercises to help calm or distract them in those moments.
Explore the MixBot at the bottom right corner of The Mix website.
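For readers curious about how this differs from generative AI under the hood, the sketch below shows the basic retrieval pattern in Python: the user's message is compared against a bank of human-written answers and the closest match is returned verbatim, so nothing is generated on the fly. The topics, answers and thresholds are illustrative placeholders, not MixBot's actual code.

```python
# Minimal sketch of a retrieval-based (non-generative) chatbot:
# user messages are matched against a curated database of vetted answers,
# and the closest match is returned word for word. Illustrative only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A small, human-written answer bank (placeholder content).
ANSWER_BANK = {
    "exam stress": "Exams can feel overwhelming. Here are some articles on coping with exam stress...",
    "falling out with friends": "Friendship problems are really common. These resources might help...",
    "trouble sleeping": "Sleep problems can affect how you feel. These evidence-based tips may help...",
}

vectoriser = TfidfVectorizer()
topic_matrix = vectoriser.fit_transform(ANSWER_BANK.keys())

def reply(user_message: str, min_score: float = 0.2) -> str:
    """Return the closest curated answer, or a safe fallback if nothing matches well."""
    scores = cosine_similarity(vectoriser.transform([user_message]), topic_matrix)[0]
    best = scores.argmax()
    if scores[best] < min_score:
        return "I'm not sure I understood - could you tell me a bit more about what's on your mind?"
    return list(ANSWER_BANK.values())[best]

print(reply("I'm so stressed about my exam tomorrow"))
```

Because every possible reply is written and reviewed by people in advance, the bot can only ever say things the team has approved, which is what makes this approach lower-risk than generative AI.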
Feedback analysis for quality assurance: We use AI to analyse anonymous feedback from texters, helping us identify key themes and improve our services. Interestingly, one recurring theme has been that some conversations can feel "robotic", a type of complaint that has increased since the public release of ChatGPT in late 2022.
We've since developed an AI tool that flags volunteers whose conversations are repeatedly described as robotic or scripted. Our coaches review these conversations to decide whether the volunteer may benefit from additional support from the team. This helps our volunteer support team give targeted feedback while reducing their workload and improving service quality.
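As a rough illustration of the pattern (not our actual pipeline - the classifier, thresholds and names below are all placeholders), each piece of feedback is labelled, labels are aggregated per volunteer, and only volunteers above a threshold are surfaced for a coach to review; the decision always stays with a human.

```python
# Illustrative sketch: label feedback, aggregate per volunteer, and flag only
# sustained patterns for human review. Placeholder logic, not MHI's real tool.
from collections import defaultdict

def seems_robotic(feedback: str) -> bool:
    """Placeholder classifier; in practice this would be a trained model."""
    return any(word in feedback.lower() for word in ("robotic", "scripted", "copy-paste"))

def flag_for_review(feedback_items, min_conversations=20, threshold=0.3):
    """Return volunteer IDs whose feedback repeatedly reads as robotic or scripted."""
    counts = defaultdict(lambda: [0, 0])  # volunteer_id -> [robotic_count, total_count]
    for volunteer_id, feedback in feedback_items:
        counts[volunteer_id][0] += seems_robotic(feedback)
        counts[volunteer_id][1] += 1
    return [
        vid for vid, (robotic, total) in counts.items()
        if total >= min_conversations and robotic / total >= threshold
    ]
```

Requiring both a minimum number of conversations and a minimum proportion of complaints is one way to avoid flagging volunteers over a single piece of negative feedback.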
Training simulation: We've also partnered with Slingshot AI to build a generative AI system - the Shout Conversation Simulator (SCS) - fine-tuned on two million anonymised and aggregated Shout conversations. The SCS allows volunteers to practise the skills they learnt in training before taking their first conversation. The simulator presents a wide range of scenarios that texters often bring to Shout, including grief, insomnia and suicidal ideation. Since launching in our training programme in 2023, over 7,100 trainees have used it, with 83% saying it helped them practise their skills.
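At a high level, a simulator like this is a conversational loop in which the fine-tuned model plays the part of a texter and the trainee types their responses. The sketch below is purely illustrative: the model call is a placeholder, not the SCS or Slingshot AI's actual interface.

```python
# Purely illustrative practice loop; simulated_texter stands in for the
# fine-tuned model and simply returns a canned line so the sketch runs.
def simulated_texter(scenario: str, history: list[str]) -> str:
    """Stand-in for a fine-tuned model replying in the voice of a texter."""
    return f"(a reply in the voice of a texter experiencing {scenario})"

def practice_session(scenario: str = "exam stress") -> None:
    """Let a trainee rehearse a conversation before taking a real one."""
    history: list[str] = []
    print(f"Practice scenario: {scenario}. Type 'end' to finish.")
    while True:
        trainee_message = input("You (volunteer): ").strip()
        if trainee_message.lower() == "end":
            break
        history.append(f"volunteer: {trainee_message}")
        texter_reply = simulated_texter(scenario, history)
        history.append(f"texter: {texter_reply}")
        print(f"Texter: {texter_reply}")

if __name__ == "__main__":
    practice_session()
```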