Inevitably, the question arises: could AI replace humans in providing effective mental health support? It has long been known that many people are happy to talk about their mental health with chatbots.
Indeed, for some people, it is seen as preferable, because it can feel more accessible, anonymous and less judgemental. Importantly, such interactions could act as a stepping stone towards seeking out face-to-face support. Given also the substantial unmet demand for mental health support, it is surely incumbent on us to consider these approaches, since they have the potential to be low-cost, scalable and easily accessible. Apps such as Woebot and Wysa indicate that this approach is popular and shows some efficacy.
What is less clear is the degree to which generative AI (using large language models) can or should be deployed in chatbots of this nature. One only needs to spend a brief amount of time conversing with large language models, such as ChatGPT, to see that they can hold compelling, impressively lucid conversations; they can seem to listen empathetically; and they appear to offer useful advice in ways that less sophisticated chatbots do not. However, it is also clear that this seemingly impressive performance can give way to nonsensical hallucinations and even potentially harmful advice, and it remains challenging to prevent this from happening. Nonetheless, this surely offers tantalising promise for the future.
Perhaps the most exciting opportunity, and a key step in this journey, is for generative AI to provide real-time support to volunteers and staff engaging in digital mental health conversations, through services such as Shout, by generating suggested responses and prompts for them. This has great potential to increase the efficiency (and efficacy) of services, and thereby help more people in need.