This blog is adapted from a recent Mental Health Innovations webinar on how communication is evolving in the digital age. Dr Fiona Pienaar, alongside Dr Luzia Trobinger and Maggi Rose, shared insights on how digital communication is shaping mental health support.
A note on methodology: the data discussed in this post comes from an optional post-conversation survey, which may be subject to selection bias. Individuals who report more positive experiences, such as those seeking support for stress or anxiety, may be more likely to complete the survey, meaning higher-risk cases could be under-represented.
Mental health support is evolving alongside the way we communicate.
Across the world, demand for support continues to grow. Rising levels of anxiety, depression and loneliness, particularly among young people, are placing increasing pressure on existing services. At the same time, barriers such as long waiting lists, limited provision and uncertainty about where to turn remain widespread.
In this context, digital communication has become central.
From text-based services to AI chatbots, from emojis to memes, people are increasingly expressing how they feel, seeking help and navigating support through digital channels. As this shift continues, one idea becomes increasingly clear:
In the digital age, how we communicate matters just as much as what we communicate.
Recent research from the Youth Endowment Fund suggests that:
- Over half of young people have used some form of online mental health support
- Around a quarter have turned to AI chatbots for help, a higher proportion than for traditional helplines or websites
This shift raises important questions about why people are turning to AI, and what this means for the future of support.
Why people are turning to AI for mental health support
There are several reasons why AI tools are becoming a preferred entry point for some individuals.
Accessibility and immediacy
AI tools are available 24/7, free or low-cost, and easy to access. For people experiencing distress, this immediacy can be critical.
Reduced stigma and fear of judgement
Many people report feeling more comfortable speaking to a chatbot than to another person. Concerns about being judged, not being taken seriously, or struggling to articulate difficult experiences can act as barriers to seeking human support.
In contrast, AI may be perceived as:
- Neutral
- Non-judgemental
- Easier to open up to
Perceived suitability of their problem
Some people worry their difficulties are “not serious enough” to warrant professional support. AI can feel like a lower-threshold option.
Privacy and anonymity
For people who are vulnerable or in high-risk groups, and who particularly value confidentiality, AI can offer a sense of safety and control.
What people are using AI for
The most common use is emotional support, including:
- Managing anxiety or panic in the moment
- Venting when no one else is available
- Seeking ongoing, informal support
AI is also frequently used as an information and signposting tool, with users asking:
- Where to find services
- What support options are available
- How to access help locally
AI as a bridge to human support
One of the most important emerging insights is the role of AI as a gateway rather than a destination. For example:
- Some individuals actively ask AI tools to connect them to human support
- Others transition from chatbot use to services such as text-based helplines
- Even in text-based conversations, many people value communication with a person
Even when AI responses are fluent and empathetic, users often report a clear distinction between interacting with a system and connecting with another human being.
This reinforces a key principle:
AI chatbots should act as bridges, not replacements, for human connection.
Risks, challenges, and the need for safeguards
While AI presents opportunities, it also introduces significant challenges.
Safety and escalation
A critical issue is how systems respond to risk. Some chatbots are designed to redirect users to external services when high-risk language is detected. However, this can create friction at the point where support is most needed.
A more effective approach may involve seamless transitions to human support, rather than requiring users to seek help independently.
Data privacy and trust
Concerns about how personal data is collected and used remain a barrier for many chatbot users.
Dependency and behavioural impact
The convenience of AI raises questions about long-term use:
- Could it reduce motivation to seek human support?
- Might it impact people’s ability to self-regulate or problem-solve independently?
Regulation and accountability
Regulatory frameworks are beginning to emerge, but the pace of technological development means systems are often evolving faster than governance structures.
This has led to increasing scrutiny, including legal cases, around the responsibilities of organisations developing and deploying AI tools that people turn to for mental health support.
Who is using AI and what this might tell us
Our early data also highlights differences in who is accessing AI-mediated pathways to human support services.
Notably:
- Younger users (particularly those aged 14–17) are more likely to arrive via an AI recommendation
- Men appear more frequently among those who access support via AI than through traditional routes
- Certain ethnic minority groups are also more strongly represented
While causality cannot yet be established, these patterns suggest that AI may be:
- Lowering barriers for groups less likely to seek traditional support
- Acting as an entry point for individuals who might otherwise remain underserved
Communication in a digital-first world
Alongside changes in access and delivery, the way people communicate about mental health is also evolving.
Digital communication introduces new complexities:
- Differences in tone across generations
- Misinterpretation of punctuation, emojis and memes
- Increasing use of language associated with mental health diagnoses
At the same time, it also creates new opportunities:
- Faster expression of complex emotions
- Alternative ways to communicate distress
- Greater flexibility in how support is sought and given
Understanding these shifts is essential for anyone designing or delivering digital support services.
Looking ahead: balancing innovation with human connection
AI is now firmly embedded in the mental health support landscape, and there is clear potential to expand access and reduce barriers.
However, the central challenge is not whether AI can simulate empathy, but whether it can strengthen connection to real-world support.
As services continue to evolve, balancing efficiency and immediacy with human connection and trust will be critical.
Ultimately, the goal is not to replace human support, but to enhance pathways into it, ensuring that people can access the right help, at the right time.