
This blog is adapted from a recent Mental Health Innovations webinar on how communication is evolving in the digital age. Dr Fiona Pienaar, alongside Dr Luzia Trobinger and Maggi Rose, shared insights on how digital communication is shaping mental health support.

It is important to note that the data referenced in this blog comes from an optional post-conversation survey and may be subject to selection bias: individuals who report more positive experiences, such as those seeking support for stress or anxiety, may be more likely to complete the survey, meaning higher-risk cases could be under-represented.

Mental health support is evolving alongside the way we communicate.

Across the world, demand for support continues to grow. Rising levels of anxiety, depression and loneliness, particularly among young people, are placing increasing pressure on existing services. At the same time, barriers such as long waiting lists, limited provision and uncertainty about where to turn remain widespread.

In this context, digital communication has become central.


From text-based services to AI chatbots, from emojis to memes, people are increasingly expressing how they feel, seeking help and navigating support through digital channels. As this shift continues, one idea becomes increasingly clear:

In the digital age, how we communicate matters just as much as what we communicate.

Recent research from the Youth Endowment Fund suggests that growing numbers of young people are turning to AI chatbots for support.

This shift raises important questions about why people are turning to AI, and what this means for the future of support.

Why people are turning to AI for mental health support

There are several reasons why AI tools are becoming a preferred entry point for some individuals.

Accessibility and immediacy

AI tools are available 24/7, free or low-cost, and easy to access. For people experiencing distress, this immediacy can be critical.

Reduced stigma and fear of judgement

Many people report feeling more comfortable speaking to a chatbot than to another person. Concerns about being judged, not being taken seriously, or struggling to articulate difficult experiences can act as barriers to seeking human support.

In contrast, AI may be perceived as non-judgemental, endlessly patient and less intimidating to open up to.

Perceived suitability of their problem

Some people worry their difficulties are “not serious enough” to warrant professional support. AI can feel like a lower-threshold option.

Privacy and anonymity

For people who are vulnerable or in high-risk groups, and who particularly value confidentiality, AI can offer a sense of safety and control.

What people are using AI for

The most common use is emotional support, including talking through feelings of stress, anxiety and loneliness.

AI is also frequently used as an information and signposting tool, with users asking what support is available and where they should turn next.

AI as a bridge to human support

One of the most important emerging insights is the role of AI as a gateway rather than a destination: people may begin by talking to a chatbot and then be signposted, or directly connected, to human-staffed services.

Even when AI responses are fluent and empathetic, users often report a clear distinction between interacting with a system and connecting with another human being.

This reinforces a key principle:

AI chatbots should act as bridges, not replacements, for human connection.

Risks, challenges, and the need for safeguards

While AI presents opportunities, it also introduces significant challenges.

Safety and escalation

A critical issue is how systems respond to risk. Some chatbots are designed to redirect users to external services when high-risk language is detected. However, this can create friction at the point where support is most needed.

A more effective approach may involve seamless transitions to human support, rather than requiring users to seek help independently.
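To make that contrast concrete, here is a minimal sketch, in Python, of what a seamless transition might look like. Everything in it is an assumption for illustration: the keyword list is a crude stand-in for a trained risk classifier, and HandoffQueue, request_human and generate_ai_reply are hypothetical names rather than any real service's API.

```python
# Illustrative sketch only: shows the shape of a "seamless transition" from
# chatbot to human support. The names and the keyword-based risk check are
# placeholders, not a real system's design.

HIGH_RISK_PHRASES = {"want to die", "hurt myself", "end it all"}  # crude placeholder

def detect_risk(message: str) -> bool:
    """Stand-in for a trained risk classifier."""
    text = message.lower()
    return any(phrase in text for phrase in HIGH_RISK_PHRASES)

def generate_ai_reply(message: str) -> str:
    """Placeholder for the chatbot's normal response path."""
    return "I'm here to listen. Can you tell me more about what's going on?"

class HandoffQueue:
    """Hypothetical connector to an on-call human counsellor."""

    def request_human(self, conversation_id: str, transcript: list[str]) -> None:
        # In a real service this would alert a counsellor and pass the
        # conversation context along, so the user stays in the same chat.
        print(f"Counsellor requested for {conversation_id} "
              f"({len(transcript)} messages of context shared)")

def handle_message(conversation_id: str, transcript: list[str],
                   message: str, queue: HandoffQueue) -> str:
    transcript.append(message)
    if detect_risk(message):
        # Seamless transition: bring a human into the existing conversation
        # instead of redirecting the user to an external service.
        queue.request_human(conversation_id, transcript)
        return ("It sounds like things are really difficult right now. "
                "I'm connecting you with a trained counsellor in this chat, "
                "so you don't have to start again somewhere else.")
    return generate_ai_reply(message)
```

The design point sits in the if-branch: the user is never asked to leave and seek help independently; the conversation context travels with them, removing friction at exactly the moment support is most needed.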

Data privacy and trust

Concerns about how personal data is collected and used remain a barrier for many chatbot users.

Dependency and behavioural impact

The convenience of AI raises questions about long-term use: could always-available chatbots encourage dependency, and might habitual reliance on them change how willing people are to seek human connection?

Regulation and accountability

Regulatory frameworks are beginning to emerge, but the pace of technological development means systems are often evolving faster than governance structures.

This has led to increasing scrutiny, including legal cases, around the responsibilities of organisations developing and deploying AI tools that people turn to for mental health support.

Who is using AI and what this might tell us

Our early data also highlights differences in who is accessing AI-mediated pathways to human support services.

While causality cannot yet be established, these early patterns suggest that AI may be reaching people, including those in vulnerable or higher-risk groups, who might not otherwise have sought human support.

Communication in a digital-first world

Alongside changes in access and delivery, the way people communicate about mental health is also evolving.

Digital communication introduces new complexities: tone and intent can be harder to read without voice or body language, and meaning is increasingly carried through emojis, memes and abbreviations.

At the same time, it also creates new opportunities: people can express how they feel and seek help through channels that are immediate, familiar and accessible.

Understanding these shifts is essential for anyone designing or delivering digital support services.

Looking ahead: balancing innovation with human connection

AI is now firmly embedded in the mental health support landscape, and there is clear potential for it to expand access and reduce barriers.

However, the central challenge remains not whether AI can simulate empathy, but whether it can strengthen connection to real-world support.

As services continue to evolve, balancing the efficiency and immediacy of AI with human connection and trust will be critical.

Ultimately, the goal is not to replace human support, but to enhance pathways into it, ensuring that people can access the right help, at the right time.