Written By
Protik Roychowdhury

Using AI therapy? Beware the echo chamber effect


Since artificial intelligence (AI) language models exploded onto the scene, people have begun treating chatbots like therapists. We get the appeal of AI therapy: Why pay a professional and leave the house when you can vent for free on your couch? But here’s the catch.

Talk about incorporating AI in healthcare and the usual buzzwords are speed, scale and efficiency. But mental health is a different ball game. Unlike sectors like e-commerce or food delivery, it touches the most sensitive parts of a person’s life: their relationships, their trauma history, their emotional resilience. Here, AI therapy gone wrong isn’t just an inconvenience. It can be seriously harmful.

Why AI therapy can be risky

Misdiagnosis

Unlike a wrong product recommendation or a late food delivery, a wrong response doesn’t merely frustrate a vulnerable user. It can cut deep into their sense of identity and self-worth, leaving them feeling even more distressed than before.

Consider a coaching client of mine. When she told an AI platform she felt “overwhelmed,” the chatbot labelled her with “anxiety.” But it missed the bigger picture: her perfectionistic tendencies and the unique pressures she faced at work, context that should have been weighed before any conclusion was drawn. A couple of weeks later, she had internalised this response and started seeing herself as “an anxious person.”

Unfortunately, this is how most AI platforms operate. Generic symptom checkers and chatbot scripts can overlook subtle cues in language, emotion and context that are critical to providing the right care. This is somewhat expected, since they’re not designed specifically for mental health. That is why AI therapy must follow a different playbook; one that is centred on trust, safety, and ethics.

Unchecked bias

When we rely on AI therapy, the risks go far beyond simple misinformation. Unchecked bias, which can distort reality by reinforcing one’s beliefs rather than challenging them, is equally precarious.

This issue is not new. Social media platforms have long demonstrated how algorithms pander to emotions like anger, insecurity, and isolation instead of promoting balance. Netflix documentaries like The Social Dilemma and, more recently, Adolescence, illustrate this perfectly.

Now, imagine how this plays out when we turn to AI therapy. The echo chamber effect is practically on steroids. We’re talking about the reinforcement of toxic self-narratives, feelings of helplessness, or even a false sense of confidence—each of which can further deepen isolation.

Bias doesn’t just affect individuals. Left unchecked, it can reshape entire societies. Just look at the impact social media has had, both for better and for worse. This is why mental health companies must design their AI systems with this reality in mind from day one.

“But AI makes me feel seen and heard.” 

To understand why unchecked bias can be problematic in mental healthcare, we must acknowledge this: A therapist who spends most sessions nodding along in agreement may be validating, but they’re not necessarily being effective. 

On my journey to becoming a behavioural health coach, one of the most important lessons I learned is that real support isn’t about affirming everything someone says. It’s about helping them uncover subconscious patterns and challenge long-held beliefs, thereby opening up space for growth.

One of my favourite techniques, for instance, is simply asking clients: “Is that necessarily true?” This small but powerful question encourages them to examine their assumptions, rather than just having them reflected back without a shred of critical thought. Yet, that’s what generic AI chatbots do most of the time. 

Rules for responsible AI therapy

AI therapy cannot just be a friendly mirror. It needs to be an ethical guide that nurtures and challenges users judiciously. As such, it cannot be built for engagement; it needs to be built for outcomes.

Here are five principles we believe should guide the development of AI mental health tools, and that shape how we approach AI at Intellect.

  • Contextual awareness: AI therapy should be able to distinguish fleeting emotions from persistent patterns and adapt its response accordingly. When a statement like “I’m feeling anxious about my presentation tomorrow” is misconstrued as “I’ve been experiencing anxiety daily for months,” the response generated can do more harm than good, as it did for my client.

  • Challenging, not just validating: AI therapy should validate emotions while challenging unhelpful patterns of thought and behaviour. For instance, a response like “It’s natural to feel this way AND I wonder if there’s another way to look at this situation?” shows empathy while modelling cognitive flexibility. 

  • Clear handoff protocols: AI therapy should not replace licensed mental health professionals, and must incorporate clear triggers for escalation to human support. These should be based on severity indicators, such as suicidal ideation or signs of abuse, as well as patterns that suggest worsening mental states, like a user becoming increasingly stuck in negative thinking despite interventions (a simple sketch of such an escalation check follows this list).

  • Cultural competence: AI therapy needs to be built with cultural context at its core. A Southeast Asian user might say, “My heart feels heavy”—a common expression of distress that a Western-trained model could easily miss. As emotional vocabulary varies across cultures, developers must include diverse training data and perspectives or risk misinterpreting or overlooking critical signs.

  • Transparency about limitations: Last but not least, users of AI therapy should always know they’re speaking with a chatbot, understand its limitations, and be directed to licensed mental health professionals where necessary.
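To make the handoff principle concrete, here is a minimal sketch of what an escalation check could look like. Everything in it is illustrative: the function `should_escalate`, the keyword-style `SEVERITY_INDICATORS` set, and the thresholds are hypothetical placeholders rather than a description of how Intellect’s systems work, and a production tool would rely on clinically validated risk assessments and human review rather than simple flags.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical severity indicators. A real system would use clinically
# validated risk detection, not keyword-style flags.
SEVERITY_INDICATORS = {"suicidal_ideation", "self_harm", "abuse"}


@dataclass
class SessionNote:
    timestamp: datetime
    flags: set                # risk flags detected in this session
    negative_sentiment: bool  # whether the session trended negative overall


def should_escalate(history, window_days=14, persistence_threshold=4):
    """Return True when the conversation should be handed off to a human.

    Two illustrative triggers, mirroring the principles above:
    1. Any severity indicator appears at all.
    2. Negative sentiment persists across several recent sessions,
       suggesting the user is stuck despite interventions.
    """
    # Trigger 1: severity indicators escalate immediately.
    if any(note.flags & SEVERITY_INDICATORS for note in history):
        return True

    # Trigger 2: a persistent negative pattern within the recent window.
    cutoff = datetime.now() - timedelta(days=window_days)
    recent_negative = [
        note for note in history
        if note.timestamp >= cutoff and note.negative_sentiment
    ]
    return len(recent_negative) >= persistence_threshold


# Example: a single recent session flagging self-harm escalates immediately.
notes = [SessionNote(datetime.now(), {"self_harm"}, True)]
print(should_escalate(notes))  # True
```

The point is structural rather than technical: escalation logic belongs in the design from day one, with explicit triggers for both acute risk and gradual deterioration, instead of being bolted on after launch.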

Efficient and ethical AI therapy

These five principles may seem demanding, especially for teams focused on rapid development and market reach. However, they don’t have to slow progress. In fact, they can often accelerate long-term success. There’s a common myth that ethics hinders innovation, but that doesn’t have to be the case.

When we design for transparency, fairness, and human dignity from the start, we unlock deeper engagement, greater trust, and—most importantly—better outcomes. Users are more willing to open up. Clinicians are more willing to collaborate. Organisations are more willing to integrate. 

As we build the future of mental health care, we have a choice: we can use AI to automate services, farm engagement and simply echo users’ thoughts, or we can use it to build trust, strengthen human connection, and offer real hope to those in search of it. 
