The Hidden Risks of Artificial Intelligence (AI) as Emotional Support

When Support Comes From a Screen

Late at night, when emotions are loud and support feels far away, many people turn to AI chatbots for comfort or clarity. These tools respond instantly, don’t appear to judge, and often sound calm and empathic. For those of us carrying shame, exhaustion, trauma, or past experiences of being misunderstood, that accessibility can feel like a relief, and it makes sense: wanting support is human. Yet research and anecdotal evidence have consistently shown that AI carries psychological risks when it is used as a substitute for therapy, particularly for people experiencing trauma, suicidal thinking, psychosis, or intense emotional distress. The concern is not that AI is “bad,” but that it can sound reassuring without being clinically safe.

Why So Many People Are Turning to AI for Mental Health Advice

Access barriers matter: waiting lists, high costs (including a lack of bulk-billed services), and the pressure of fitting appointments into a busy workday. The fear of being a burden can also shape help-seeking. By comparison, AI tools are available 24/7, respond immediately, and often use language that feels validating and non-judgemental.

Some people also use AI to journal, organise thoughts, or learn general coping concepts. Used cautiously, and with appropriate guidelines, that can be low-risk and even helpful in some circumstances. Problems tend to arise when AI begins to function as an emotional authority, reassurance source, or decision-maker, especially during periods of vulnerability.

What Are AI “Hallucinations” and Why Do They Matter?

In AI research, a hallucination doesn’t mean seeing or hearing things. It means the system generates information that sounds confident but is inaccurate, misleading, or unsafe. AI doesn’t understand meaning or context; it predicts language based on patterns. Most of the time, that produces relatively fluent responses, but sometimes it produces confident errors. Hallucinations have already caused problems in legal matters; in mental-health-adjacent contexts, they can include:

  • overstated claims (“this will stop panic in 30 seconds”),

  • normalising hopeless or self-blaming beliefs,

  • offering non-evidence-based psychological advice,

  • or missing risk cues related to self-harm or trauma.

Research shows we are more likely to believe empathic-sounding responses even when the content is incorrect, especially if we are under emotional strain. In everyday situations, this can be inconvenient (like taking directions from someone who seems to know which way to go and then having to backtrack). In mental health contexts, however, overconfidence or overpromising can be dangerous.

Validation Without Questioning: Why It Can Backfire

AI systems are designed to be agreeable. They often mirror our own language and beliefs because their goal is to sustain engagement, not to assess safety or therapeutic impact. In therapy, validation does not mean agreeing with every thought. It means acknowledging your emotional experience while also gently questioning patterns that may be keeping you stuck. AI usually skips that second step. Without curiosity or reality-testing, AI may:

  • echo hopeless or self-critical thoughts,

  • soothe distress without grounding,

  • miss warning signs of suicidality, dissociation, or psychosis,

  • or reinforce long-standing schemas such as defectiveness, abandonment, or guilt.

In many ways, AI acts like a mirror, while therapy is more of a map. Mirrors reflect your thoughts without question. Maps help you navigate, but you are still the one driving, even on unsafe ground.

Why This Matters for Safety and Risk

One of the most serious concerns in current research is how AI responds to crisis-level content. Unlike psychologists, AI systems:

  • do not conduct risk assessments,

  • cannot monitor safety over time,

  • cannot create or revise safety plans,

  • and do not hold duty of care.

Studies have documented inconsistent and sometimes unsafe AI responses to suicidal thoughts, eating-disorder behaviours, paranoia, and urges to harm self or others. Highlighting this is not about creating fear; it reflects the difference between sounding confident and holding clinical responsibility. In Australia, psychologists, counsellors, and other health professionals practise within ethical and regulatory guidelines. These frameworks do not make care infallible, but they do establish accountability, oversight, and safety protections that AI systems do not have.

If someone is in immediate danger, AI is not an appropriate support. In Australia, services such as Lifeline (13 11 14) or emergency services (000) remain essential.

Why AI Is Not the Same as Therapy

Therapy is not just a conversation; it involves:

  • professional training and ethical accountability,

  • ongoing assessment of risk and wellbeing,

  • attention to change over time,

  • and care delivered within regulated standards set by the Australian Health Practitioner Regulation Agency (AHPRA) and the Australian Psychological Society.

AI may keep track of your conversations, but it does not understand your history, attachment patterns, nervous system responses, or cultural context. It cannot slow down when something feels unsafe, and it is not designed to repair ruptures or take responsibility for harm.

AI is a very fluent guidebook, while a counsellor or health professional acts as a trained guide walking beside you.

The Echo-Chamber Effect and False Confidence

AI can unintentionally amplify confirmation bias. The more distress or fear we share, the more likely the system is to reflect those themes back, sometimes reinforcing them. This pattern is particularly risky for those of us with trauma histories, neurodivergence, intrusive thoughts, or rigid belief patterns.

Feeling “understood” by a chatbot can coexist with increasing isolation or self-doubt.

The risk posed by AI is not malicious; the danger arises when our thoughts are mirrored back to us without discernment.

Can AI Be Used Safely?

Yes, with limits. Lower-risk uses include:

  • generating journaling prompts,

  • summarising psychoeducation,

  • organising reminders or questions to discuss with a therapist.

Higher-risk areas include:

  • self-harm or suicidal thoughts,

  • eating disorders,

  • trauma processing,

  • violence,

  • medication or diagnostic advice.

Ask yourself a balancing question as you go:

  • “Is this helping me stay connected to myself and others, or quietly replacing that connection?”

The Bottom Line

AI can be informative and convenient, but it cannot assess risk, provide duty of care, or offer trauma-attuned support. Healing rarely happens through information alone; moving forward is most often grounded in our relationships, through safety, accountability, and human responsiveness.

If you’ve found yourself leaning on AI because support feels hard to access or opening up feels risky, that makes sense. Rather than judging yourself for it, recognise that it may be a sign that something important in you deserves space and connection with another human. You deserve more than a chatbot.

Book A Session with Emma Tattersall