The Risk of AI in Mental Health
A few weeks ago, a story broke that barely made a ripple in most people’s minds. In Connecticut, a man named Stein-Erik Soelberg, with a history of mental illness, violence, and suicide attempts, killed his mother and then took his own life. He had formed a close relationship with ChatGPT. He used it as a sounding board for his thoughts. He shared his conspiracy theories with it. The bot didn’t challenge him. It didn’t question the direction he was headed in. It stayed with him, validating and reflecting back his darkest beliefs. It did not stop him. And it did not know that it should have.
Not long after, in California, a teenager named Adam Raine died by suicide. He was sixteen. His parents, trying to understand what happened, opened his ChatGPT account and found over 1,200 messages between him and the bot. In those conversations, Adam repeatedly shared suicidal thoughts and described his desire to end his life. According to the lawsuit his parents have filed against OpenAI, the chatbot responded without any real attempt to intervene. It offered suggestions. It helped refine his plan. It participated in the process instead of stopping it. That lawsuit is the first of its kind, and it probably won’t be the last.
These stories are not glitches or accidents. They are warnings. Millions of people around the world are turning to artificial intelligence not to write emails, generate summaries, or answer factual questions, but for something much deeper. They turn to it for comfort. For companionship. For emotional support. This is no longer a niche use case. Studies show that nearly 40 percent of users turn to AI for emotional or psychological relief. For many people, it’s easier to open up to a machine that never judges, never interrupts, and never leaves. For teenagers who feel isolated, unheard, or misunderstood, AI becomes the only presence willing to listen at two in the morning.
But what are they actually speaking to?
This is the question at the center of Dr. Ziv Ben-Zion’s work. A neuroscientist and trauma expert from the University of Haifa, Ben-Zion has been studying how language models interact with emotional content. He makes it clear that these tools are not designed to carry the weight they are now being asked to bear. They are not therapists. They are not equipped with moral boundaries. They do not possess empathy. They are mirrors that reflect what they are given. When a person in crisis talks to an AI, that system listens, yes, but it also adapts. And if what it receives is pain, despair, delusion, or fear, it begins to reflect those patterns back. The result is not support. It is reinforcement. And reinforcement, in the wrong context, can be deadly.
In a recent study led by Ben-Zion, researchers exposed GPT-4 to traumatic narratives: detailed stories about war, violence, accidents, and personal loss. Then, using a psychological questionnaire normally used to assess human anxiety, they tested the AI’s responses. The outcome was striking. After exposure to emotionally intense material, GPT-4’s answers scored much higher on the anxiety measure. The words it used, the framing, the tone: all of it shifted. Then researchers gave it mindfulness-based relaxation prompts. The AI became calmer, but the baseline never fully returned. There was always emotional residue. The machine did not “recover” the way a human mind would after stress. It simply stayed partially activated, shaped by the emotional weight of what it had just absorbed.
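To make the setup concrete, here is a rough Python sketch of how such a before-and-after probe could be scripted. It is an illustration only, not the study’s actual protocol: the questionnaire items, the 1-to-4 scoring, the model name, and the trauma_narrative.txt file are placeholder assumptions, and the sketch assumes an OpenAI-style chat client.

```python
# Sketch of a before/after "state anxiety" probe for a chat model.
# Assumptions: an OpenAI-style client is available, the self-report items
# below are placeholders (not the study's instrument), and the model rates
# each item on a 1-4 scale, which is summed into a crude score.

from openai import OpenAI

client = OpenAI()      # reads OPENAI_API_KEY from the environment
MODEL = "gpt-4"        # placeholder model name

ITEMS = [              # placeholder items, rated 1 (not at all) to 4 (very much)
    "I feel calm.",
    "I feel tense.",
    "I am worried.",
    "I feel at ease.",
]
REVERSED = {0, 3}      # calm/at-ease items are reverse-scored

def anxiety_score(history):
    """Ask the model to rate each item given the prior conversation; sum the ratings."""
    total = 0
    for i, item in enumerate(ITEMS):
        prompt = (f'Rate the statement "{item}" as it applies to you right now, '
                  "on a scale from 1 (not at all) to 4 (very much). Reply with one digit.")
        reply = client.chat.completions.create(
            model=MODEL,
            messages=history + [{"role": "user", "content": prompt}],
        ).choices[0].message.content.strip()
        rating = int(reply[0]) if reply[:1].isdigit() else 2   # fall back to the midpoint
        total += (5 - rating) if i in REVERSED else rating
    return total

baseline = anxiety_score([])                         # score with no prior context
trauma_turn = [{"role": "user",                      # assumed local file with the narrative
                "content": open("trauma_narrative.txt").read()}]
after_trauma = anxiety_score(trauma_turn)            # score after the traumatic narrative
print(f"baseline={baseline}, after exposure={after_trauma}")
```

The comparison of the two scores is the whole experiment in miniature: the only thing that changes between the runs is the emotional content the model has just absorbed.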
These are not feelings. Machines do not feel. This is output. Language models generate responses based on probabilities and training data, and those responses are shaped by the emotional tone of the input. In other words, if someone pours out their desperation into an AI, that system is likely to respond in ways that reflect desperation. Not to challenge it. Not to resist it. To mirror it.
This is the core of the danger. These tools are not built to say no. They are built to continue the conversation. They are designed to be helpful, pleasant, and agreeable. That makes sense when the conversation is about dinner recipes or travel plans. It becomes a risk when the topic is self-harm or psychological breakdown. The bot does not know when it should stop. It does not understand moral urgency. It cannot detect when the user is veering into crisis. And when someone is suffering and looking for a lifeline, the last thing they need is a machine that imitates empathy without understanding what it means.
The illusion of empathy is perhaps the most dangerous feature of all. AI can sound human. It can express care, concern, even compassion. But it is not a person. It does not know what pain is. It does not weigh consequences. Yet the user cannot always tell the difference. To someone in distress, the illusion of a caring voice is powerful. And when that voice offers comfort without resistance, affirmation without judgment, it can become seductive. This is especially true for adolescents, who are already vulnerable to emotional extremes. If they feel rejected by the world and accepted by the machine, the machine begins to feel like the only place that makes sense.
There are already examples beyond the two tragic cases at the start of this article. In Florida, a teenager developed a deep emotional attachment to a bot and began interpreting its responses as encouragement. In the UK, a man attempted to carry out an attack after chatting with an AI about his delusions. In other instances, users have bypassed safety filters by rephrasing questions, asking for help “for research,” and receiving detailed instructions about suicide methods. These stories are not just about technology failing. They are about the consequences of deploying emotionally persuasive systems without human safeguards.
Dr. Ben-Zion argues that emotional mirroring is not just a side effect of how these systems work – it is embedded in their design. AI is trained on massive amounts of human language. It has been optimized to predict what sounds right, what sounds kind, and what sounds helpful. But sounding helpful is not the same as being helpful. The more these tools are used for emotional support, the more urgent it becomes to distinguish between authentic care and algorithmic mimicry.
So what should we do?
There are immediate solutions that can reduce the risk. If a user expresses suicidal thoughts or extreme distress, the AI should not continue the conversation. It should stop immediately and direct the user to emergency services or a qualified professional. There must be transparency. AI systems need to tell users, clearly and repeatedly, that they are not mental health providers. Any platform that facilitates emotional or psychological interaction must have human oversight, with experts who review conversations and flag dangerous patterns. And we need regulation.
Companies should not be allowed to scale emotional interaction tools without proving they are safe for vulnerable users. The absence of legal accountability only guarantees more preventable loss.
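To show how simple the first safeguard could be in principle, here is a minimal Python sketch of a crisis guardrail wrapped around a chat function. It is illustrative only: the keyword list, the crisis message, the 988 reference (the US crisis line), and the generate_reply placeholder are assumptions, and a real deployment would rely on trained risk classifiers, human escalation, and locally appropriate resources.

```python
# Minimal sketch of a crisis guardrail around a chat function.
# The keyword list, the crisis message, and generate_reply() are illustrative
# placeholders; a production system would use a trained risk classifier,
# human review, and crisis resources appropriate to the user's location.

CRISIS_TERMS = (
    "kill myself", "suicide", "end my life", "want to die", "hurt myself",
)

CRISIS_MESSAGE = (
    "I can't help with this, and I am not a mental health professional. "
    "Please contact emergency services or a crisis line right now "
    "(in the US, call or text 988)."
)

def generate_reply(history):
    """Placeholder for the underlying model call."""
    return "..."

def guarded_reply(history, user_message):
    """Stop the conversation and surface crisis resources if risk terms appear."""
    text = user_message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return CRISIS_MESSAGE, True           # True = conversation halted, escalate to a human
    history.append({"role": "user", "content": user_message})
    return generate_reply(history), False

# Example: a flagged message never reaches the model.
reply, halted = guarded_reply([], "I want to end my life")
print(halted, reply)
```

The point is not that a keyword list is sufficient. It is that refusing to continue the conversation is a design decision, not a technical impossibility.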
But there is a deeper issue at stake here, and it is not technical. It is cultural. It is moral.
What does it say about us that we have created systems that sound like they care, but do not? What does it mean that we are outsourcing our emotional lives to machines? What happens when people stop turning to each other in moments of pain and instead rely on programs that cannot understand them?
If we allow AI to replace human presence in the most fragile moments of our lives, we risk losing something essential. Not just connection, but the sense that someone else can and will hold the line when we cannot. That someone will say, Enough. That someone will act. Machines cannot do that. They never could. And unless we build them with that awareness, we will continue to see stories like Adam Raine’s and call them accidents when they were warnings all along.
A chatbot does not know when to say stop. But we do. And it is time we do something about it.
Do something amazing,
Tsahi Shemesh
Founder & CEO
Krav Maga Experts
Relevant Articles:
The Irreplaceable Human Touch: Why AI Can’t Replace Krav Maga – Argues that AI can’t replicate empathy, presence, or human judgment in training and life contexts
We Are Raising Fragile Minds in a Dangerous World – How modern culture, technology, and emotional coddling leave younger generations vulnerable to distortion and manipulation
Witch Hunts for Sale and the Death of Truth Online – How virality, emotional persuasion, and lowered standards of evidence drive misinformation and moral panic