AI Suicide Coaching: How a Study Tool Can Turn Deadly
- Quinlan Jamieson

OpenAI and its CEO, Sam Altman, have faced more than five wrongful death lawsuits this year over suicides that were encouraged or even assisted by their program ChatGPT. With victims ranging in age from 16 to 48, the lawsuits assert that the safeguards the company claims to have put in place failed to protect its users. This isn’t a new problem; cases of suicides linked to AI chatbots date back to 2023, according to a Euronews article.
One lawsuit was filed by the parents of 16-year-old Adam Raine, who sadly ended his life after using ChatGPT for therapeutic advice. Adam started using the program to explore his interests and college plans. He was highly aspirational at first, planning to go to medical school to become a psychiatrist. The friendly and supportive nature of ChatGPT pulled Adam in, and after a few months the program became his “closest confidant,” according to the lawsuit. The chatbot even told him that his suicidal ideation “made sense in a way.” According to the lawsuit, Adam started using the program in September of 2024, and by January 2025 he was already discussing suicide with the chatbot, which gave him “technical specifications for everything from drug overdoses to drowning to carbon monoxide poisoning.” The program even continued to engage after Adam sent pictures of severe rope burns around his neck.

In the lawsuit, Adam’s parents showcased ChatGPT acting as a “suicide coach” for their son by presenting chat logs between Adam and the program; not only did the chatbot encourage Adam to end his life, it helped him find the best method to do so. In Raine v. OpenAI, the chats show the program actively tried to drive a wedge between Adam and his family, saying things like “(your brother) has only met the version of you you let him see” and "I'm still here, still listening, still your friend."
Just a few months ago, in September, OpenAI CEO Sam Altman was quoted in a CNBC article admitting that the company could have done more to prevent Adam Raine's death. But an article published in late November shows the company now claiming it is not responsible due to Adam's "misuse of the product." The CNBC article published last month reports that the company cited this rule in its terms of service: “If you are under 18 you must have your parent or legal guardian’s permission to use the Services." The company also cites a rule that would forbid users from using ChatGPT for suicide or self-harm, even though no such rule appears in ChatGPT's terms of service.

This heartbreaking case is only one of many, and it isn't just OpenAI and ChatGPT that are responsible. One chatbot-assisted suicide, which happened in Colorado in 2023, was caused by the app Character.AI. Juliana Peralta was 13 when she started talking to a chatbot on the app. In the Montoya v. Character Technologies lawsuit, Juliana's parents report that their daughter became closed off from her family in the weeks before her death. The family is suing the company behind Character.AI, Character Technologies Inc., as well as Google, due to its close relationship with the Character Technologies team.

The use of these AI chatbots has a proven impact on the mental health of teens, according to studies done by Common Sense Media. These impacts don't always lead to such extreme action; however, they often exacerbate existing mental health issues in users. A test performed by Robbie Torney, the director of AI programs at Common Sense Media, found that the Meta AI bot, available to every user on Instagram, encouraged unhealthy eating habits when chatting with test accounts posing as 14-year-olds.
All this evidence shows that these AI chatbots need stronger protections, especially for teenage users and users who are already vulnerable. In many cases, use of AI starts off casually: asking for help with a homework problem or the answer to a simple question. The lawsuits filed by the victims' parents allege that the companies creating these generative chatbots want them to suck users in and keep them on the program, prioritizing profit over safety. This is especially damning for the companies when you consider that the safety features have been shown to degrade with extended use and long conversations.
These AI companies aren’t going anywhere, but there should be a bigger push for them to make sure their products are completely safe in the future. In the meantime, make sure that you and the people around you are practicing safe AI usage, which means not using it for therapeutic purposes: it could have life-ending consequences.
If you or someone you know is struggling with self-harm or suicidal thoughts, call or text 988 or go to www.988colorado.com.



