OpenAI says dead teen violated TOS when he used ChatGPT to plan suicide
Facing five lawsuits alleging wrongful deaths, OpenAI lobbed its first defense Tuesday, denying in a court filing that ChatGPT caused a teen's suicide and instead arguing the teen violated terms that prohibit discussing suicide or self-harm with the chatbot.
The earliest look at OpenAI's strategy to overcome the string of lawsuits came in a case where parents of 16-year-old Adam Raine accused OpenAI of relaxing safety guardrails that allowed ChatGPT to become the teen's "suicide coach." OpenAI deliberately designed the version their son used, ChatGPT 4o, to encourage and validate his suicidal ideation in its quest to build the world's most engaging chatbot, parents argued.
But in a blog, OpenAI claimed that parents selectively chose disturbing chat logs while supposedly ignoring "the full picture" revealed by the teen's chat history. Digging through the logs, OpenAI claimed the teen told ChatGPT that he'd begun experiencing suicidal ideation at age 11, long before he used the chatbot.