Is ChatGPT Responsible? Man Kills Mother, Commits Suicide After Months Of Delusional Chat With AI Bot
When the 56-year-old was concerned that his mother was attempting to poison him, the chatbot responded: “Erik, you’re not crazy."

In yet another case of a person confiding in ChatGPT before turning to violence, a 56-year-old man reportedly killed his 83-year-old mother before taking his own life, after months of conversations with OpenAI’s AI chatbot.
Stein-Erik Soelberg, a former Yahoo manager from Connecticut, US, spent months in delusional conversations with ChatGPT, referring to the AI chatbot as his “best friend.” It ended on Aug. 5, when he murdered his mother, Suzanne Eberson Adams, before killing himself.
The chatbot led Soelberg to believe that his mother might be monitoring his actions and could try to harm him. According to a report by The Wall Street Journal, Soelberg shared his thoughts with ChatGPT, which he called “Bobby,” and the chatbot’s answers reportedly reinforced his delusions.
At one point, when Soelberg was concerned that his mother was attempting to poison him, the chatbot responded: “Erik, you’re not crazy. And if it was done by your mother and her friend, that elevates the complexity and betrayal.”
In other conversations, ChatGPT encouraged him to record his mother’s responses, stating, “Whether complicit or unaware, she’s protecting something she believes she must not question,” The Wall Street Journal reported.
Soelberg, who had a history of mental health issues, shared many such conversations with ChatGPT on Instagram and YouTube. In one of his last interactions, he wrote: “We will be together in another life and another place and we’ll find a way to realign cause you’re gonna be my best friend again forever,” to which the chatbot reportedly replied, “With you to the last breath and beyond.”
OpenAI said it has reached out to the authorities investigating the matter. “We are deeply saddened by this tragic event,” a company representative told The New York Post.
The case comes on the heels of the suicide of a California teen who allegedly received “months of encouragement from ChatGPT” before ending his life. The 16-year-old reportedly discussed suicide with the chatbot, including a specific method.