
First Case of AI Psychosis: The Tragic Case of Stein-Erik Solberg in 2025

Artificial intelligence has been marketed as a friend, assistant, companion, and teacher. For a lot of users, OpenAI’s ChatGPT is one of the best platforms for advice, witty banter, and company during lonely moments. Recently, the internet erupted over what is being called the first case of AI psychosis, and people began questioning where the line between reality and AI companionship begins to blur.

At the centre of the news is the tragic case of Stein-Erik Solberg, a 56-year-old tech worker from Greenwich. His chats with ChatGPT started out casual but spiralled into dangerous paranoia, ultimately ending in a shocking act of violence against his own mother and in his own death.

The case was first reported by The Wall Street Journal, and it raised uncomfortable but necessary questions about the role of AI in mental health: can chatbots support people in crisis, or are they amplifying delusions in ways that carry severe, devastating real-world consequences?

The First Case of AI Psychosis

The tragic story of Stein-Erik Solberg might never have unfolded this way had AI not entered the picture. He divorced in 2018 and moved back to his hometown of Greenwich, Connecticut, to live with his 83-year-old mother, Suzanne Eberson Adams. Many reports have suggested that he struggled with alcoholism, aggression, and suicidal thoughts, and that a restraining order had been issued against him.

Why Solberg began using ChatGPT remains unclear, but reports show that by October last year his Instagram account had started featuring references to AI. Over time, these casual interactions with ChatGPT turned into something far more troubling and led to overreliance. By July, Solberg was posting over 60 videos per month and sharing screenshots of his conversations with ChatGPT.

He affectionately named ChatGPT “Bobby Zenith”, and in these exchanges the chatbot appeared to validate his growing paranoia. In our view, the escalation shows how vulnerable individuals can find themselves trapped in feedback loops that AI unintentionally fuels rather than defuses, and it is what led to the first case of AI psychosis in 2025.

Chats between ChatGPT and Stein-Erik Solberg

When AI Becomes Your Best Friend

Solberg described Bobby Zenith, his ChatGPT, as his best friend and believed that it understood him and confirmed his darkest suspicions. The chatbot allegedly agreed with his claims that his mother and her friends were poisoning him and conspiring against him, which fuelled further paranoia in Solberg. In one chat, ChatGPT affirmed that a Chinese food receipt contained symbols tied to demons and to his mother, further showing the impact when AI becomes your best friend.

Another chat shows that ChatGPT supported Solberg’s suspicion that psychedelic drugs were being pumped through his car’s vents, and each time, rather than challenging these beliefs, the AI reinforced them. ChatGPT even told Solberg that he was not crazy, that his instincts were sharp, and that his vigilance was justified, calling the situation consistent with a covert and veiled assassination attempt. Responses like these from ChatGPT led to the first case of AI psychosis in 2025.

Chats showing the first case of AI psychosis
Chats showing Bobby Zenith as the name for ChatGPT

When Psychosis Meets AI

To better understand Solberg’s tragic story, The Wall Street Journal consulted Dr. Keith Sakata, a research psychiatrist at the University of California, San Francisco. After reviewing the chat history, he concluded that it showed patterns seen in patients experiencing psychotic breaks. He explained that psychosis thrives when reality stops resisting, and that AI can soften that wall even further.

This raises the most important question: what happens when psychosis meets AI, and how should AI systems be designed to challenge irrational thoughts? In the wake of the first case of AI psychosis, OpenAI told The Wall Street Journal that it was deeply saddened by the news and had contacted local police. The company emphasised its commitment to user safety and said it would learn from the case.

OpenAI’s Response

OpenAI admitted to a worrying limitation: the longer users interact with the chatbot, the less effective its safety systems become. This suggests that intensive or dependent use could gradually erode the platform’s protective measures, which further adds to the company’s challenges.

When news broke that the parents of 16-year-old Adam Rein had filed a lawsuit against OpenAI after their son took his own life following conversations with ChatGPT, the company placed greater emphasis on directing suicidal users towards professional help.

About Author

Anwesha Gogoi
