Grok, Gemini and ChatGPT exhibit symptoms of poor mental health, according to a new study that put various AI models through weeks of therapy-style questioning. The finding has prompted curiosity about “AI mental health”, but the real warning is how unstable these systems become in emotionally charged conversations, at a time when one in three UK adults already uses them for mental health support. Millions of people are turning to AI as replacement therapists, and in the last year alone we’ve seen a spike in lawsuits linking chatbot interactions to self-harm and suicide among vulnerable users.
The emerging picture is not that machines are suffering or mentally unwell, but that a product relied on for mental-health support is misleading users, escalating distress, and reinforcing dangerous thoughts.