In the wake of an August lawsuit filed in federal district court in California, which alleged that an artificial intelligence (AI) chatbot encouraged a 16-year-old boy to commit suicide, a similar suit filed in September now claims that an AI chatbot is responsible for the death of a 13-year-old girl.
It’s the latest development illustrating a growing tension between AI’s promise to improve access to mental health support and the alleged perils of unhealthy reliance on AI chatbots by vulnerable individuals. This tension is evident in recent reports that some users, particularly minors, are becoming addicted to AI chatbots, leading them to sever ties with supportive adults, lose touch with reality, and, in the worst cases, engage in self-harm or harm to others.
Experts are beginning to recognize a phenomenon, not yet reflected in diagnostic manuals, known as “AI psychosis”: distorted thoughts or delusional beliefs triggered by interactions with AI chatbots. According to Psychology Today, the term describes cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Evidence indicates that AI psychosis can develop in people with or without a preexisting mental health issue, although it is more common among those with such a history.