In the wake of a lawsuit filed in federal district court in California in August—alleging that an artificial intelligence (AI) chatbot encouraged a 16-year-old boy to commit suicide—a similar suit filed in September now claims that an AI chatbot is responsible for the death of a 13-year-old girl.
It is the latest development illustrating a growing tension between AI’s promise to improve access to mental health support and the alleged perils of unhealthy reliance on AI chatbots by vulnerable individuals. This tension is evident in recent reports that some users, particularly minors, are becoming addicted to AI chatbots, leading them to sever ties with supportive adults, lose touch with reality and, in the worst cases, engage in self-harm or harm to others.
While not yet reflected in diagnostic manuals, experts are recognizing the phenomenon of “AI psychosis”—distorted thoughts or delusional beliefs triggered by interactions with AI chatbots. According to Psychology Today, the term describes cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Evidence indicates that AI psychosis can develop in people with or without a preexisting mental health issue, although the former is more common.