Those in the tech world and in medicine alike see potential in AI chatbots as a way to support mental health, especially when human support is unavailable or therapy is unwanted. Others, however, see risks, particularly when chatbots designed for entertainment can disguise themselves as therapists.
So far, some lawmakers agree with the latter. In April, U.S. Senators Peter Welch (D-Vt.) and Alex Padilla (D-Calif.) sent letters to the CEOs of three leading artificial intelligence (AI) chatbot companies, asking them to outline, in writing, the steps they are taking to ensure that human interactions with these AI tools “are not compromising the mental health and safety of minors and their loved ones.”
The concern was real: in October 2024, a Florida parent filed a wrongful death lawsuit in federal district court, alleging that her son died by suicide with a family member’s gun after interacting with an AI chatbot that let users engage with “conversational AI agents, or ‘characters.’” The boy’s mental health allegedly declined to the point where his primary relationships “were with the AI bots which Defendants worked hard to convince him were real people.”