Those in the tech world and in medicine alike see potential in the use of AI chatbots to support mental health—especially when human support is unavailable, or therapy is unwanted.
Others, however, see the risks—especially when chatbots designed for entertainment purposes can disguise themselves as therapists.
So far, some lawmakers agree with the latter. In April, U.S. Senators Peter Welch (D-Vt.) and Alex Padilla (D-Calif.) sent letters to the CEOs of three leading artificial intelligence (AI) chatbot companies asking them to outline, in writing, the steps they are taking to ensure that the human interactions with these AI tools “are not compromising the mental health and safety of minors and their loved ones.”
The concern was real: in October 2024, a Florida parent filed a wrongful death lawsuit in federal district court, alleging that her son committed suicide with a family member’s gun after interacting with an AI chatbot that enabled users to interact with “conversational AI agents, or ‘characters.’” The boy’s mental health allegedly declined to the point where his primary relationships “were with the AI bots which Defendants worked hard to convince him were real people.”
The Florida lawsuit also claims that the interactions with the chatbot became highly sexualized and that the minor discussed suicide with the chatbot, saying that he wanted a “pain-free death.” The chatbot allegedly responded, “That’s not a reason not to go through with it.”
Another lawsuit, filed in Texas, claims that a chatbot commiserated with a minor over the screen-time limits the minor’s parents had imposed on phone use, referencing news headlines such as “child kills parents.”
In February 2025, the American Psychological Association urged regulators and legislators to adopt safeguards. In their April 2 letters described above, the senators informed the CEOs that the attention that users receive from the chatbots can lead to “dangerous levels of attachment and unearned trust stemming from perceived social intimacy.”
“This unearned trust can [lead], and has already [led], users to disclose sensitive information about their mood, interpersonal relationships, or mental health, which may involve self-harm and suicidal ideation—complex themes that the AI chatbots on your products are wholly unqualified to discuss,” the senators asserted.
Utah’s Solution
States are taking note. In line with national objectives, Utah is embracing AI technology and innovation while still focusing on ethical use, protecting personal data/privacy, ensuring transparency, and more.
Several of these new Utah laws address AI’s impact across industries and have broad-reaching implications for a variety of sectors. For example:
- The Artificial Intelligence Policy Act (S.B. 149) establishes an “AI policy lab” and creates a number of protections for users and consumers of AI, including requirements for healthcare providers to prominently disclose any use of generative AI in patient treatment.
- The AI Consumer Protection Amendments (S.B. 226) limit requirements regarding the use of AI to high-risk services.
- The Unauthorized Artificial Intelligence Impersonation Amendments (S.B. 271) protect creators by prohibiting the unauthorized monetization of art and talent.
Utah’s latest AI-related initiatives also include H.B. 452, which took effect May 7 and which creates a new code section titled “Artificial Intelligence Applications Relating to Mental Health.” This new code section imposes significant restrictions on mental health chatbots using AI technology. Specifically, the new law:
- establishes protections for users of mental health chatbots using AI technology;
- prohibits certain uses of personal information by a mental health chatbot;
- requires disclosures to users that a mental health chatbot is AI technology, as opposed to a human;
- places enforcement authority in the state’s division of consumer protection;
- contains requirements for creating and maintaining chatbot policies; and
- contains provisions relating to suppliers who comply with policy requirements.
We summarize the key highlights below.
H.B. 452: Regulation of Mental Health Chatbots Using AI Technology
Definitions. Section 13-72a-101 defines a “mental health chatbot” as AI technology that:
- Uses generative AI to engage in interactive conversations with a user, similar to the confidential communications that an individual would have with a licensed mental health therapist; and
- A supplier represents, or a reasonable person would believe, can or will provide mental health therapy or help a user manage or treat mental health conditions.
“Mental health chatbot” does not include AI technology that only:
- Provides scripted output (guided meditations, mindfulness exercises); or
- Analyzes an individual’s input for the purpose of connecting the individual with a human mental health therapist.
Protection of Personal Information. Section 13-72a-201 provides that a supplier of a mental health chatbot may not sell to or share with any third party: 1) individually identifiable health information of a Utah user; or 2) the input of a Utah user. The law exempts individually identifiable health information—defined as any information relating to the physical or mental health of an individual—that is requested by a health care provider, with user consent, or provided to a health plan of a Utah user upon request.
A supplier may share individually identifiable health information necessary to ensure functionality of the chatbot if the supplier has a contract related to such functionality with another party, but both the supplier and the third party must comply with all applicable privacy and security provisions of 45 C.F.R. Part 160 and Part 164, Subparts A and E (see the Privacy Rule of the Health Insurance Portability and Accountability Act of 1996 (HIPAA)).
Advertising Restrictions. Section 13-72a-202 states that a supplier may not use a mental health chatbot to advertise a specific product or service absent clear and conspicuous identification of the advertisement as an advertisement, as well as any sponsorship, business affiliation, or third-party agreement regarding promotion of the product or service. The chatbot is not prohibited from recommending that the user seek assistance from a licensed professional.
Disclosure Requirements. Section 13-72a-203 provides that a supplier shall cause the mental health chatbot to clearly and conspicuously disclose to a user that the chatbot is AI and not human—before the chatbot features are accessed; before any interaction if the user has gone seven days without access; and any time a user asks or prompts the chatbot about whether AI is being used.
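For illustration only, the short Python sketch below shows one way a supplier might encode the statute’s three disclosure triggers in application logic. It is not part of the statute or any actual product; the function name, the stored last-access timestamp, and the disclosure wording are hypothetical assumptions made for the example.

```python
from datetime import datetime, timedelta

# Hypothetical illustration of the three disclosure triggers in Section
# 13-72a-203: before chatbot features are first accessed, before any
# interaction after seven days without access, and whenever the user asks
# whether AI is being used. All names and wording here are assumptions.
INACTIVITY_WINDOW = timedelta(days=7)

AI_DISCLOSURE = (
    "You are interacting with an artificial intelligence program, "
    "not a human or a licensed mental health therapist."
)

def disclosure_required(
    last_access: datetime | None,
    now: datetime,
    user_asked_about_ai: bool,
) -> bool:
    """Return True if the AI disclosure must be shown before responding."""
    if user_asked_about_ai:      # user asks or prompts about whether AI is used
        return True
    if last_access is None:      # chatbot features accessed for the first time
        return True
    return now - last_access >= INACTIVITY_WINDOW  # seven or more days idle

if __name__ == "__main__":
    now = datetime.now()
    print(disclosure_required(None, now, False))                     # True: first access
    print(disclosure_required(now - timedelta(days=2), now, False))  # False: recent session
    print(disclosure_required(now - timedelta(days=8), now, False))  # True: 7+ days idle
    print(disclosure_required(now - timedelta(hours=1), now, True))  # True: user asked
```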
Affirmative Defense. Section 58-60-118 allows for an affirmative defense to liability in an administrative or civil action alleging a violation if the supplier demonstrates that it:
- created, maintained, and implemented a written policy, filed with the state’s Division of Consumer Protection, which it complied with at the time of the violation; and
- maintained documentation regarding the development and implementation of the chatbot that describes its foundation models, training data, compliance with federal health privacy regulations, and user data collection and sharing practices.
The law also contains specific requirements regarding the policy and the filing.
Takeaways
A violation of the Utah statute carries an administrative fine of up to $2,500 per violation, and the state’s Division of Consumer Protection may bring an action in court to enforce the statute. The attorney general may also bring a civil action on behalf of the Division. As chatbots become more sophisticated, and as more mental health-related harms come to light, other states are sure to follow Utah’s lead.
Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.