Imagine going online to chat with someone and finding an account with a profile photo, a description of where the person lives, and a job title . . . indicating she is a therapist. You begin chatting and, because the conversation flows easily, you share the highs and lows of your day, among other intimate details of your life. Only the “person” with whom you are chatting is not a person at all; it is a “companion AI.”

Recent statistics indicate a dramatic rise in the adoption of companion AI chatbots: 88% year-over-year growth, over $120 million in annual revenue, and 337 active apps (including 128 launched in 2025 alone). Adoption among youth is similarly pervasive: three of every four teens have used companion AI at least once, and half use it routinely. In response to these trends, and to the potential negative impacts on mental health in particular, state legislatures are quickly stepping in to require transparency, safety, and accountability to manage the risks associated with this new technology, particularly as it pertains to children.

As we noted in our October 7 blog on the subject, state legislatures are moving quickly to find solutions to the disturbing mental health issues arising from use of this technology—even as the federal push for innovation threatens to displace state AI regulation, as we reported in July. For example, New York’s S. 3008, Artificial Intelligence Companion Models, effective November 5, 2025, was one of the first laws addressing these issues. It mandates a protocol for identifying suicidal ideation and requires notifications, at the beginning of every interaction and every three hours thereafter, that the companion AI is not human. California’s recent SB 243, whose reporting requirements take effect July 1, 2027, adopts provisions similar to New York’s law.

California has emerged as one of the leaders, if not the bellwether, of state AI regulation impacting virtually every private sector industry, as it seeks to impose accountability and standards to ensure the transparent, safe design and deployment of AI systems. Indeed, SB 243 is one of several laws that California Governor Gavin Newsom signed in October 2025 that relate to the protection of children online. Spurred by concern that minors, in particular, have harmed themselves or others after becoming addicted to AI chatbots, these laws seek to prevent “AI psychosis,” a popular term, if not yet a medical diagnosis. Like New York’s S. 3008, California’s SB 243 imposes requirements on developers of companion AI to take steps designed to reduce adverse effects on users’ mental health. Unlike New York’s S. 3008, however, it authorizes a private right of action for persons suffering an injury as a result of noncompliance. Remedies include damages of the greater of actual damages or $1,000 per violation, as well as attorney’s fees and costs.

SB 243 does not impose requirements on providers or others engaged in the provision of mental health care. By contrast, as we previously noted, California’s AB 489, signed into law on October 11, does regulate the provision of mental health care via companion AI. It expands the application of existing laws relating to unlicensed health care professionals to entities developing or deploying AI. AB 489 prohibits a “person or entity who develops or deploys [an AI] system or device” from stating or implying that the AI output is provided by a licensed health care professional. We further examine California’s new laws, SB 243 and AB 489, below.

SB 243’s Disclosure and Protocol Requirements

SB 243 adds a new Chapter 22.6, Sections 22601 to 22605, to Division 8 of the Business and Professions Code, imposing requirements on “operators,” meaning deployers, or persons “who [make] a companion chatbot platform available to a user in the state[.]” Operators must maintain a protocol for preventing the production of suicidal ideation, suicide, or self-harm content to the user, including through crisis service provider referrals, and must publish details of those protocols on their websites.

If the user is an adult, the operator must:

  • issue a clear and conspicuous notification indicating that the chatbot is AI and not human, if a reasonable person would be misled into believing that they are interacting with a human.

If the deployer or operator “knows [the user] is a minor,” it must:

  • disclose that users are interacting with AI;
  • provide a clear and conspicuous notification every three hours that the user should take a break and that the chatbot is AI and not human; and
  • institute reasonable measures to prevent the chatbot from producing visual material of sexually explicit conduct or directly stating that the minor should engage in sexually explicit conduct.

The law is silent as to how the deployer or operator should ascertain whether the user is an adult or minor. 

SB 243’s Reporting Requirements

SB 243 requires operators, beginning July 1, 2027, to report annually to the Office of Suicide Prevention (“OSP”) of the California Department of Public Health the following information:

  • the number of times in the preceding calendar year that the operator issued the crisis service provider referral notification described above;
  • protocols put in place to detect, remove, and respond to instances of suicidal ideation by users; and
  • protocols put in place to prohibit companion chatbot responses about suicidal ideation or actions with the user (using evidence-based methods to measure suicidal ideation).

The OSP is then required to post data from the reports on its website.

AB 489’s Requirements Addressing Impersonation of a Licensed Professional

AB 489 adds a new Chapter 15.5 to Division 2 of the Business and Professions Code to provide that prohibited terms, letters, or phrases that misleadingly indicate or imply possession of a license or certificate to practice a health care profession—terms, letters, and phrases that are already prohibited by, for example, the state Medical Practice Act or the Dental Practice Act—are also prohibited for developers and deployers of AI or generative AI (GenAI) systems. (Note that AI and GenAI are already defined in Section 11549.64 of California’s Government Code.)

The law prohibits use of a term, letter, or phrase in the advertising or functionality of an AI or GenAI system, program, device, or similar technology that indicates or implies that the care, advice, reports, or assessments offered through the AI or GenAI technology are being provided by a natural person in possession of the appropriate license or certificate to practice as a health care professional. Each use is considered a separate violation.

Enforcement Under SB 243 and AB 489

SB 243 provides that a successful plaintiff bringing a civil action under the law may recover injunctive relief and damages equal to the greater of actual damages or $1,000 per violation, as well as reasonable attorney’s fees and costs. AB 489, by contrast, subjects developers and deployers to the jurisdiction of “the appropriate health care professional licensing board or enforcement agency.”

As a result of these novel laws, developers and deployers should consult legal counsel to understand these new requirements and develop appropriate compliance mechanisms.

Other Bills that Address Impersonation and Disclosure

California has approved multiple bills relating to AI in recent weeks.  As indicated by the governor’s October 13 press release—which lists 16 signed laws—many are aimed at protecting children online. While this blog focuses on laws related to mental health chatbots, these laws do not exist in a vacuum.  California and other states are becoming more serious about regulating AI from a transparency, safety, and accountability perspective.  Further, existing federal laws and regulations administered by the U.S. Food and Drug Administration (which apply to digital therapeutic products and others), the Federal Trade Commission, and other agencies may also regulate certain AI chatbots, depending upon how they are positioned and what claims are made regarding their operation, benefits, and risks. If you have questions or need advice on how to navigate these emerging AI regulations, please reach out to the authors.

Epstein Becker Green Staff Attorney Ann W. Parks contributed to the preparation of this post.
