Following CES 2026, OpenAI and Anthropic announced consumer-facing generative AI products for healthcare.
OpenAI launched ChatGPT Health on January 7, 2026, and Anthropic followed with Claude for Healthcare on January 11, 2026. Both products allow users to connect their medical records and wellness data directly to these AI chatbots, marking a significant shift from theoretical benchmark performance to the deployment of consumer health applications.
In our previous blog posts on HealthBench, we examined how OpenAI’s open-source benchmark moved beyond traditional multiple-choice assessments to measure AI model performance across realistic clinical conversations. We also explored the legal and regulatory implications of such benchmarks, including practice of medicine concerns, EU AI Act compliance, and bias. The HealthBench efforts have graduated into ChatGPT Health, which signals the potential for increased investment in, and market entry by, direct-to-consumer healthcare AI companies.
This post examines what these direct-to-consumer healthcare AI platforms mean for patients, providers, and healthcare organizations.
Overview of the New Platforms
ChatGPT Health and ChatGPT for Healthcare
ChatGPT Health is a dedicated space within OpenAI’s ChatGPT platform where users can connect their medical records and wellness applications. Through a partnership with b.well, users can link electronic health records from U.S. healthcare providers, along with data from Apple Health, Function, MyFitnessPal, Weight Watchers, and other platforms. According to OpenAI, over 230 million people globally already ask health and wellness questions on ChatGPT each week.
To emphasize privacy protection, ChatGPT Health stores health-specific conversations, connected apps, and uploaded files separately from other chats. Further, ChatGPT Health conversations purportedly are not used to train OpenAI’s foundation models and are protected by additional encryption beyond the platform’s standard protections. Several major hospitals have already started rolling out ChatGPT Health across their teams.
OpenAI also announced ChatGPT for Healthcare, a separate enterprise product for healthcare organizations that runs on GPT-5 models and includes HIPAA-compliant options with customer-managed encryption keys. This product was specifically built for healthcare workflows and was evaluated by physicians using HealthBench.
Claude for Healthcare
Anthropic’s Claude for Healthcare builds on its earlier Claude for Life Sciences release, adding connectors to industry-standard systems: the Centers for Medicare & Medicaid Services (“CMS”) Coverage Database for coverage determinations, the ICD-10 classification system for medical coding, the National Provider Identifier Registry for credentialing and verification, and PubMed for access to over 35 million biomedical literature citations.
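Neither company has published its connector code, but two of the data sources named above, the NPI Registry and PubMed, expose publicly documented APIs. The Python sketch below shows the kind of lookups a credentialing or literature connector might perform against those public endpoints; it is illustrative only and does not reflect how Claude for Healthcare’s connectors are actually implemented.

```python
# Illustrative sketch only. These are the publicly documented CMS NPI Registry
# (NPPES) and NCBI PubMed E-utilities endpoints; Anthropic's connector
# implementation is not public, and response fields may differ in practice.
import requests

NPI_REGISTRY = "https://npiregistry.cms.hhs.gov/api/"
PUBMED_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def lookup_npi(npi_number: str) -> dict:
    """Look up a provider record in the CMS NPI Registry by NPI number."""
    resp = requests.get(
        NPI_REGISTRY,
        params={"version": "2.1", "number": npi_number},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # The registry returns a result_count and a list of matching provider records.
    return data["results"][0] if data.get("result_count") else {}


def search_pubmed(query: str, max_results: int = 5) -> list[str]:
    """Return PubMed IDs (PMIDs) for a literature query via NCBI E-utilities."""
    resp = requests.get(
        PUBMED_ESEARCH,
        params={"db": "pubmed", "term": query, "retmax": max_results, "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]


if __name__ == "__main__":
    provider = lookup_npi("1234567890")  # hypothetical NPI used for illustration
    print(provider.get("basic", {}))
    print(search_pubmed("GLP-1 agonists cardiovascular outcomes"))
```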
On the consumer side, subscribers can connect HealthEx, Function, Apple Health, and Android Health Connect to give Claude access to their lab results, data from wearables, and health records. Anthropic has also stated that health data is not used to train models and that users control what information they share.
Legal and Risk Issues to Consider with Direct-to-Consumer Health AI
While ChatGPT Health and Claude for Healthcare offer useful capabilities, they also present risks and limitations that users, healthcare providers, and organizations should evaluate before adoption.
Buyer Beware: Vendor-Friendly Terms and Conditions
Users should read the terms of service carefully, as both platforms disclaim liability for the accuracy of health-related outputs and state that their products are not intended for diagnosis or treatment. OpenAI’s terms note that ChatGPT Health “is not intended for use in the diagnosis or treatment of any health condition,” and Anthropic similarly directs users to healthcare professionals for personalized guidance.
These terms place the burden of evaluating AI-generated health information on the user. The relationship between a patient and a licensed healthcare provider carries professional duties, malpractice liability, and regulatory oversight; however, the relationship between a user and a consumer AI platform is governed solely by the contract. Users who rely on AI-generated health information do so largely at their own risk, with limited legal recourse if that information proves inaccurate or harmful.
Taking the Human Out of the Loop
Both platforms state that their consumer products support, rather than replace, medical care. However, the practical effect of integrating medical records with AI chatbots is a shift in how patients educate themselves and determine when to seek professional opinions. When users can ask an AI to interpret their lab results, explain their diagnoses, or suggest questions for their doctors, they may be less inclined to seek professional guidance, and they may also arrive at appointments with AI-driven expectations that differ from their clinician’s assessment.
This is not the first time consumer health technology has raised concerns about patients bypassing clinicians. When WebMD emerged, critics worried that patients would self-diagnose based on generic articles. However, these AI tools go considerably further in several respects. WebMD provided the same static content to every user, whereas ChatGPT Health and Claude for Healthcare integrate with individual medical records, lab results, and wearable data to generate personalized responses. Additionally, WebMD was a reference site users read for a few minutes, while these platforms are conversational and designed for extended back-and-forth dialogue that can feel like consulting a professional. Furthermore, while WebMD articles were authored and reviewed by humans, large language models can hallucinate and generate plausible but incorrect information. The combination of personalization, conversational engagement, and access to a user’s complete health history may lead consumers to place greater trust in these tools than they did in earlier consumer health resources. Though neither platform has a clinician reviewing responses in real time, both were developed with extensive physician review; OpenAI, for example, worked with over 260 physicians across 60 countries to review more than 600,000 model outputs during development.
Privacy Protections Beyond HIPAA
HIPAA applies to clinical and professional healthcare settings, but it does not typically apply to direct-to-consumer applications such as ChatGPT Health and Claude for Healthcare. These platforms operate outside the privacy protections that govern traditional healthcare relationships.
However, other legal frameworks may still apply. Federal consumer protection laws, such as Section 5 of the Federal Trade Commission (“FTC”) Act and the FTC’s Health Breach Notification Rule, may afford protections against deceptive practices and certain privacy harms. Likewise, state privacy laws like the California Consumer Privacy Act (“CCPA”) provide rights regarding the collection, use, and sale of personal health information. Additionally, state consumer protection laws, including prohibitions on unfair and deceptive trade practices, may apply to representations AI companies make about data handling and security. Emerging AI transparency laws at both the state and federal levels may also impose disclosure requirements about how AI systems process health data.
Both ChatGPT Health and Claude for Healthcare implement privacy measures, including compartmentalized storage, encryption, exclusion from model training, and user-controlled permissions. However, these protections are voluntary and contractual rather than mandated by healthcare-specific privacy regulations, and data shared with these platforms could still be subject to subpoenas, court orders, or data breaches, a risk that consumers are unlikely to fully appreciate.
Cybersecurity Risks
Cybersecurity remains a concern with health data, and the aggregation of medical information within AI platforms creates attractive targets for bad actors. As we have previously discussed in our coverage of cybersecurity in healthcare, the healthcare sector continues to face data breach risks affecting millions of patients annually.
Concentrating medical records, wellness data, and health conversations within consumer AI platforms introduces new opportunities for hackers. As we previously reported here, generative AI tools are increasingly being used to assist in hacking and social engineering attacks. While both OpenAI and Anthropic tout their security measures, any system storing sensitive health data presents cybersecurity risks that users should weigh against the platform’s benefits.
Risks in Sensitive Contexts: Mental Health and Beyond
Generative AI chatbots in certain healthcare contexts, particularly mental health, have been alleged to cause harm. Litigation against Character.AI has raised claims that AI chatbot interactions contributed to self-harm and harm to others, including cases involving minors. These lawsuits underscore the risks when AI systems engage with vulnerable users on sensitive health topics without clinical oversight.
As we discussed in our previous coverage of Utah’s AI mental health chatbot law, states are beginning to regulate AI chatbots that engage users in mental health conversations. Utah’s H.B. 452, which took effect in May 2025, imposes disclosure requirements, restricts the use and sale of personal information, limits advertising, and authorizes enforcement by the state’s Division of Consumer Protection. The law applies to AI technology that uses generative AI to engage in conversations similar to those a user would have with a licensed mental health therapist. Other states are likely to follow as harms from AI chatbot interactions continue to surface in litigation and news reports.
ChatGPT Health and Claude for Healthcare include disclaimers and direct users to healthcare professionals. However, conversational AI can simulate empathetic engagement and maintain extended dialogues, which may encourage users to share sensitive information or rely on AI responses beyond what the platforms are designed to handle. Users and healthcare organizations should be especially cautious about AI in mental health contexts, where incorrect or poorly calibrated responses carry serious consequences.
A Testing Ground for Consumer Health AI
ChatGPT Health and Claude for Healthcare will likely serve as testing grounds for determining where consumers are comfortable engaging with generative AI for health purposes.
Some use cases may be relatively low-risk, such as explaining what a lab value means, helping users prepare questions for an upcoming appointment, or tracking fitness metrics over time. Other use cases are more problematic, including interpreting complex symptoms, managing chronic conditions, or addressing mental health concerns. As millions of users interact with these platforms, patterns will emerge about where consumer health AI adds value, where it falls short, and where clinician involvement remains necessary.
This real-world data will inform product development by AI companies and regulatory approaches by policymakers trying to balance innovation with patient protection. Benchmarks like HealthBench, which we discussed in our previous posts, measure AI capabilities in controlled settings; these consumer deployments will show how AI performs in actual healthcare decisions.
Legal and Regulatory Framework
Practice of Medicine and Scope Limitations
Both platforms disclaim that their consumer products are not intended for diagnosis or treatment, and these disclaimers help draw the line between providing general health information and engaging in activities that could constitute unlicensed practice of medicine. As we noted in our previous analysis, HealthBench provides metrics for assessing when an AI system might cross this line.
However, integration with personal medical records raises the stakes considerably. When an AI system has access to a user’s complete medical history, lab results, and ongoing health metrics, its responses become more personalized and potentially closer to what regulators might consider medical advice. State medical boards may scrutinize whether AI systems with access to comprehensive patient data are operating as clinical decision support tools or as direct-to-consumer health resources, a distinction with significant regulatory implications.
EU AI Act and International Considerations
ChatGPT Health is launching outside of the European Economic Area, Switzerland, and the United Kingdom, jurisdictions where additional regulatory requirements apply. In the EU, under the AI Act (the “Act”), AI systems used in healthcare contexts may be classified as “high-risk,” triggering requirements for risk management, technical documentation, human oversight, and transparency.
As we discussed in our analysis of HealthBench’s regulatory implications, evaluation frameworks provide metrics relevant to demonstrating compliance with these requirements. Both OpenAI and Anthropic have emphasized physician validation in developing their healthcare products, which may prove useful as they work toward meeting the Act’s requirements for healthcare AI systems.
Healthcare Industry Implementation
Prior Authorization and Administrative Tasks
Both platforms position administrative efficiency as a selling point for enterprise users. Anthropic’s Claude for Healthcare targets prior authorization workflows, as the CMS Coverage Database connector lets Claude verify coverage requirements, support prior authorization checks, and help build claims appeals. OpenAI’s ChatGPT for Healthcare similarly aims to reduce administrative burden so healthcare workers can spend more time with patients.
Prior authorization requests can take hours to review, slowing patient access to care while frustrating payers and providers. AI tools that automate information gathering and cross-referencing may help address this burden. However, organizations should ensure that human oversight remains in place for coverage determinations that affect patient care and that AI-assisted workflows comply with applicable utilization review and medical necessity requirements.
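As a rough illustration of the human-oversight guardrail described above, the hypothetical Python sketch below treats the AI recommendation as advisory and requires a named human reviewer before any determination is finalized. All class and function names are assumptions for illustration and are not part of ChatGPT for Healthcare or Claude for Healthcare.

```python
# Hypothetical sketch of a human-in-the-loop guardrail for AI-assisted prior
# authorization. Names and structures are illustrative assumptions only.
from dataclasses import dataclass, field
from enum import Enum


class Decision(Enum):
    APPROVE = "approve"
    DENY = "deny"
    NEEDS_MORE_INFO = "needs_more_info"


@dataclass
class AIRecommendation:
    request_id: str
    suggested_decision: Decision
    cited_coverage_criteria: list[str] = field(default_factory=list)


@dataclass
class FinalDetermination:
    request_id: str
    decision: Decision
    reviewed_by: str          # licensed reviewer of record
    ai_assisted: bool = True  # disclose AI involvement for audit purposes


def finalize(
    recommendation: AIRecommendation,
    reviewer_id: str,
    reviewer_decision: Decision,
) -> FinalDetermination:
    """Only a human reviewer's decision becomes the determination of record.

    The AI recommendation is advisory: it is retained for audit but never
    auto-finalized, keeping clinician and payer oversight in place.
    """
    if not reviewer_id:
        raise ValueError("A named human reviewer is required before finalizing.")
    return FinalDetermination(
        request_id=recommendation.request_id,
        decision=reviewer_decision,
        reviewed_by=reviewer_id,
    )


if __name__ == "__main__":
    rec = AIRecommendation("PA-001", Decision.DENY, ["coverage criteria not documented"])
    # The human reviewer may disagree with the AI and approve instead.
    print(finalize(rec, reviewer_id="reviewer-42", reviewer_decision=Decision.APPROVE))
```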
Healthcare organizations considering these tools should conduct due diligence on compliance capabilities, ensure appropriate contractual protections, and implement governance frameworks that maintain clinician oversight of AI-assisted decisions.
Conclusion
The launch of ChatGPT Health and Claude for Healthcare offers consumers powerful new AI tools that leverage health and wellness data in an interface that is easy to use, supports follow-up questions, and works from the comfort of a phone or computer in the user’s preferred language.
However, users and organizations need to understand the limitations of these systems. Consumer use is governed by vendor-friendly terms that place responsibility on users, and these platforms move clinical judgment further from patient encounters, raising questions about clinician oversight. Privacy protections operate largely outside HIPAA, relying instead on state privacy laws, consumer protection frameworks, and emerging AI transparency requirements. Cybersecurity risks are real, and the use of conversational AI in sensitive contexts like mental health has already led to litigation alleging harmful outcomes.
These platforms will serve as testing grounds for understanding where consumer health AI adds value and where traditional clinician involvement remains necessary. Healthcare organizations, policymakers, and users should watch developments closely, keep appropriate guardrails in place, and engage with emerging regulatory frameworks as they develop.