Posts by Hemant Gupta
Imagine this scenario: a longtime patient at an ENT practice decides to leave the traffic and sprawl of a major metropolitan area for a more idyllic, rural existence elsewhere in the state. Accustomed to the familiar, top-ranked brands of excellent hospitals, however, the patient is unsure what quality of care to expect in the new location. Fortunately, posters on the walls in the old and new locations, websites, and postcards in the mail—with the same familiar names and logos—immediately reassure the patient that the health professionals in this new location are not only as good as those back home but are affiliated with them.
In today's competitive health care landscape, hospitals are increasingly exploring innovative ways to expand their market presence and generate additional revenue streams. One particularly effective strategy is brand licensing to urgent care facilities. Becker’s Health IT, in fact, has reported on Monigle’s rankings of the 30 most trusted health system brands for 2024 and the 25 “most human” health system brands for 2025. This post explores key opportunities, challenges, and best practices for hospital administrators considering brand licensing programs.
As we noted in our previous blog post, HealthBench, an open-source benchmark developed by OpenAI, measures model performance across realistic health care conversations, providing a comprehensive assessment of both capabilities and safety guardrails that better align with the way physicians actually practice medicine. In this post, we discuss the legal and regulatory questions HealthBench addresses, the tool’s practical applications within the health care industry, and its significance in shaping the future of artificial intelligence (AI) in medicine.
The Evolution of Health Care AI Benchmarking
Artificial intelligence (AI) foundation models have demonstrated impressive performance on medical knowledge tests in recent years, with developers proudly announcing that their systems had “passed” standardized medical licensing exams or even “outperformed” physicians on them. Headlines touted AI systems achieving scores of 90% or higher on the United States Medical Licensing Examination (USMLE) and similar assessments. However, these multiple-choice evaluations presented a fundamentally misleading picture of AI readiness for health care applications. As we previously noted in our analysis of AI/ML growth in medicine, a significant gap remains between theoretical capabilities demonstrated in controlled environments and practical deployment in clinical settings.
These early benchmarks—predominantly structured as multiple-choice exams or narrow clinical questions—failed to capture how physicians actually practice medicine. Real-world medical practice involves nuanced conversations, contextual decision-making, appropriate hedging in the face of uncertainty, and patient-specific considerations that extend far beyond selecting the correct answer from a predefined list. The gap between benchmark performance and clinical reality remains largely unexamined.
Those in the tech world and in medicine alike see potential in the use of AI chatbots to support mental health—especially when human support is unavailable or therapy is unwanted. Others, however, see the risks—especially when chatbots designed for entertainment purposes can disguise themselves as therapists.
So far, some lawmakers agree with the latter. In April, U.S. Senators Peter Welch (D-Vt.) and Alex Padilla (D-Calif.) sent letters to the CEOs of three leading artificial intelligence (AI) chatbot companies asking them to outline, in writing, the steps they are taking to ensure that the human interactions with these AI tools “are not compromising the mental health and safety of minors and their loved ones.”
The concern was real: in October 2024, a Florida parent filed a wrongful death lawsuit in federal district court, alleging that her son committed suicide with a family member’s gun after using an AI chatbot that enabled users to interact with “conversational AI agents, or ‘characters.’” The boy’s mental health allegedly declined to the point where his primary relationships “were with the AI bots which Defendants worked hard to convince him were real people.”