At the end of March, Florida joined the roster of states that have erected legal shields for health care providers against COVID-19-related liability claims. Concerned about uncertainty surrounding the emergency measures taken in response to COVID-19 and the effects that lawsuits could have on the economic recovery and on the ability of health care providers to remain focused on serving the needs of their communities, the Florida Legislature passed CS/SB 72 on March 29, 2021. Governor Ron DeSantis signed CS/SB 72 into law as chapter 2021-1, Laws of Florida. This law creates two new statutory provisions – section 768.38 and section 768.381, Florida Statutes – effective on passage.

What Are the Liability Protections?

Section 768.381, Florida Statutes, provides protection for health care providers against COVID-19-related claims, as follows:

  • Complaints alleging claims subject to the law must be pled with particularity or they will be dismissed. This is a higher pleading standard than is typically required for a civil complaint, demanding a greater degree of specificity.
  • Plaintiffs must prove gross negligence or intentional misconduct. This is a higher standard than ordinary negligence or professional malpractice.
  • Health care providers are provided with several affirmative defenses which, if proven, preclude liability. These defenses primarily relate to a provider’s substantial compliance with government-issued standards regarding COVID-19; substantial compliance with standards regarding infectious disease generally, in the absence of standards specifically applicable to COVID-19; or the inability to comply with applicable standards in light of medical supply shortages.
  • There is a one-year statute of limitations on COVID-19-related claims against health care providers, which is substantially shorter than the limitations periods for simple negligence and medical negligence claims. When the limitations period begins to run depends on whether the claim arises out of the transmission, diagnosis, or treatment of COVID-19, or from other circumstances such as a delayed or canceled procedure. Actions for COVID-19-related claims that accrued before the law’s effective date must commence within one year of the effective date.

Continue Reading Florida Legislature Provides COVID-19 Liability Protection for Health Care Providers

In this episode of the Diagnosing Health Care Podcast: The Centers for Medicare & Medicaid Services (“CMS”) and the Office of Inspector General (“OIG”) of the Department of Health and Human Services have at last published their long-awaited companion final rules advancing value-based care. The rules present significant changes to the regulatory framework of the federal physician self-referral law (commonly referred to as the “Stark Law”) and to the federal Anti-Kickback Statute, or “AKS.”

Epstein Becker Green attorneys Anjali Downs, Jennifer Michael, Lesley Yeung, and Paulina Grabczak give an overview of the final rules and point out key issues health care companies should carefully consider as they take advantage of these value-based care safe harbors and exceptions.

The Diagnosing Health Care podcast series examines the business opportunities and solutions that exist despite the high-stakes legal, policy, and regulatory issues that the health care industry faces. Listen below and subscribe on your favorite podcast platform.

Listen on Apple Podcasts, Google Podcasts, Overcast, Spotify, Stitcher, YouTube, and Vimeo.

In this episode of the Diagnosing Health Care Podcast: Epstein Becker Green attorneys Mark Lutes, Philo Hall, and Timothy Murphy discuss the health-specific portions of the American Rescue Plan, including increased funding for federal oversight activities, changes to public insurance programs, and what these changes might mean for stakeholders.

The Diagnosing Health Care podcast series examines the business opportunities and solutions that exist despite the high-stakes legal, policy, and regulatory issues that the health care industry faces. Listen below and subscribe on your favorite podcast platform.

Listen on Apple Podcasts, Google Podcasts, Overcast, Spotify, Stitcher, YouTube, and Vimeo.

New from the Diagnosing Health Care Podcast: The Biden administration has invoked the Defense Production Act (“DPA”) to speed up the production of vaccines and increase the domestic production of COVID-19 tests, personal protective equipment (or “PPE”), and other essential supplies. Epstein Becker Green attorneys Neil Di Spirito, Constance Wilkinson, and Bonnie Odom discuss the administration’s reliance on the DPA as it continues to operationalize its pandemic response, and the challenges these actions are likely to present for medical product suppliers.

For more, listen to our previous episodes relating to vaccination and supply chain issues.

The Diagnosing Health Care podcast series examines the business opportunities and solutions that exist despite the high-stakes legal, policy, and regulatory issues that the health care industry faces. Subscribe on your favorite podcast platform.

Listen on Apple Podcasts, Google Podcasts, Overcast, Spotify, Stitcher, YouTube, and Vimeo.

Artificial Intelligence (“AI”) applications are powerful tools that have already been deployed by companies to improve business performance across the health care, manufacturing, retail, and banking industries, among many others. From large-scale AI initiatives to smaller AI vendors, AI tools are quickly becoming a mainstream fixture in many industries and will likely spread to many more in the near future.

But are these companies also prepared to defend the use of AI tools should compliance issues arise later? What should companies do before launching AI tools, and what should they do to remain confident about compliance while the AI tools simplify and, hopefully, improve processes? The improper application of AI tools, or improper operation of or outcomes from those tools, can create new types of enterprise risk. While the use of AI in health care presents many opportunities, the enterprise risks that might arise need to be effectively assessed and managed.

But How?

Traditionally, to manage enterprise risk and develop their compliance programs, health care companies have relied upon the multitude of guidance published by the Office of Inspector General of the Department of Health and Human Services (“OIG”), by industry associations such as the Health Care Compliance Association, and by other federal, state, and industry-specific sources. Compliance guidance focused specifically on the use of AI tools in health care is lacking at this time. However, the National Defense Authorization Act (NDAA), which became law on January 1, 2021, includes the most significant U.S. legislation concerning AI to date: the National Artificial Intelligence Initiative Act of 2020 (NAIIA). The NAIIA mandates the establishment of various governance bodies, in particular the National Artificial Intelligence Advisory Committee, which will advise on matters relating to oversight of AI using regulatory and nonregulatory approaches while balancing innovation and individual rights.

In the absence of specific guidance, companies can look to existing compliance program frameworks, e.g., the seven elements constituting an effective compliance program as identified by OIG, to develop a reliable and defensible compliance infrastructure. While we can lean on this existing framework as a guide, additional consideration needs to be devoted to developing an AI compliance program that is specific and customized to the particular AI solution at hand.

What policies will govern human conduct in the use and monitoring of the AI tool? Who has the authority to launch the AI tool? Who has the authority to recall it? What is the back-up service if the tool must be taken offline? Written policies and procedures can help, and a sketch of how the answers to these questions might be documented appears below.
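To make the point concrete, consider how the answers to those questions could be recorded in an auditable form. The Python sketch below is purely hypothetical: the class name, fields, roles, and tool name are assumptions made for illustration, not a prescribed or industry-standard schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import List

# Hypothetical illustration: every name and field here is an assumption,
# not a required or industry-standard governance schema.

@dataclass
class AIGovernanceRecord:
    tool_name: str
    intended_use: str             # e.g., "pre-authorization recommendations"
    launch_approvers: List[str]   # roles with authority to approve go-live
    recall_authority: List[str]   # roles with authority to pull the tool from use
    fallback_process: str         # back-up workflow if the tool is recalled
    monitoring_cadence_days: int  # how often the tool's outputs are audited
    last_reviewed: date

    def review_overdue(self, today: date) -> bool:
        """Flag the tool for compliance review if the monitoring window lapsed."""
        return (today - self.last_reviewed).days > self.monitoring_cadence_days


# Usage: a record for a hypothetical prior-authorization tool.
record = AIGovernanceRecord(
    tool_name="prior-auth-assist",
    intended_use="pre-authorization recommendations",
    launch_approvers=["Chief Compliance Officer", "Chief Medical Officer"],
    recall_authority=["Chief Compliance Officer"],
    fallback_process="route requests to the human utilization-review team",
    monitoring_cadence_days=90,
    last_reviewed=date(2020, 12, 1),
)

if record.review_overdue(date(2021, 3, 23)):
    print(f"{record.tool_name}: compliance review is overdue")
```

Whether such a record lives in code, a policy manual, or a governance platform matters less than that the launch, recall, and back-up questions have documented, reviewable answers.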

***

To learn more about the ways in which existing corporate compliance program policies can be applied to the use of AI tools, please join us at Epstein Becker Green’s virtual briefing on Bias in Artificial Intelligence: Legal Risks and Solutions on March 23 from 1:00 – 4:00 p.m. (ET). To register, please click here.

The Illinois Coalition to Protect Telehealth, a coalition of more than thirty Illinois healthcare providers and patient advocates, announced its support for a bill that would, among other things, establish payment parity for telehealth services and permanently eliminate geographic and facility restrictions beyond the COVID-19 pandemic. Like many states, Illinois issued an executive order at the outset of the pandemic temporarily lifting longstanding barriers to consumer access to telehealth via commercial health plans and Medicaid.[1]  The executive order expanded the definition of telehealth services, loosened geographical restrictions on physician licensing requirements, and barred private insurers from charging copays and deductibles for in-network telehealth visits.

Now, House Bill 3498 seeks to make permanent some of those temporary waivers by aligning coverage and reimbursement for telehealth services with in-person care. If enacted, it would also establish that patients could no longer be required to use an exclusive panel of providers or professionals to receive telehealth services, nor would they be required to prove a hardship or access barrier in order to receive those services.  The bill does not include a provision that would permanently allow out-of-state physicians or health care providers to provide services in the state beyond the pandemic.[2]

In its announcement of support for this bill, the Coalition states that the use of telehealth over the last year has shown increased adherence to patient care plans and improved chronic disease management. “In recent surveys, over 70% of Illinois hospital respondents and 78% of community-based behavioral healthcare respondents reported that telehealth has helped drive a reduction in the rates at which patients missed appointments. Surveys of Illinois physicians, community health centers, and specialized mental health and substance use disorder treatment providers have also revealed similar dramatic reductions in missed appointments.”

Continue Reading Illinois Coalition Backs Telehealth Bill Supporting Payment Parity Beyond COVID-19 Pandemic

Medical providers are often asked, or feel obligated, to disclose confidential information about patients. This blog post discusses disclosures of confidential medical information that involve law enforcement, but the general principles discussed herein are instructive in any scenario. To protect patient confidentiality and avoid costly civil liability arising from improper disclosures, it is imperative that providers ask questions to assess the urgency of any request and to understand for what purpose the information is sought by authorities. Knowing what questions to ask at the outset prepares providers to make informed decisions about disclosing confidential information in a manner that balances the obligation to maintain patient confidentiality and trust with legitimate law enforcement requests for information aimed at protecting the public.

Continue Reading Responding to Law Enforcement Demands for HIPAA Protected Information

Alaap B. Shah and Nivedita B. Patel, attorneys in the Health Care & Life Sciences practice, in the firm’s Washington, DC, office, co-authored an article in MobiHealthNews, titled “Unlocking Value in Health Data: Truveta’s Data Monetization Strategy Carries Big Risks and Responsibilities.”

Following is an excerpt:

In today’s world, data is power. Healthcare providers have massive amounts of rich health data at their fingertips. Yet historically, third-party vendors to healthcare providers often have derived financial benefits from secondary use of this data through aggregating and brokering de-identified data to downstream customers.

That is beginning to change as healthcare providers are taking back control of their data assets.

Truveta, Inc., a new startup led by 14 of the largest health systems in the U.S., was formed to pool their vast and diverse data in order to take back control over how their patients’ de-identified data is shared and used. Truveta’s goal is to leverage patient data to improve patient care, address health inequity, accelerate the development of treatments and reduce the time to make a diagnosis.

The company will have access to de-identified data representing approximately 13% of patient records in the U.S. This amalgamation will result in more diversified data sets varying by diagnosis, geography and demographics, and can significantly expand the opportunities for secondary analytics uses of that data.

The success of such a massive undertaking with so many stakeholders requires that good data stewardship be central to the endeavor. As healthcare providers begin to leverage their data to derive knowledge and ultimately gain wisdom about how better to care for their patients, they will bear a greater responsibility to ensure the privacy and security of the health data their patients trust them to safeguard.

Failure to afford the appropriate safeguards in terms of how data is collected, aggregated, de-identified, shared and ultimately utilized could result in the demise of this sort of big data collaboration.

Click here to read the full article on MobiHealthNews.

Our colleagues Stuart Gerson and Daniel Fundakowski of Epstein Becker Green have a new post on SCOTUS Today that will be of interest to our readers: “Court Declines Resolving Circuit Split on What Constitutes a ‘False’ Claim, but Will Consider Legality of Trump Abortion Gag Rule.”

The following is an excerpt:

While this blog usually is confined to the analysis of the published opinions of the Supreme Court, several of this morning’s orders are worthy of discussion because of their importance to health care lawyers and policy experts. Guest editor Dan Fundakowski joins me in today’s unpacking of the Court’s rulings.

First, in Cochran v. Mayor and City Council of Baltimore; Oregon v. Cochran; and American Medical Association v. Cochran, the Court granted cert. to review a regulation promulgated by the Trump Department of Health and Human Services that would bar doctors who receive federal funds for family planning services from referring patients to abortion providers. The Ninth Circuit has upheld the regulation, but the Fourth has held it unlawful and enjoined its effectuation on a nationwide basis. The ramifications of this dispute for Medicaid providers and others are obvious, and it will be a point of interest as the Biden administration moves ahead in ways substantially different from its predecessor. It could, for example, moot the cases by repealing the regulation.

Health care litigators have, for some time, urged the Court to decide whether, under the False Claims Act (“FCA”), “falsity” must be based on objectively verifiable facts. In other words, for example, does a conflict of opinion between experts negate a finding of falsity with respect to a decision as to medical necessity or coding of a health care procedure? There has been increasing division among the Circuit Courts of Appeals on this subject, and to the chagrin of practitioners, that division is going to be unresolved for some time, as the Supreme Court has denied cert. in two qui tam FCA cases that we have been closely monitoring: United States ex rel. Druding v. Care Alternatives, 952 F.3d 89 (3d Cir. 2020) and United States ex rel. Winter v. Gardens Regional Hospital & Medical Center, Inc., 953 F.3d 1108 (9th Cir. 2020). While the FCA requires that claims be “false or fraudulent” in order to give rise to liability, the statute does not define those terms, and this has proved a major issue in dispute in the context of claims related to clinical judgments.

Click here to read the full post and more on SCOTUS Today.

The application of artificial intelligence technologies to health care delivery, coding, and population management may profoundly alter the manner in which clinicians and others interact with patients and seek reimbursement. While AI may promote better treatment decisions and streamline onerous coding and claims submission, there are risks associated with unintended bias that may be lurking in the algorithms. AI is trained on data. To the extent that data encodes historical bias, that bias may cause unintended errors when the tool is applied to new patients. This can result in errors in utilization management, coding, billing, and health care delivery.

The following hypothetical illustrates the problem.

A physician practice management service organization (MSO) adopts a third-party software tool to assist its personnel in making treatment decisions for both its fee-for-service population and a Medicare Advantage population for which the MSO is at financial risk. The tool is used for both pre-authorizations and ICD diagnostic coding for Medicare Advantage patients, without the need for human coders.

The MSO’s compliance officer observes two issues:

  1. It appears that Native American patients seeking substance abuse treatment are being approved by the MSO’s team far more frequently than other cohorts seeking the same care, and
  2. Since the deployment of the software, the MSO is realizing increased risk adjustment revenue attributable to a significant increase in rheumatic condition codes being identified by the AI tool.

Though the compliance officer doesn’t have any independent studies to support it, she is comfortable that the program is making appropriate substance abuse treatment and utilization management recommendations because she believes that there may be a genetic reason why Native Americans are at greater risk than others. With regard to the diagnostic coding, she:

  1. is also comfortable with the vendor’s assurances that their software is more accurate than eyes-on coding;
  2. understands that prevalence data suggest that the elderly population in the United States likely has undiagnosed rheumatic conditions; and
  3. finds through her own investigation that, anecdotally, the software, while perhaps over-inclusive, appears to be catching some diagnoses that could have been missed by the clinician alone.

Is the compliance officer’s comfort warranted?

The short answer is, of course, no.

There are two fundamental issues that the compliance officer needs to identify and investigate – both related to possible bias. First, is the tool authorizing unnecessary substance use disorder treatments for Native American patients (overutilization) while at the same time not approving medically necessary treatments for other ethnicities (underutilization)? Overutilization drives health spend and can result in payment errors, and underutilization can result in improper denials, patient harm, and legal exposure. The second issue relates to the AI tool potentially “finding” diagnostic codes that, while statistically supportable based on the population data the vendor used in the training set, might not be supported in the MSO’s population. This error can result in the submission of unsupported codes that drive risk adjustment payment, which can carry significant legal and financial exposure.
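How might a compliance officer test for the first issue rather than rely on intuition? One simple starting point is to pull the tool’s decision log and compare approval rates across demographic cohorts, flagging large gaps for closer clinical and statistical review. The Python sketch below is a hypothetical illustration: the sample data, cohort labels, and the 0.8 threshold (borrowed from the “four-fifths” rule of thumb used in disparate-impact analysis) are assumptions, not a legal standard for utilization management.

```python
from collections import defaultdict

# Illustrative sketch only: the decision log, cohort labels, and threshold
# below are assumptions made for demonstration, not real data or a mandate.

def approval_rates(decisions):
    """decisions: iterable of (cohort, approved) pairs, approved being a bool."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for cohort, ok in decisions:
        total[cohort] += 1
        approved[cohort] += ok  # True counts as 1
    return {c: approved[c] / total[c] for c in total}

def flag_disparities(rates, threshold=0.8):
    """Return cohorts whose approval rate falls well below the highest rate."""
    top = max(rates.values())
    return {c: r for c, r in rates.items() if r / top < threshold}

# Hypothetical decision log pulled from the tool's audit trail.
decisions = [
    ("cohort_a", True), ("cohort_a", True), ("cohort_a", True), ("cohort_a", False),
    ("cohort_b", True), ("cohort_b", False), ("cohort_b", False), ("cohort_b", False),
]

rates = approval_rates(decisions)
print(rates)                    # {'cohort_a': 0.75, 'cohort_b': 0.25}
print(flag_disparities(rates))  # {'cohort_b': 0.25} -- warrants investigation
```

A flagged disparity is not itself proof of bias, but it tells the compliance officer where independent review should focus – which is precisely what is missing from the hypothetical above.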

To learn more about the legal risks of and solutions to bias in AI, please join us at Epstein Becker Green’s virtual briefing on Bias in Artificial Intelligence: Legal Risks and Solutions on March 23 from 1:00 – 4:00 p.m. (ET). To register, please click here.