Our colleagues Stuart Gerson and Daniel Fundakowski of Epstein Becker Green have a new post on SCOTUS Today that will be of interest to our readers: “Court Declines Resolving Circuit Split on What Constitutes a ‘False’ Claim, but Will Consider Legality of Trump Abortion Gag Rule.”

The following is an excerpt:

While this blog usually is confined to the analysis of the published opinions of the Supreme Court, several of this morning’s orders are worthy of discussion because of their importance to health care lawyers and policy experts. Guest editor Dan Fundakowski joins me in today’s unpacking of the Court’s rulings.

First, in Cochran v. Mayor and City Council of Baltimore; Oregon v. Cochran; and American Medical Association v. Cochran, the Court granted cert. to review a regulation promulgated by the Trump Department of Health and Human Services that would bar doctors who receive federal funds for family planning services from referring patients to abortion providers. The Ninth Circuit has upheld the regulation, but the Fourth has held it unlawful and enjoined its effectuation on a nationwide basis. The ramifications of this dispute for Medicaid providers and others are obvious, and it will be a point of interest as the Biden administration moves ahead in ways substantially different from its predecessor. It could, for example, moot the cases by repealing the regulation.

Health care litigators have, for some time, urged the Court to decide whether, under the False Claims Act (“FCA”), “falsity” must be based on objectively verifiable facts. In other words, for example, does a conflict of opinion between experts negate a finding of falsity with respect to a decision as to medical necessity or coding of a health care procedure? There has been increasing division among the Circuit Courts of Appeals on this subject, and to the chagrin of practitioners, that division is going to remain unresolved for some time, as the Supreme Court has denied cert. in two qui tam FCA cases that we have been closely monitoring: United States ex rel. Druding v. Care Alternatives, 952 F.3d 89 (3d Cir. 2020), and United States ex rel. Winter v. Gardens Regional Hospital & Medical Center, Inc., 953 F.3d 1108 (9th Cir. 2020). While the FCA requires that claims be “false or fraudulent” in order to give rise to liability, the statute does not define those terms, and this has proved a major issue in dispute in the context of claims related to clinical judgments.

Click here to read the full post and more on SCOTUS Today.

The application of artificial intelligence technologies to health care delivery, coding, and population management may profoundly alter the manner in which clinicians and others interact with patients and seek reimbursement. On one hand, AI may promote better treatment decisions and streamline onerous coding and claims submission; on the other, there are risks associated with unintended bias that may be lurking in the algorithms. AI is trained on data. To the extent that data encodes historical bias, that bias may cause unintended errors when applied to new patients. This can result in errors in utilization management, coding, billing, and health care delivery.
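To make that mechanism concrete, here is a minimal, hypothetical Python sketch. The cohort names, the data, and the 50% decision threshold are all illustrative assumptions, not a description of any real tool: a system that simply learns historical approval rates per patient group will reproduce whatever bias is encoded in that history.

```python
from collections import defaultdict

def train_approval_model(history):
    """history: list of (group, approved) pairs from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in history:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    # The learned "policy": approve a group whenever its historical approval
    # rate exceeds 50% -- so historical skew becomes future policy.
    return {g: (a / t) > 0.5 for g, (a, t) in counts.items()}

# Hypothetical history that over-approved cohort_a and under-approved cohort_b.
history = ([("cohort_a", True)] * 9 + [("cohort_a", False)]
           + [("cohort_b", True)] * 2 + [("cohort_b", False)] * 8)

policy = train_approval_model(history)
print(policy)  # {'cohort_a': True, 'cohort_b': False}
```

Nothing about the code is malicious; the disparity is inherited entirely from the training data, which is why it can be hard to spot without a deliberate audit.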

The following hypothetical illustrates the problem.

A physician practice management service organization (MSO) adopts a third-party software tool to assist its personnel in making treatment decisions for both a fee-for-service population and a Medicare Advantage population for which the MSO is at financial risk. The tool is used for both pre-authorizations and ICD diagnostic coding for Medicare Advantage patients, without the need for human coders.

The MSO’s compliance officer observes two issues:

  1. It appears Native American patients seeking substance abuse treatment are being approved by the MSO’s team far more frequently than other cohorts seeking the same care, and
  2. Since the deployment of the software, the MSO has realized increased risk adjustment revenue attributable to a significant increase in rheumatic condition codes identified by the AI tool.

Though the compliance officer doesn’t have any independent studies to support it, she is comfortable that the program is making appropriate substance abuse treatment and utilization management recommendations because she believes there may be a genetic reason why Native Americans are at greater risk than others. With regard to the diagnostic coding, she:

  1. is also comfortable with the vendor’s assurances that its software is more accurate than eyes-on coding;
  2. understands that prevalence data suggests that the elderly population in the United States likely has undiagnosed rheumatic conditions; and,
  3. finds through her own investigation that, anecdotally, the software, while perhaps over-inclusive, is catching some diagnoses that could have been missed by the clinician alone.

Is the compliance officer’s comfort warranted?

The short answer is, of course, no.

There are two fundamental issues that the compliance officer needs to identify and investigate – both related to possible bias. First, is the tool authorizing unnecessary substance use disorder treatments for Native Americans (overutilization) while failing to approve medically necessary treatments for other ethnicities (underutilization)? Overutilization drives health care spending and can result in payment errors, and underutilization can result in improper denials, patient harm, and legal exposure. The second issue relates to the AI tool potentially “finding” diagnostic codes that, while statistically supportable based on the population data the vendor used in the training set, might not be supported in the MSO’s population. This error can result in the submission of unsupported codes that drive risk adjustment payment, which can carry significant legal and financial exposure.
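As a purely illustrative sketch of the kind of first-pass monitoring the compliance officer skipped (the cohort names, the data, and the 20% threshold are assumptions for illustration, not a compliance standard), one could compare each cohort’s approval rate against the overall rate and flag outliers for human review:

```python
def flag_disparities(decisions, threshold=0.20):
    """decisions: (cohort, approved) pairs; returns cohorts whose approval
    rate differs from the overall rate by more than the threshold."""
    overall_rate = sum(approved for _, approved in decisions) / len(decisions)
    totals = {}  # cohort -> (approved, total)
    for cohort, approved in decisions:
        a, t = totals.get(cohort, (0, 0))
        totals[cohort] = (a + int(approved), t + 1)
    return {c: round(a / t, 2) for c, (a, t) in totals.items()
            if abs(a / t - overall_rate) > threshold}

# Illustrative data: group_x approved 90% of the time, group_y only 40%.
decisions = ([("group_x", True)] * 18 + [("group_x", False)] * 2
             + [("group_y", True)] * 8 + [("group_y", False)] * 12)

print(flag_disparities(decisions))  # {'group_x': 0.9, 'group_y': 0.4}
```

A flag from a check like this is not proof of bias – clinical differences between cohorts may explain a gap – but it identifies where an independent, documented investigation is warranted rather than relying on intuition or vendor assurances.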

To learn more about the legal risks of and solutions to bias in AI, please join us at Epstein Becker Green’s virtual briefing on Bias in Artificial Intelligence: Legal Risks and Solutions on March 23 from 1:00 – 4:00 p.m. (ET). To register, please click here.

The 117th Congressional health care agenda, including COVID-19 related action, will require 60 votes in the Senate or passage through budget reconciliation. In the Diagnosing Health Care Podcast, attorneys Mark Lutes, Philo Hall, and Timothy Murphy discuss the prospects for additional coronavirus relief and what that would mean for stakeholders, as well as the possibility for coverage expansion through changes to the Affordable Care Act or Medicaid.

For more information on the ongoing changes coming out of Washington, visit Epstein Becker Green’s “First 100 Days” page.

The Diagnosing Health Care podcast series examines the business opportunities and solutions that exist despite the high-stakes legal, policy, and regulatory issues that the health care industry faces. Subscribe on your favorite podcast platform.

Listen on Apple Podcasts, Google Podcasts, Overcast, Spotify, Stitcher, YouTube, and Vimeo.

After a congressional override of a presidential veto, the National Defense Authorization Act (NDAA) became law on January 1, 2021. Notably, the NDAA not only provides appropriations for military and defense purposes but, under Division E, also includes the most significant U.S. legislation concerning artificial intelligence (AI) to date: the National Artificial Intelligence Initiative Act of 2020 (NAIIA).

The NAIIA sets forth a multi-pronged national strategy and funding approach to spur AI research, development, and innovation within the U.S.; train and prepare an AI-skilled workforce for the integration of AI throughout the economy and society; and establish a pathway to position the U.S. as a global leader in the development and adoption of trustworthy AI in the public and private sectors. Importantly, the NAIIA does not merely set forth lofty goals; rather, it legislates concrete matters of critical importance for economic and national security.

With a new Administration in place, and increasing global competition to develop AI and related guidelines, this is undoubtedly a pivotal time in history. AI will continue to transform every industry and workplace, and every facet of our day-to-day lives. It is important to become familiar with the NAIIA and consider its long-term impact for society, including the legal and ethical ramifications if the goals are not met. To understand the legal, regulatory and business challenges associated with AI, all organizations should gain a better understanding of the NAIIA and keep apprised of developments as the newly formed governing bodies created under the NAIIA begin their work.

National AI Initiative

The NAIIA aims to achieve its goals through a Presidential National AI Initiative involving coordination among the civilian agencies, the Department of Defense and the Intelligence Community and by engaging the public and private sectors through various key activities, including, but not limited to:

  • Funding, cooperative agreements, testbeds, and access to data and computing resources to support research and development;
  • Educational and training programs to prepare the workforce to create, use, and interact with AI systems;
  • Interagency planning and coordination of Federal AI research, development, demonstration, and standards engagement;
  • Outreach to diverse stakeholders such as citizen groups, industry, civil rights and disability rights organizations for input on initiatives;
  • Support for a network of interdisciplinary AI research institutes; and
  • Support for international cooperation around AI research and development.

Governance Structures

To drive toward these goals, the NAIIA mandates establishment of various governance bodies. First, on January 12, 2021, pursuant to the NAIIA, the White House Office of Science and Technology Policy (OSTP) established the National Artificial Intelligence Initiative Office (AI Initiative Office). Second, the NAIIA requires the creation of the Interagency Committee and various subcommittees on AI to coordinate federal activities and create a strategic plan for AI (including with regard to research and development, education, and workforce training). Third, the law mandates that the Secretary of Commerce, in consultation with the Director of OSTP, Secretary of Defense, Secretary of Energy, Secretary of State, the Attorney General, and the Director of National Intelligence, establish a National Artificial Intelligence Advisory Committee, composed of appointed members representing broad and interdisciplinary expertise and perspectives, to serve as advisors to the President and the AI Initiative Office on matters related to the AI Initiative. Fourth, the NAIIA requires the Director of the National Science Foundation and the OSTP to establish the National AI Research Resource Task Force.

In particular, the National Artificial Intelligence Advisory Committee will advise on research and development, ethics, standards, education, security, AI in the workplace and consequences of technological displacement, and other economic and societal issues addressed by the Initiative. Further, the body will advise on matters relating to oversight of AI using regulatory and nonregulatory approaches while balancing innovation and individual rights. The Committee will also establish a sub-committee related to AI in law enforcement and will address such issues as bias and proper usage of facial recognition, as well as data security and use of AI consistent with privacy, civil and disability rights.

National Academies AI Impact Study on the Workforce

The NAIIA also requires the National Science Foundation to contract with the National Research Council of the National Academies of Sciences, Engineering, and Medicine to conduct a study regarding the current and future impact of AI on the U.S. workforce. This study will include input from various stakeholders in the public and private sectors, and result in recommendations regarding the challenges and opportunities presented. The study will address the impact of increased use of AI, automation and related trends on the workforce, the related workforce needs and employment opportunities, and the research gaps and data needed to address these issues. The results of the study will be published in a report to several Congressional Committees and available publicly by January 1, 2023.

Funding of AI Initiatives

A hallmark of the NAIIA is its commitment to inject the economy with funding to boost AI efforts. In total, the NAIIA pumps approximately $6.4 billion into AI activities under the Initiative. This funding is earmarked in a variety of ways. For example, Congress authorized the National Institute of Standards and Technology (NIST) to spend almost $400 million over five years to support development of frameworks for research and development best practices and voluntary standards for AI trustworthiness, including:

  • Privacy and security (including for data sets used to train or test AI systems, and software and hardware used in AI systems);
  • Computer chips and hardware designed for AI systems;
  • Data management and techniques to increase usability of data;
  • Development of technical standards and guidelines to test for bias in AI training data and applications;
  • Safety and robustness of AI to withstand unexpected inputs and adversarial attacks;
  • Auditing mechanisms;
  • Applications of machine learning and AI to improve science and engineering; and
  • Model and system documentation.

NIST will also work on the creation of a risk management framework, standardized data sets for AI training, partnership with research institutes to test AI measurement standards and develop data sharing best practices.

The National Science Foundation will receive almost $4.8 billion over five years to fund research and education in AI systems and related fields (including K-12, undergraduate, and graduate programs), the development and deployment of trustworthy AI, workforce training, and the development of a diverse AI workforce pipeline. The National Oceanic and Atmospheric Administration will receive $10 million in 2021 for its AI Center. Subject to the availability of funding, the Director of the National Science Foundation will establish a program to award financial assistance for the planning and establishment of a network of AI Institutes for research and development and the attainment of related goals as set forth under the NAIIA. These AI Institutes would be eligible to receive funding to manage and make available data sets for training and testing AI, develop AI testbeds, conduct specific research and education activities, provide or broker access to computing resources and technical assistance, and conduct other collaborative outreach activities.

There have been tremendous advancements in the development of AI in recent years, exponentially greater than those experienced in the early days of its development in the last century. AI is no longer a matter of science fiction; it is quickly becoming a mainstream reality with a major impact on every aspect of our lives. Through passage of the NAIIA, the U.S. has demonstrated its commitment to responsibly investing in the future of AI, including preparing the public, industry, and the future workforce for the new world that has arrived.


This Diagnosing Health Care Podcast episode dives into the growth of physician practices accepting risk-based payments from health plans and examines why these practices are attractive to investors. Special guest Jason Madden, Managing Director at Accordion, and Epstein Becker Green attorneys Joshua Freemire, Jason Christ, and Tim Murphy discuss the health regulatory considerations investors must assess when evaluating investment opportunities with physician practices accepting risk-based payments.

To supplement the issues discussed in this episode, read our client alert series on recent regulatory changes that give more flexibility for risk-bearing entities and financial relationships:


This episode of the Diagnosing Health Care Podcast dives into the Biden Administration’s first 100 days in office and the potential executive orders, regulations, and new legislation with noteworthy health care policy implications.

Epstein Becker Green attorneys Ted Kennedy, Philo Hall, and Paulina Grabczak discuss President Biden’s priorities, including his COVID-19 response plan, and examine which “midnight rules” put in place by the Trump Administration could be intercepted or retained.


On January 14, 2021, the U.S. Department of Justice (DOJ) reported its False Claims Act (FCA) statistics for fiscal year (FY) 2020. More than $2.2 billion was recovered from both settlements and judgments in 2020, the lowest level since 2008 and almost $1 billion less than was recovered in 2019. The total recoveries in 2020 reflect the first of many anticipated resolutions of fraud enforcement actions in the COVID-19 world, and over 80% of all recoveries—amounting to almost $1.9 billion—came from the health care and life sciences industries.

HIGHEST NUMBER OF NEW FILINGS EVER REPORTED

Significantly, 2020 saw the largest number of new FCA matters initiated in a single year. The government initiated new FCA matters at its highest rate since 1994, with 250 new cases brought in 2020. Strikingly, the number of government-initiated cases against health care entities more than doubled from 2019 to 2020 and was at the highest level ever reported. Likewise, qui tam relators filed 672 new matters in FY 2020, an increase over FY 2019 and the fifth highest number of cases in reported history. Qui tam relators filed, on average, almost 13 new cases a week. Of the 672 qui tam cases filed, 68% were related to health care.

QUI TAM FILINGS CONTINUE TO BE THE DRIVER

Total recoveries from qui tam-initiated actions generated almost $1.7 billion. While the largest recoveries continue to come from cases where the government intervenes, cases pursued by relators post-declination generated more than $193 million in FY 2020, the fifth largest annual recovery in non-intervened cases since 1986. These cases continue to be rewarding for relators; over $309 million in relators’ share awards were paid in FY 2020, of which more than $261 million were paid in cases pursued against health care entities.

Continue Reading DOJ False Claims Act Statistics 2020: Over 80% of All Recoveries Came from the Health Care Industry

The Department of Justice (DOJ) announced on January 12, 2021, the first civil settlement to resolve allegations of fraud against the Paycheck Protection Program (PPP) of the Coronavirus Aid, Relief, and Economic Security (CARES) Act.[1] SlideBelts Inc. and its president and CEO, Brigham Taylor, have agreed to pay the United States a combined $100,000 in damages and penalties for alleged violations of the False Claims Act (FCA) and the Financial Institutions Reform, Recovery, and Enforcement Act (FIRREA).[2]

The CARES Act was enacted in March 2020 to provide emergency financial assistance to individuals and businesses affected by the COVID-19 pandemic.[3] The CARES Act established the PPP, which provided $349 billion in forgivable loans to small businesses in order to assist in job retention and business expenses.[4] Since March 2020, Congress has authorized an additional $585 billion in PPP spending to be distributed under the Small Business Administration (SBA).

SlideBelts operates an online retail company and filed a petition for relief under Chapter 11 of the Bankruptcy Code in August 2019. Between April and June of 2020, while its petition was pending in the U.S. Bankruptcy Court for the Eastern District of California, SlideBelts and Taylor allegedly made false statements to federally insured financial institutions that the company was not involved in bankruptcy proceedings in order to influence the institutions to grant, and the SBA to guarantee, a PPP loan. SlideBelts received a $350,000 loan based on these purportedly false statements, which it has since repaid in full to the PPP.

The government was able to recover damages and civil penalties from SlideBelts under the FCA for the alleged submission of fraudulent claims for payment to the government, and under FIRREA for violations of federal criminal statutes that affect federally insured banks. This settlement is the first, but certainly not the last, resolution of the many civil investigations and, ultimately, litigations relative to the CARES Act expected under the FCA in the coming months and years. In fact, during a June address to the Chamber of Commerce, Principal Deputy Attorney General Ethan Davis stated, “Going forward, the Civil Division will make it a priority to use the False Claims Act to combat fraud in the Paycheck Protection Program.”[5]

As the SBA prepares to issue a second round of PPP loans, the DOJ is likely to continue to use the FCA and FIRREA to pursue entities that receive funds under these federal programs and allegedly exploit them for their own benefit.[6]

Continue Reading First Reported FCA CARES Act Settlement Announced

This Diagnosing Health Care episode examines the fraud and abuse enforcement landscape in the telehealth space and considers ways telehealth providers can mitigate their enforcement risks as they move into the new year. Hear how the uptick in enforcement warrants close consideration by telehealth providers, especially those that are new to the space and have not yet built their compliance infrastructures.

The episode features Epstein Becker Green attorneys Amy Lerman, Melissa Jampol, and Bonnie Scott.


The U.S. Supreme Court will consider whether the federal government can approve state programs that force Medicaid participants to work, go to school, or volunteer in order to receive benefits. Both Arkansas and the Justice Department sought review of the issue. Epstein Becker Green attorney Clifford Barnes provides potential paths for the Biden administration to best position itself in the case.

The U.S. Supreme Court will hear oral argument in a case involving the authority of the Department of Health and Human Services to approve Medicaid work requirements programs in Arkansas and New Hampshire that were struck down by the U.S. Court of Appeals for the District of Columbia Circuit.

The high court has agreed to determine whether HHS can allow states to impose work requirements in their Medicaid programs even though all lower courts ruled against HHS’s approval of states’ Section 1115 work requirement waivers, based on the Trump administration’s refusal to consider the impact of the waivers on the core purpose of Medicaid—which is to increase health insurance coverage.

Unlike the narrow question considered by the lower courts, however, the court granted certiorari on a much broader issue. The question presented concerns the entire Section 1115 process and asks whether the HHS secretary has the power to establish additional purposes for Medicaid, beyond coverage.

Should the court rule that the HHS secretary does indeed possess this unbounded power, the entire Section 1115 landscape could shift, potentially allowing states to implement waivers like Arkansas’s, so long as they meet such an additional purpose.

The case establishes an effective deadline for the Biden administration to take action to mitigate or eliminate the work requirements, in light of the administration’s commitment to expanding, rather than rolling back, Medicaid insurance coverage.

Continue Reading How the Biden Administration Can Reverse Trump’s Medicaid Work Requirements