In this episode of the Diagnosing Health Care Podcast: Epstein Becker Green attorneys Mark Lutes, Philo Hall, and Timothy Murphy discuss the health-specific portions of the American Rescue Plan, including increased funding for federal oversight activities, changes to public insurance programs, and what these changes might mean for stakeholders.

The Diagnosing Health Care podcast series examines the business opportunities and solutions that exist despite the high-stakes legal, policy, and regulatory issues that the health care industry faces. Listen below and subscribe on your favorite podcast platform.

Listen on Apple Podcasts, Google Podcasts, Overcast, Spotify, Stitcher, YouTube, and Vimeo.

New from the Diagnosing Health Care Podcast: The Biden administration has invoked the Defense Production Act (“DPA”) to speed up the production of vaccines and increase the domestic production of COVID-19 tests, personal protective equipment (or “PPE”), and other essential supplies. Epstein Becker Green attorneys Neil Di Spirito, Constance Wilkinson, and Bonnie Odom discuss the administration’s reliance on the DPA as it continues to operationalize its pandemic response, and the challenges these actions are likely to present for medical product suppliers.

For more, listen to our previous episodes relating to vaccination and supply chain issues:

The Diagnosing Health Care podcast series examines the business opportunities and solutions that exist despite the high-stakes legal, policy, and regulatory issues that the health care industry faces. Subscribe on your favorite podcast platform.

Listen on Apple Podcasts, Google Podcasts, Overcast, Spotify, Stitcher, YouTube, and Vimeo.

Artificial Intelligence (“AI”) applications are powerful tools that have already been deployed by companies to improve business performance across the health care, manufacturing, retail, and banking industries, among many others. From large-scale AI initiatives to smaller AI vendors, AI tools are quickly becoming a mainstream fixture in many industries and will likely infiltrate many more in the near future.

But are these companies also prepared to defend the use of AI tools should compliance issues arise later? What should companies do before launching AI tools, and how can they remain confident about compliance while those tools simplify and, ideally, improve processes? The improper application of AI tools, or improper operation of or outcomes from those tools, can create new types of enterprise risk. While the use of AI in health care presents many opportunities, the enterprise risks that might arise need to be effectively assessed and managed.

But How?

Traditionally, to manage enterprise risk and develop their compliance programs, health care companies have relied on the extensive guidance published by the Office of Inspector General of the Department of Health and Human Services (“OIG”) and by industry associations such as the Health Care Compliance Association, as well as other federal, state, and industry-specific guidance. Specific compliance-related guidance on the use of AI tools in health care is lacking at this time. However, the National Defense Authorization Act (NDAA), which became law on January 1, 2021, includes the most significant U.S. legislation concerning AI to date: the National Artificial Intelligence Initiative Act of 2020 (NAIIA). The NAIIA mandates the establishment of various governance bodies, in particular the National Artificial Intelligence Advisory Committee, which will advise on matters relating to oversight of AI using regulatory and nonregulatory approaches while balancing innovation and individual rights.

In the absence of specific guidance, companies can look to existing compliance program frameworks, e.g., the seven elements constituting an effective compliance program as identified by OIG, to develop a reliable and defensible compliance infrastructure. While we can lean on this existing framework as a guide, additional consideration needs to be devoted to developing an AI compliance program that is specific and customized to the particular AI solution at hand.

What policies will govern human conduct in the use and monitoring of the AI tool? Who has the authority to launch the use of the AI tool? Who has the authority to recall the AI tool? What would be the back-up service if needed? Written policies and procedures can help.
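
As a purely illustrative sketch, the questions above can be captured in a simple governance record maintained for each AI tool. The field names and example values below are hypothetical and are not drawn from OIG guidance or any regulatory requirement; they simply show how written policies and procedures can be operationalized into a checklist that surfaces unanswered governance questions.

# Hypothetical sketch of an AI-tool governance record; field names and values
# are illustrative only and do not reflect any regulatory requirement.
from dataclasses import dataclass, field
from typing import List

@dataclass
class AIGovernanceRecord:
    tool_name: str
    governing_policy: str          # policy governing human use and monitoring of the tool
    launch_authority: str          # who may authorize deployment
    recall_authority: str          # who may pull the tool from production
    backup_process: str            # fallback service if the tool is unavailable or recalled
    monitoring_checks: List[str] = field(default_factory=list)

    def gaps(self) -> List[str]:
        """Return any unanswered governance questions for follow-up."""
        missing = [name for name, value in vars(self).items()
                   if name != "monitoring_checks" and not value]
        if not self.monitoring_checks:
            missing.append("monitoring_checks")
        return missing

# Example usage with hypothetical values.
record = AIGovernanceRecord(
    tool_name="utilization-review assistant",
    governing_policy="AI Use and Monitoring Policy v1",
    launch_authority="Compliance Committee",
    recall_authority="Chief Compliance Officer",
    backup_process="",  # not yet defined; flagged by gaps() below
    monitoring_checks=["quarterly bias audit", "coding accuracy sampling"],
)
print("Open governance questions:", record.gaps())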

***

To learn more about the ways in which policies developed for existing corporate compliance programs can be applied to the use of AI tools, please join us at Epstein Becker Green’s virtual briefing on Bias in Artificial Intelligence: Legal Risks and Solutions on March 23 from 1:00 – 4:00 p.m. (ET). To register, please click here.

The Illinois Coalition to Protect Telehealth, a coalition of more than thirty Illinois healthcare providers and patient advocates, announced its support for a bill that would, among other things, establish payment parity for telehealth services and permanently eliminate geographic and facility restrictions beyond the COVID-19 pandemic. Like many states, Illinois issued an executive order at the outset of the pandemic temporarily lifting longstanding barriers to consumer access to telehealth via commercial health plans and Medicaid.[1]  The executive order expanded the definition of telehealth services, loosened geographical restrictions on physician licensing requirements, and barred private insurers from charging copays and deductibles for in-network telehealth visits.

Now, House Bill 3498 seeks to make permanent some of those temporary waivers by aligning coverage and reimbursement for telehealth services with in-person care. If enacted, it would also establish that patients could no longer be required to use an exclusive panel of providers or professionals to receive telehealth services, nor would they be required to prove a hardship or access barrier in order to receive those services.  The bill does not include a provision that would permanently allow out-of-state physicians or health care providers to provide services in the state beyond the pandemic.[2]

In its announcement of support for the bill, the Coalition states that the use of telehealth over the last year has shown increased adherence to patient care plans and improved chronic disease management. “In recent surveys, over 70% of Illinois hospital respondents and 78% of community-based behavioral healthcare respondents reported that telehealth has helped drive a reduction in the rates at which patients missed appointments. Surveys of Illinois physicians, community health centers, and specialized mental health and substance use disorder treatment providers have also revealed similar dramatic reductions in missed appointments.”

Continue Reading Illinois Coalition Backs Telehealth Bill Supporting Payment Parity Beyond COVID-19 Pandemic

Medical providers are often asked, or feel obligated, to disclose confidential information about patients. This blog post discusses disclosures of confidential medical information involving law enforcement, but the general principles discussed herein are instructive in any scenario. To protect patient confidentiality and avoid costly civil liability arising from improper disclosures, it is imperative that providers ask questions to assess the urgency of any request and to understand for what purpose the information is sought by authorities. Knowing what questions to ask at the outset prepares providers to make informed decisions about disclosing confidential information in a manner that balances the obligation to maintain patient confidentiality and trust with legitimate law enforcement requests for information aimed at protecting the public.

Continue Reading Responding to Law Enforcement Demands for HIPAA Protected Information

Alaap B. Shah and Nivedita B. Patel, attorneys in the Health Care & Life Sciences practice, in the firm’s Washington, DC, office, co-authored an article in MobiHealthNews, titled “Unlocking Value in Health Data: Truveta’s Data Monetization Strategy Carries Big Risks and Responsibilities.”

Following is an excerpt:

In today’s world, data is power. Healthcare providers have massive amounts of rich health data at their fingertips. Yet historically, third-party vendors to healthcare providers often have derived financial benefits from secondary use of this data through aggregating and brokering de-identified data to downstream customers.

That is beginning to change as healthcare providers are taking back control of their data assets.

Truveta, Inc., a new startup led by 14 of the largest health systems in the U.S., has formed to pool together their vast and diverse data in order to take back control over how their patients’ de-identified data is shared and used. Truveta’s goal is to leverage patient data to improve patient care, address health inequity, accelerate the development of treatments and reduce the time to make a diagnosis.

The company will have access to de-identified data representing approximately 13% of patient records in the U.S. This amalgamation of data will result in more diversified data sets varying by diagnosis, geography and demographics. The process can significantly expand the opportunities for that data’s secondary analytics uses.

The success of such a massive undertaking, with so many stakeholders, requires that good data stewardship be central to the endeavor. As healthcare providers begin to leverage their data to derive knowledge and ultimately gain wisdom about how better to care for their patients, they will bear a greater responsibility to ensure the privacy and security of the health data their patients trust them to safeguard.

Failure to afford the appropriate safeguards in terms of how data is collected, aggregated, de-identified, shared and ultimately utilized could result in the demise of this sort of big data collaboration.

Click here to read the full article on MobiHealthNews.

Our colleagues Stuart Gerson and Daniel Fundakowski of Epstein Becker Green have a new post on SCOTUS Today that will be of interest to our readers: “Court Declines Resolving Circuit Split on What Constitutes a ‘False’ Claim, but Will Consider Legality of Trump Abortion Gag Rule.”

The following is an excerpt:

While this blog usually is confined to the analysis of the published opinions of the Supreme Court, several of this morning’s orders are worthy of discussion because of their importance to health care lawyers and policy experts. Guest editor Dan Fundakowski joins me in today’s unpacking of the Court’s rulings.

First, in Cochran v. Mayor and City Council of Baltimore; Oregon v. Cochran; and American Medical Association v. Cochran, the Court granted cert. to review a regulation promulgated by the Trump Department of Health and Human Services that would bar doctors who receive federal funds for family planning services from referring patients to abortion providers. The Ninth Circuit has upheld the regulation, but the Fourth has held it unlawful and enjoined its effectuation on a nationwide basis. The ramifications of this dispute for Medicaid providers and others are obvious, and it will be a point of interest as the Biden administration moves ahead in ways substantially different from its predecessor. It could, for example, moot the cases by repealing the regulation.

Health care litigators have, for some time, urged the Court to decide whether, under the False Claims Act (“FCA”), “falsity” must be based on objectively verifiable facts. In other words, for example, does a conflict of opinion between experts negate a finding of falsity with respect to a decision as to medical necessity or coding of a health care procedure? There has been increasing division among the Circuit Courts of Appeals on this subject, and to the chagrin of practitioners, that division is going to be unresolved for some time, as the Supreme Court has denied cert. in two qui tam FCA cases that we have been closely monitoring: United States ex rel. Druding v. Care Alternatives, 952 F.3d 89 (3rd Cir. 2020) and United States ex rel. Winter v. Gardens Regional Hospital & Medical Center, Inc., 953 F.3d 1108 (9th Cir. 2020). While the FCA requires that claims be “false or fraudulent” in order to give rise to liability, the statute does not define those terms, and this has proved a major issue in dispute in the context of claims related to clinical judgments.

Click here to read the full post and more on SCOTUS Today.

The application of artificial intelligence technologies to health care delivery, coding, and population management may profoundly alter the manner in which clinicians and others interact with patients and seek reimbursement. While, on the one hand, AI may promote better treatment decisions and streamline onerous coding and claims submission, there are risks associated with unintended bias that may be lurking in the algorithms. AI is trained on data. To the extent that data encodes historical bias, that bias may cause unintended errors when the tool is applied to new patients. This can result in errors in utilization management, coding, billing, and healthcare delivery.

The following hypothetical illustrates the problem.

A physician practice management service organization (MSO) adopts a third-party software tool to assist its personnel in making treatment decisions for both its fee-for-service population and a Medicare Advantage population for which the MSO is at financial risk. The tool is used for both pre-authorizations and ICD diagnostic coding for Medicare Advantage patients, without the need for human coders.

 The MSO’s compliance officer observes two issues:

  1.  It appears Native American patients seeking substance abuse treatment are being approved by the MSO’s team far more frequently than other cohorts who are seeking the same care, and
  2. Since the deployment of the software, the MSO is realizing increased risk adjustment revenue attributable to a significant increase in rheumatic condition codes being identified by the AI tool.

 Though the compliance officer doesn’t have any independent studies to support it, she is comfortable that the program is making appropriate substance abuse treatment and utilization management recommendations because she believes that there may be a genetic reason why Native Americans are at greater risk than others. With regard to the diagnostic coding, she:

  1. is also comfortable with the vendor’s assurances that their software is more accurate than eyes-on coding;
  2. understands that prevalence data suggests that the elderly population in the United States likely has undiagnosed rheumatic conditions; and,
  3. finds through her own investigation that anecdotally it appears that the software, while perhaps over-inclusive, is catching some diagnoses that could have been missed by the clinician alone. 

 Is the compliance officer’s comfort warranted?

The short answer is, of course, no.

There are two fundamental issues that the compliance officer needs to identify and investigate – both related to possible bias. First, is the tool authorizing unnecessary substance use disorder treatments for Native Americans (overutilization) while, at the same time, not approving medically necessary treatments for other ethnicities (underutilization)? Overutilization drives health spend and can result in payment errors, and underutilization can result in improper denials, patient harm, and legal exposure. The second issue relates to the AI tool potentially “finding” diagnostic codes that, while statistically supportable based on the population data the vendor used in the training set, might not be supported in the MSO’s population. This error can result in submission of unsupported codes that drive risk adjustment payment, which can carry significant legal and financial exposure.
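
As a purely illustrative sketch of how such an investigation might begin (the cohort labels, thresholds, and figures below are hypothetical and are not a legal or statistical standard), a compliance team could compare the tool's approval rates across patient cohorts and compare the prevalence of AI-assigned diagnosis codes in the MSO's own panel against an external benchmark:

# Purely illustrative sketch; cohort labels, thresholds, and figures are
# hypothetical and are not a legal or statistical standard.
from collections import defaultdict

def approval_rates_by_cohort(decisions):
    """Compute treatment-approval rates per patient cohort from records like
    {"cohort": "Native American", "approved": True}."""
    counts = defaultdict(lambda: {"approved": 0, "total": 0})
    for d in decisions:
        counts[d["cohort"]]["total"] += 1
        if d["approved"]:
            counts[d["cohort"]]["approved"] += 1
    return {c: v["approved"] / v["total"] for c, v in counts.items()}

def flag_disparities(rates, max_gap=0.15):
    """Flag cohorts whose approval rate differs from the mean cohort rate by
    more than max_gap -- an arbitrary review threshold, not a legal test."""
    mean_rate = sum(rates.values()) / len(rates)
    return {c: r for c, r in rates.items() if abs(r - mean_rate) > max_gap}

def flag_code_prevalence(observed, benchmark, ratio=1.5):
    """Flag diagnosis codes the AI tool assigns far more often in the MSO's
    own panel than an external benchmark would predict."""
    return {code: rate for code, rate in observed.items()
            if code in benchmark and benchmark[code] > 0
            and rate / benchmark[code] > ratio}

if __name__ == "__main__":
    decisions = [
        {"cohort": "Native American", "approved": True},
        {"cohort": "Native American", "approved": True},
        {"cohort": "Other", "approved": True},
        {"cohort": "Other", "approved": False},
    ]
    rates = approval_rates_by_cohort(decisions)
    print("Approval rates by cohort:", rates)
    print("Cohorts needing review:", flag_disparities(rates))
    # Hypothetical prevalence of a rheumatic condition code (share of patients).
    print("Codes needing review:",
          flag_code_prevalence({"M05.9": 0.09}, {"M05.9": 0.04}))

Flagged results would not prove bias; they would simply tell the compliance officer where independent clinical and coding review is warranted rather than relying on vendor assurances.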

To learn more about the legal risks of and solutions to bias in AI, please join us at Epstein Becker Green’s virtual briefing on Bias in Artificial Intelligence: Legal Risks and Solutions on March 23 from 1:00 – 4:00 p.m. (ET). To register, please click here.

The 117th Congressional health care agenda, including COVID-19 related action, will require 60 votes in the Senate or passage through budget reconciliation. In the Diagnosing Health Care Podcast, attorneys Mark Lutes, Philo Hall, and Timothy Murphy discuss the prospects for additional coronavirus relief and what that would mean for stakeholders, as well as the possibility for coverage expansion through changes to the Affordable Care Act or Medicaid.

For more information on the ongoing changes coming out of Washington, visit Epstein Becker Green’s  “First 100 Days” page.

The Diagnosing Health Care podcast series examines the business opportunities and solutions that exist despite the high-stakes legal, policy, and regulatory issues that the health care industry faces. Subscribe on your favorite podcast platform.

Listen on Apple Podcasts, Google Podcasts, Overcast, Spotify, Stitcher, YouTube, and Vimeo.

After a Congressional override of a Presidential veto, the National Defense Authorization Act (NDAA) became law on January 1, 2021. Notably, the NDAA not only provides appropriations for military and defense purposes but, under Division E, also includes the most significant U.S. legislation concerning artificial intelligence (AI) to date: the National Artificial Intelligence Initiative Act of 2020 (NAIIA).

The NAIIA sets forth a multi-pronged national strategy and funding approach to spur AI research, development and innovation within the U.S., train and prepare an AI-skilled workforce for the integration of AI throughout the economy and society, and establish a pathway to position the U.S. as a global leader in the development and adoption of trustworthy AI in the public and private sectors. Importantly, the NAIIA does not set forth merely lofty goals, but rather, legislates concrete matters of critical importance for economic and national security.

With a new Administration in place, and increasing global competition to develop AI and related guidelines, this is undoubtedly a pivotal time in history. AI will continue to transform every industry and workplace, and every facet of our day-to-day lives. It is important to become familiar with the NAIIA and consider its long-term impact for society, including the legal and ethical ramifications if the goals are not met. To understand the legal, regulatory and business challenges associated with AI, all organizations should gain a better understanding of the NAIIA and keep apprised of developments as the newly formed governing bodies created under the NAIIA begin their work.

National AI Initiative

The NAIIA aims to achieve its goals through a Presidential National AI Initiative involving coordination among the civilian agencies, the Department of Defense and the Intelligence Community and by engaging the public and private sectors through various key activities, including, but not limited to:

  • Funding, cooperative agreements, testbeds, and access to data and computing resources to support research and development;
  • Educational and training programs to prepare the workforce to create, use, and interact with AI systems;
  • Interagency planning and coordination of Federal AI research, development, demonstration, and standards engagement;
  • Outreach to diverse stakeholders such as citizen groups, industry, civil rights and disability rights organizations for input on initiatives;
  • Support for a network of interdisciplinary AI research institutes; and
  • Support for opportunities for international cooperation around AI research and development.

Governance Structures

To drive toward these goals, the NAIIA mandates establishment of various governance bodies. First, on January 12, 2021, pursuant to the NAIIA, the White House Office of Science and Technology Policy (OSTP) established the National Artificial Intelligence Initiative Office (AI Initiative Office). Second, the NAIIA requires the creation of the Interagency Committee and various subcommittees on AI to coordinate federal activities and create a strategic plan for AI (including with regard to research and development, education and workforce training). Third, the law mandates that the Secretary of Commerce, in consultation with the Director of OSTP, Secretary of Defense, Secretary of Energy, Secretary of State, the Attorney General, and the Director of National Intelligence, establish a National Artificial Intelligence Advisory Committee comprised of appointed members representing broad and interdisciplinary expertise and perspectives to serve as advisors to the President and the Initiative Office on matters related to the AI Initiative. Fourth, the NAIIA also requires the Director of the National Science Foundation and the OSTP to establish the National AI Research Resource Task Force.

In particular, the National Artificial Intelligence Advisory Committee will advise on research and development, ethics, standards, education, security, AI in the workplace and consequences of technological displacement, and other economic and societal issues addressed by the Initiative. Further, the body will advise on matters relating to oversight of AI using regulatory and nonregulatory approaches while balancing innovation and individual rights. The Committee will also establish a sub-committee related to AI in law enforcement and will address such issues as bias and proper usage of facial recognition, as well as data security and use of AI consistent with privacy, civil and disability rights.

National Academies AI Impact Study on the Workforce

The NAIIA also requires the National Science Foundation to contract with the National Research Council of the National Academies of Sciences, Engineering, and Medicine to conduct a study regarding the current and future impact of AI on the U.S. workforce. This study will include input from various stakeholders in the public and private sectors, and result in recommendations regarding the challenges and opportunities presented. The study will address the impact of increased use of AI, automation and related trends on the workforce, the related workforce needs and employment opportunities, and the research gaps and data needed to address these issues. The results of the study will be published in a report to several Congressional Committees and available publicly by January 1, 2023.

Funding of AI Initiatives

A hallmark of the NAIIA is its commitment to inject the economy with funding to boost AI efforts. In total, the NAIIA pumps approximately $6.4 billion into AI activities under the Initiative. This funding is earmarked in a variety of ways. For example, the National Institute of Standards and Technology (NIST) received Congress’ authorization to spend almost $400 million over five years to support development of frameworks for research and development best practices and voluntary standards for AI trustworthiness, including:

  • Privacy and security (including for data sets used to train or test AI systems, and software and hardware used in AI systems);
  • Computer chips and hardware designed for AI systems;
  • Data management and techniques to increase usability of data;
  • Development of technical standards and guidelines to test for bias in AI training data and applications;
  • Safety and robustness of AI to withstand unexpected inputs and adversarial attacks;
  • Auditing mechanisms;
  • Applications of machine learning and AI to improve science and engineering; and
  • Model and system documentation.

NIST will also work on the creation of a risk management framework, standardized data sets for AI training, partnerships with research institutes to test AI measurement standards, and the development of data sharing best practices.

The National Science Foundation will receive almost $4.8 billion over five years to fund research and education in AI systems and related fields (including K-12, undergraduate, and graduate programs), the development and deployment of trustworthy AI, workforce training, and the development of a diverse AI workforce pipeline. The National Oceanic and Atmospheric Administration will receive $10 million in 2021 toward its AI Center. Subject to the availability of funding, the Director of the National Science Foundation will establish a program to award financial assistance for the planning and establishment of a network of AI Institutes for research and development and attainment of related goals as set forth under the NAIIA. These AI Institutes would be eligible to receive funding to manage and make available data sets for training and testing AI, develop AI testbeds, conduct specific research and education activities, provide or broker access to computing resources and technical assistance, and conduct other collaborative outreach activities.

There have been tremendous advancements in the development of AI in recent years, exponentially greater than experienced in the early days of its development in the last century. AI is no longer a matter of science fiction and it is quickly becoming a mainstream reality with a major impact on every aspect of our lives. Through passage of the NAIIA, the U.S. has demonstrated its commitment to responsibly investing in the future of AI, including preparing the public, industry and the future workforce for the new world that has arrived.

To learn more about the legal risks of and solutions to bias in AI, please join us at Epstein Becker Green’s virtual briefing on Bias in Artificial Intelligence: Legal Risks and Solutions on March 23 from 1:00 – 4:00 p.m. (ET). To register, please click here.