Artificial Intelligence (“AI”) applications are powerful tools that companies have already deployed to improve business performance across the health care, manufacturing, retail, and banking industries, among many others. From large-scale AI initiatives to smaller AI vendors, AI tools are quickly becoming a mainstream fixture in many industries and will likely spread to many more in the near future.

But are these companies also prepared to defend their use of AI tools should compliance issues arise later? What should companies do before launching AI tools, and what should they do to remain confident about compliance while the AI tools simplify and, hopefully, improve processes? Improper application of an AI tool, or improper operation of or outcomes from that tool, can create new types of enterprise risk. While the use of AI in health care presents many opportunities, the enterprise risks that may arise need to be effectively assessed and managed.

But How?

Traditionally, to manage enterprise risk and develop their compliance programs, health care companies have relied upon the multitude of guidance published by the Office of Inspector General of Health and Human Services (“OIG”), by industry associations such as the Health Care Compliance Association, and by other federal, state, and industry-specific sources. Compliance-related guidance focused specifically on the use of AI tools in health care is lacking at this time. However, the National Defense Authorization Act (NDAA), which became law on January 1, 2021, includes the most significant U.S. legislation concerning AI to date: the National Artificial Intelligence Initiative Act of 2020 (NAIIA). The NAIIA mandates the establishment of various governance bodies, in particular the National Artificial Intelligence Advisory Committee, which will advise on matters relating to oversight of AI using regulatory and nonregulatory approaches while balancing innovation and individual rights.

In the absence of specific guidance, companies can look to existing compliance program frameworks, e.g., the seven elements of an effective compliance program identified by OIG, to develop a reliable and defensible compliance infrastructure. While this existing framework can serve as a guide, additional consideration needs to be devoted to developing an AI compliance program that is customized to the particular AI solution at hand.

What policies will govern human conduct in the use and monitoring of the AI tool? Who has the authority to launch the AI tool? Who has the authority to recall it? What back-up service will be available if needed? Written policies and procedures can help answer these questions.

***

To learn more about the ways in which existing corporate compliance program policies can be applied to the use of AI tools, please join us at Epstein Becker Green’s virtual briefing, Bias in Artificial Intelligence: Legal Risks and Solutions, on March 23 from 1:00 – 4:00 p.m. (ET).
