On October 22, 2019, the Centers for Medicare and Medicaid Services (“CMS”) issued a Request for Information (“RFI”) to obtain input on how CMS can utilize Artificial Intelligence (“AI”) and other new technologies to improve its operations.  CMS’ chief objective in leveraging AI is to identify and prevent fraud, waste, and abuse.  The RFI specifically states CMS’ aim “to ensure proper claims payment, reduce provider burden, and overall, conduct program integrity activities in a more efficient manner.”  The RFI follows last month’s White House Summit on Artificial Intelligence in Government, where over 175 government leaders and industry experts gathered to discuss how the Federal government can adopt AI “to achieve its mission and improve services to the American people.”

Advances in AI technologies have made automated fraud detection possible at dramatically greater speed and scale. A 2018 study by consulting firm McKinsey & Company estimated that machine learning could help US health insurance companies reduce fraud, waste, and abuse by $20 to $30 billion.  Indeed, in 2018 alone, improper payments accounted for roughly $31 billion of Medicare’s net costs. CMS is now looking to AI to prevent improper payments, rather than continuing the current “pay and chase” approach, which detects improper payments only after they have been made.

CMS currently relies on its records system to detect fraud, and humans remain the predominant detectors of fraud in that system. This has resulted in inefficient detection capabilities, and these traditional fraud detection approaches have been decreasingly successful in light of the changing health care landscape.  This problem is particularly acute as CMS transitions to value-based payment arrangements.  In a recent blog post, CMS Administrator Seema Verma revealed that reliance on humans to detect fraud resulted in reviews of less than one percent of medical records associated with items and services billed to Medicare.  This lack of scale and speed arguably allows many improper payments to go undetected.

Fortunately, AI manufacturers and developers have been leveraging AI to detect fraud for some time in various industries. The financial and insurance industries, for example, already use AI to detect fraudulent patterns. However, leveraging AI technology involves more than simply obtaining the technology. Before AI can be used for fraud detection, the time-consuming process of amassing large quantities of high-quality, interoperable data must occur. Further, AI algorithms need to be optimized through iterative human quality reviews. Finally, testing the accuracy of the trained AI is crucial before it can be relied upon in a production system.
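To make these steps concrete, the toy sketch below illustrates the general shape of an automated fraud screen and why accuracy testing matters. All data, field names, and thresholds are hypothetical, and this is a deliberate oversimplification of the statistical and machine-learning techniques actual vendors use; it is not CMS's method or any vendor's product.

```python
# Toy illustration: flag a claim when its billed amount deviates sharply
# from the historical distribution for that procedure code.
# Everything here (codes, amounts, threshold) is hypothetical.
from statistics import mean, pstdev

# Hypothetical historical billed amounts, keyed by procedure code.
history = {
    "99213": [75, 80, 78, 82, 76, 79, 81, 77],
    "99214": [110, 115, 108, 112, 114, 109, 111, 113],
}

def fraud_score(code, amount):
    """Deviation of this claim from the historical mean, in standard deviations."""
    amounts = history[code]
    mu, sigma = mean(amounts), pstdev(amounts)
    return abs(amount - mu) / sigma if sigma else 0.0

def flag_for_review(code, amount, threshold=3.0):
    """Route the claim to human review when its score exceeds the threshold."""
    return fraud_score(code, amount) > threshold

# The "test accuracy before production" step: score the screen against
# claims whose fraud status is already known (labels are made up here).
labeled_claims = [("99213", 79, False), ("99213", 250, True), ("99214", 111, False)]
correct = sum(flag_for_review(c, a) == is_fraud for c, a, is_fraud in labeled_claims)
accuracy = correct / len(labeled_claims)
```

Even in this toy version, the threshold embodies a policy trade-off: lowering it catches more improper payments but sends more legitimate providers to review, which is exactly the accuracy concern the RFI asks vendors to address.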

In the RFI, CMS poses many questions to AI vendors, health care providers, and suppliers that likely would be addressed by regulation.  Before the Federal government relies on AI to detect fraud, CMS must gain assurances that AI technologies will not return inaccurate or incorrect outputs that could negatively impact providers and patients. One key question raised involves how to assess the effectiveness of AI technology and how to measure and maintain its accuracy. The answer to this question should factor heavily into the risk calculation of CMS using AI in its fraud detection activities. Interestingly, companies seeking to automate revenue cycle management processes using AI must grapple with the same concerns.  Without adequate compliance mechanisms in place around the development, implementation, and use of AI tools for these purposes, companies could face a high risk of legal liability under the Federal False Claims Act or similar fraud and abuse laws and regulations.

In addition to fraud detection, the RFI seeks advice as to whether new technology could help CMS identify “potentially problematic affiliations” in terms of business ownership and registration.  Similarly, CMS is interested in feedback on whether AI and machine learning could speed up the current expensive and time-consuming Medicare claim review processes and Medicare Advantage audits.

This RFI is likely one of many signals that AI will revolutionize how health care is covered and paid for moving forward.  We encourage you to weigh in on this ongoing debate to help shape this new world.

Comments are due to CMS by November 20, 2019.
