Turns out, ignorance really is bliss, at least according to the Office for Civil Rights (“OCR”) within the Department of Health and Human Services (“HHS”), which has published its final rule on algorithmic discrimination by payers and providers. Our concern is that the final rule, based on Section 1557 of the Affordable Care Act, creates a double standard under which more sophisticated organizations are held to a higher level of compliance. With the rule set to become effective 300 days after publication, health care providers and payers will have a lot of work to do in that time.

In this post, we will lay out the new regulation in its entirety: it’s short.  Then we will add our own perspective on key elements, including important changes from the proposed rule.

The Regulation

The final rule is set to be published in the Federal Register on May 6, 2024, but was shared for public inspection with the Office of the Federal Register on April 26, 2024; the page numbers we cite are from that submitted PDF. Here is the newly added rule from Title 45 of the Code of Federal Regulations:

§ 92.210 Nondiscrimination in the use of patient care decision support tools.

  (a) General prohibition. A covered entity must not discriminate on the basis of race, color, national origin, sex, age, or disability in its health programs or activities through the use of patient care decision support tools.
  (b) Identification of risk. A covered entity has an ongoing duty to make reasonable efforts to identify uses of patient care decision support tools in its health programs or activities that employ input variables or factors that measure race, color, national origin, sex, age, or disability.
  (c) Mitigation of risk. For each patient care decision support tool identified in paragraph (b) of this section, a covered entity must make reasonable efforts to mitigate the risk of discrimination resulting from the tool’s use in its health programs or activities.

The key definition in that section, “patient care decision support tools,” is found in § 92.4, which defines the term to mean “any automated or non-automated tool, mechanism, method, technology, or combination thereof used by a covered entity to support clinical decision-making in its health programs or activities.”

The final rule looks, in some ways, deceptively simple.  But let’s unpack it.

Scope

When looking at the scope of the final rule, one of the first things that hits you is that OCR is talking about both automated and non-automated tools.  We think that’s mostly just clever lawyering designed to show that the final rule is not uniquely applicable to automated systems.  But it’s largely symbolic, because non-automated systems of patient support have mostly gone the way of the dodo bird.  That said, OCR calls out a specific example of a non-automated tool: “a Crisis Standards of Care flowchart for triage guidance.” (p. 372).  Apparently, such flowcharts were used during COVID.  As OCR further explains, “Other examples of patient care decision support tools include but are not limited to: flowcharts; formulas; equations; calculators; algorithms; utilization management applications; software as medical devices (SaMDs); software in medical devices (SiMDs); screening, risk assessment, and eligibility tools; and diagnostic and treatment guidance tools” (p. 372-73).  That’s a pretty comprehensive list.

Going in the opposite direction from manual tools, OCR clarifies that the final rule even applies to autonomous devices that do not simply advise a human professional (p. 377). That’s mostly forward-looking, but it already comes into play for certain automated medical devices that actually treat or diagnose without human control.  For example, there is an algorithm that reads retinal images to screen for diabetic retinopathy. If the algorithm returns a positive result, treatment does not automatically follow, but the patient is automatically referred for more definitive diagnostic testing.

In the preamble to the final rule, OCR makes clear that the linchpin of this definition is the element of patient support.  OCR wants to regulate everything that could potentially impact patient care.  As OCR explains:

The definition of “patient care decision support tool” reaffirms that § 92.210 applies to tools used in clinical decision-making that affect the care that patients receive. This includes tools, described in the Proposed Rule, used by covered entities such as hospitals, providers, and payers (health insurance issuers) in their health programs and activities for “screening, risk prediction, diagnosis, prognosis, clinical decision-making, treatment planning, health care operations, and allocation of resources” as applied to the patient. 87 FR 47880. We clarify that tools used for these activities include tools used in covered entities’ health programs and activities to assess health status, recommend care, provide disease management guidance, determine eligibility and conduct utilization review related to patient care that is directed by a provider, among other things, all of which impact clinical decision-making. (p. 72)

OCR further clarifies that these tools need not operate only at the individual patient level; they can also include tools used for population health.  To show coordination with other agencies, OCR also observes that “One subset of patient care decision support tools to which § 92.210 applies includes “predictive decision support interventions” as defined in the Office of the National Coordinator for Health Information Technology’s (ONC)….” (p. 371).

What is not a patient care decision support tool?  According to OCR:

Section 92.210 does not apply to tools used to support decision-making unrelated to clinical decision-making affecting patient care or that are outside of a covered entity’s health programs or activities. For example, § 92.210 does not apply to the following activities when such activities are unrelated to clinical decision-making affecting patient care: automated or non-automated tools that covered entities use for administrative and billing-related activities; automated medical coding; fraud, waste and abuse; patient scheduling; facilities management; inventory and materials management; supply chain management; financial market investment management; or employment and staffing-related activities.  (p.373).

Regarding which organizations must comply, the final rule applies to “covered entities” as that term is defined in 45 C.F.R. § 160.103, which is well understood to include both payers and health care providers. (p. 370).  An issue did come up regarding whether small entities should be exempted.  OCR responded by explaining, “Section 92.210 applies to all covered entities regardless of size, including smaller entities. All covered entities must make reasonable efforts to mitigate the risk of discrimination resulting from their use of a patient care decision support tool identified in § 92.210(b), but the size and resources of the covered entity will factor into the reasonableness of their mitigation efforts and their compliance with § 92.210.” (p. 390-91). We have more to say below about the dichotomy between large and small enterprises.

What It Means to Identify Risk

Perhaps the most common issue that arose during the comment process for the final rule was why hospitals and other purchasers of algorithms cannot simply rely on their vendors.  Many commenters argued that this issue should be addressed by the software developers, potentially under the oversight of the U.S. Food and Drug Administration, and asked for clarity regarding the extent to which they could rely on those developers.  Addressing these questions is the purpose of Section 92.210(b), which requires a covered entity to make reasonable efforts to identify patient care decision support tools used in its health programs and activities that employ input variables or factors that measure race, color, national origin, sex, age, or disability.

In this regard, we should note that OCR is squarely focused on the issue of proxies.  A proxy is a variable that does not explicitly relate to a protected category in and of itself but is correlated with one.  For example, given historical patterns of residential segregation, it is well known that ZIP code is correlated with race.
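
To make the proxy idea concrete, here is a minimal sketch (in Python) of the kind of screening a covered entity’s analytics or compliance team might run on a tool’s inputs. Everything in it is our own illustration, not anything the rule prescribes: the file name, the zip_code and race columns, the use of Cramér’s V as the association measure, and the 0.3 threshold are all hypothetical choices.

```python
# Minimal sketch: flag an input variable that may act as a proxy for a
# protected category. Column names ("zip_code", "race") and the data source
# are hypothetical; OCR does not prescribe any particular statistical test.
import pandas as pd
from scipy.stats import chi2_contingency

def cramers_v(x: pd.Series, y: pd.Series) -> float:
    """Cramér's V: strength of association between two categorical variables (0 to 1)."""
    table = pd.crosstab(x, y)
    chi2 = chi2_contingency(table)[0]
    n = table.to_numpy().sum()
    r, k = table.shape
    return (chi2 / (n * (min(r, k) - 1))) ** 0.5

# Hypothetical audit sample: model inputs plus a protected category
# collected separately for bias-testing purposes.
df = pd.read_csv("audit_sample.csv")  # assumed columns: zip_code, race, ...

v = cramers_v(df["zip_code"], df["race"])
if v > 0.3:  # illustrative cutoff; any threshold should come from the entity's own policy
    print(f"zip_code is strongly associated with race (Cramér's V = {v:.2f}); "
          "treat it as a potential proxy and escalate for review.")
```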

In some areas, such as age and sex, many clinical decisions are quite appropriately and explicitly based on those characteristics.  But when it comes to race and other protected categories, an algorithm typically is not based explicitly on those categories.  However, OCR “noted that use of clinical algorithms may result in discriminatory outcomes when variables are used as a proxy for a protected basis, and that discrimination may result from correlations between a variable and a protected basis.” (p. 380).  OCR wants providers and payers to be sensitive to that risk and to look out for algorithms that may unfairly discriminate based on these proxies.  OCR warns that “covered entities should exercise caution when using patient care decision support tools that are known to use indirect measures for race, color, national origin, sex, age, or disability, which could result in prohibited discrimination.”  (p. 380).

OCR does not expect providers and payers to examine the actual data set used by developers to train their algorithms.  That’s very kind of OCR to acknowledge, but here’s the rub: According to OCR, “[I]f a covered entity does not know whether a developer’s patient care decision support tool uses variables or factors that measure race, color, national origin, sex, age, or disability but has reason to believe such variables or factors are being used, or the covered entity otherwise knows or should know that the tool could result in discrimination, the covered entity should consult publicly available sources or request this information from the developer.”  (emphasis added, p. 380-81). So, apparently, that’s the trigger. 

The key phrases in OCR’s comments are “reason to believe” that such variables or factors are being used and “knows or should know that the tool could result in discrimination.” Those phrases trigger the duty of further inquiry, and they are incredibly broad.  As we will discuss later, this is apparently also a flexible standard (putting it charitably) under which more sophisticated institutions will be expected to meet a higher bar.

Health care providers, in particular, will be exposed to more information along these lines once ONC’s final rule on transparency in algorithm development takes full effect. [89 FR 1192] Beyond that, OCR notes in the final rule many different ways that a health care provider or payer might, or should, become aware of potential discrimination in the algorithms it uses.  Those include:

  • Federal rulemakings, such as the proposed rule at issue here (p. 381)
  • Bulletins and advisories published by HHS, including the Agency for Healthcare Research and Quality (AHRQ) and FDA (p. 382)
  • Published medical journal articles (p. 383)
  • Popular media (p. 384)
  • Health care professional and hospital associations (p. 384)
  • Health insurance-related associations (p. 384)
  • Various nonprofit organizations in the field of AI (p. 384)

OCR also specifically elaborates on the meaning of the “reasonable efforts to identify” requirement.  OCR may consider, among other factors:

  1. the covered entity’s size and resources (e.g., a large hospital with an IT department and a health equity officer would likely be expected to make greater efforts to identify tools than a smaller provider without such resources);
  2. whether the covered entity used the tool in the manner or under the conditions intended by the developer and approved by regulators, if applicable, or whether the covered entity has adapted or customized the tool;
  3. whether the covered entity received product information from the developer of the tool regarding the potential for discrimination or identified that the tool’s input variables include race, color, national origin, sex, age, or disability; and
  4. whether the covered entity has a methodology or process in place for evaluating the patient care decision support tools it adopts or uses, which may include seeking information from the developer, reviewing relevant medical journals and literature, obtaining information from membership in relevant medical associations, or analyzing comments or complaints received about patient care decision support tools. (p. 384)
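
The rule does not prescribe any particular format for such a methodology or process, but as a purely illustrative sketch, a covered entity might capture factors (2) through (4) above in a structured intake record for each tool it adopts, so that its “reasonable efforts” are documented and repeatable. The field names and the ToolReview structure below are our own hypothetical construct, not anything OCR requires.

```python
# Hypothetical intake record mirroring OCR's "reasonable efforts" factors.
# Field names and structure are illustrative only; the rule mandates no format.
from dataclasses import dataclass, field

@dataclass
class ToolReview:
    tool_name: str
    developer: str
    # Factor 2: is the tool used as intended/approved, or adapted locally?
    used_as_intended: bool
    customizations: list[str] = field(default_factory=list)
    # Factor 3: developer disclosures and any protected-category input variables.
    developer_disclosures: list[str] = field(default_factory=list)
    protected_inputs: list[str] = field(default_factory=list)  # e.g., ["age", "sex"]
    # Factor 4: evidence of an evaluation process (developer inquiries, literature, complaints).
    evaluation_sources: list[str] = field(default_factory=list)
    complaints_reviewed: bool = False

    def needs_mitigation_review(self) -> bool:
        """Route the tool to the mitigation step (§ 92.210(c)) if any risk signal is present."""
        return bool(self.protected_inputs or self.developer_disclosures or not self.used_as_intended)

review = ToolReview(
    tool_name="Sepsis risk score",
    developer="ExampleVendor",
    used_as_intended=True,
    protected_inputs=["age"],
    evaluation_sources=["vendor documentation", "peer-reviewed validation study"],
)
print(review.needs_mitigation_review())  # True -> document and plan mitigation
```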

Those criteria make it perfectly clear that OCR will expect more of larger organizations, and part of what it expects is for these organizations to have specialized compliance programs in place for the creation and use of AI tools. Such dedicated compliance programs will mitigate and manage the enterprise risks associated with these tools and will demonstrate to third parties, if needed at a later time, that the health care organization was a responsible corporate citizen in dealing with them.

In summary, one of the basic points OCR is trying to make is that “covered entities must exercise due diligence when acquiring and using such tools to ensure compliance with § 92.210.”  (p. 381).

Mitigations Expected

Some commenters suggested that the final rule include specific required mitigations, such as requiring covered entities to:

develop and implement policies specific to covered entities’ use of clinical algorithms; require staff training; use clinical algorithms in accordance with FDA clearance and developer’s intended uses; use peer-reviewed research to inform adjustments to clinical algorithms; notify patients of suspect clinical algorithms; request an assessment of discriminatory inputs from developers; neutralize any discriminatory inputs by using the predominant cohort in the tool’s training data; and submit annual reports to OCR regarding their use of clinical algorithms and mitigation efforts. (p. 385). 

While agreeing wholeheartedly about the need for mitigations, OCR declined to get that specific. Indeed, in one of the few practical nods in the final rule, OCR observed that “it is not always possible to completely eliminate the risk of discriminatory bias in patient care decision support tools, and these tools also serve important health care functions.”  (p. 385).  Using several examples, OCR observed that in some cases problematic algorithms should be abandoned, while in other cases such algorithms could simply be managed more carefully.

As a general matter, OCR expressed strong support in the final rule for “the National Institute of Standards and Technology’s (NIST) Artificial Intelligence Risk Management Framework, which explains that AI bias mitigation helps minimize potential negative impacts of AI systems while providing opportunities to maximize positive impacts, without articulating express mitigation measures.”  (p. 386).  OCR opted for the more flexible, open-ended approach because, in its view, one size does not fit all and the risk profiles of different algorithms vary tremendously.

Further, as already observed, OCR really likes compliance programs. To that end, OCR explained:

In the Proposed Rule, 87 FR 47883, we noted that covered entities may choose to mitigate discrimination by establishing written policies and procedures governing how clinical algorithms will be used in decision-making, including adopting governance measures; monitoring any potential impacts and developing ways to address complaints; and training staff on the proper use of such systems in decision-making. We encourage covered entities to take these and other additional mitigating efforts to comply with § 92.210.  (p. 386).
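
As one concrete, and entirely hypothetical, illustration of the “monitoring any potential impacts” piece, a covered entity might periodically compare a tool’s recommendation rates across demographic groups and route large gaps into its complaint and governance process. The sketch below is our own (the column names and the threshold are illustrative); the rule requires nothing this specific.

```python
# Hypothetical monitoring sketch: compare a tool's positive-recommendation
# rate across demographic groups and flag large gaps for human review.
# Column names and the 10-percentage-point threshold are illustrative only.
import pandas as pd

def flag_disparities(log: pd.DataFrame, group_col: str, outcome_col: str,
                     gap_threshold: float = 0.10) -> pd.Series:
    """Return per-group recommendation rates; warn if the spread exceeds the threshold."""
    rates = log.groupby(group_col)[outcome_col].mean()
    if rates.max() - rates.min() > gap_threshold:
        print(f"Recommendation-rate gap of {rates.max() - rates.min():.0%} across "
              f"{group_col}; escalate to the governance committee for review.")
    return rates

# Example: a log of tool outputs with a demographic field collected for auditing.
log = pd.DataFrame({
    "race": ["A", "A", "B", "B", "B", "A"],
    "referred": [1, 1, 0, 0, 1, 1],
})
print(flag_disparities(log, group_col="race", outcome_col="referred"))
```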

As already explained above, it also depends on which protected class is at issue.  OCR states, “a covered entity’s mitigation efforts under § 92.210(c) may vary based on the input variable or factor, as well as the purpose of the tool in question. OCR acknowledges that some input variables may generate greater scrutiny, such as race, which is highly suspect, as compared to other variables, such as age, which is more likely to have a clinically and evidence-based purpose.”  (p. 387).

Conclusion

OCR concludes its commentary in the final rule on an ominous note.  Specifically, the notice states, “OCR seeks comment on whether we should engage in additional rulemaking to expand the scope of § 92.210, and if so, in what ways.” (p. 391).  Apparently, OCR is interested in extending the rule even to algorithms that do not directly impact patient care.  For example, future rulemaking might address purely economic issues, such as determining the cost of a procedure, and other administrative uses of algorithms.

Regarding the existing final rule, covered entities have 300 days after publication on May 6, 2024 to comply.  If we have done our math right, that is March 2, 2025.  To us, this means that the larger health care providers and payers, in particular, will want to put in place compliance programs specific to this area that, among other things, institutionalize a process for scanning public sources for possible discrimination in the use of these decision support tools and then require the institution to appropriately mitigate any risks it finds. That sort of proactive information gathering would presumably go a long way toward convincing federal regulators that the covered entity was duly diligent.
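
For readers who want to check that arithmetic, a short sketch using Python’s standard library reproduces the date, assuming, as we do above, that the 300 days run from the May 6, 2024 publication date:

```python
# Sketch of the compliance-deadline arithmetic: 300 days from the
# May 6, 2024 Federal Register publication date assumed in this post.
from datetime import date, timedelta

publication = date(2024, 5, 6)
deadline = publication + timedelta(days=300)
print(deadline)  # 2025-03-02
```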

That said, in our humble opinion, this final rule, and in particular the comments in the preamble, seem to create an unlevel playing field in which larger, more sophisticated organizations are held to a considerably higher standard.

The author would like to thank his partner Lynn Shapiro Snyder for her suggestions on this post.
