1557 Final Rule Protects Against Bias in Health Care Algorithms
The 2024 Final Rule implementing § 1557 of the Patient Protection and Affordable Care Act includes an important protection against the use of some algorithms in health care, or, as the rule terms them, “discriminatory patient care decision support tools.” HHS defined this term broadly and inclusively, going beyond the proposed term of “clinical algorithms” to include other forms of automated or augmented decision-making tools, ranging from flowcharts and clinical guidelines to complex computer algorithms, artificial intelligence (AI), and machine learning. This broadening of the term from the proposed rule is in line with NHeLP’s comments, which highlighted that many of the harms from automated decision-making systems, such as algorithms, come from an array of tools of varying sophistication.

Algorithms and other forms of automated decision-making systems (ADS), used widely in Medicaid and in health care generally, are a recognized source of bias, discrimination, and wrongful denials of necessary care and benefits. The proposed rule rightfully recognized that covered health programs and activities cannot discriminate through the use of clinical algorithms under § 1557. The finalized provision, § 92.210, prohibits discrimination by the broader category of patient care decision support tools, which will protect more people against the different types of tools used in health programs and activities.

The Final Rule also imposes ongoing and affirmative duties on covered entities to identify patient care decision support tools that use inputs measuring a protected characteristic and to mitigate the risk of discrimination from each tool’s use. We believe, however, that the affirmative duty to identify tools and mitigate the risk of discrimination does not go far enough. It should apply to all tools in use, not just those with discriminatory inputs, because bias can occur in tools regardless of any specific input. This would provide important protection against proxy discrimination, which is particularly crucial as tools are evaluated for discrimination and entities seek to remedy problems by simply removing certain protected-class-related inputs, a step that may not resolve the discriminatory impact. The preamble specifically cautions covered entities about using tools that are known to use indirect measures for protected classes. Given the institutional biases in health care, this admonition should be important moving forward as tools evolve. Yet we believe this language should be in the rule itself.

HHS also made other important decisions regarding this provision. It declined commenters’ requests to create a safe harbor for covered entities that use clinical algorithms within the scope of their intended purpose. Allowing a safe harbor would create “finger pointing” when harm from a tool is identified. The Final Rule makes clear that each entity, from developer to user, is required to mitigate the risk of discrimination, as long as it is a covered entity under § 1557. HHS did note, however, that while an entity’s size will not exempt it from the requirements, its size and resources will factor into the reasonableness of its mitigation efforts and compliance.

How this section of the Final Rule is implemented will be important because several of its terms are undefined. While there are some indicators in the preamble about the meaning of “reasonable efforts to mitigate the risk of discrimination,” the interpretation of this language will be central to evaluating the effectiveness of this section. Similarly, the level of effort deemed reasonable to identify discriminatory tools in practice will make a significant difference in whether tools are diligently identified and evaluated so that risks are mitigated. In addition, HHS seeks comment on whether the rule should also cover tools used by covered entities that do not directly impact patient care and clinical decision-making but may still result in unlawful discrimination under § 1557. As noted in the preamble discussing § 92.210, many tools used in health care will not fall under the section’s definition of patient care decision support tools.

The prohibition on discriminatory patient care decision support tools is a strong step toward addressing harm from these tools, and strong enforcement will put further guardrails in place to generate critically needed examination of, and protection against, some types of discriminatory ADS. As the regulatory landscape for algorithms and AI rapidly evolves, the Final Rule’s broad language is necessary so that protection from discrimination can evolve with it.