Preventing Harm from Automated Decision-Making Systems in Medicaid

Algorithms and other automated decision-making systems (ADS) are omnipresent features of today's Medicaid systems. While automated decisions offer some advantages, the idea that they are free from bias or error is false. Humans are behind the design, the programming, and the ultimate decisions about what to do with the results, and bias or error can creep in at any of these stages. This is particularly true in health care, where institutional biases (including those based on race, gender, age, disability, and income) are endemic. NHeLP recently highlighted some of these bias issues in comments to AHRQ. From decades of experience working in this area, we know that ADS are rarely, if ever, perfect. The environments around them must therefore integrate laws and policies that prioritize transparency, understandability, and exceptions processes that can reject the automated result.

This post is the first in a series on ADS. It provides an overview of ADS and NHeLP's experience working with them. Subsequent posts will discuss ADS issues specific to LGBTQ individuals and gender markers, maternal health, age and disability, and eligibility and enrollment.

Automated decision-making systems (ADS) are behind many Medicaid coverage denials and reductions. Yet these systems often cannot fully explain to an individual the reasons behind the decisions they generate. This failure to explain likely violates individuals' legal rights, and it may also cloak problems with the ADS, which may be using data, analyses, and assumptions that incorporate (and sometimes compound) bias based on race, gender, disability, age, and other factors. For decades, NHeLP has advocated that state Medicaid agency computer systems, utilization review standards, and other ADS be accessible, not discriminate, and provide appropriate notice and complaint processes. What we have learned through this work is that ADS will never be perfect, so the safety net of individual protections around these decisions must be robust. We have also learned that fixing a system after it has been implemented is far harder than building the necessary parameters into the project from the outset. Too many ADS are created, designed, and tested behind closed doors, only to emerge and dramatically change individuals' coverage, creating significant confusion and harm, much of which could be prevented through greater transparency and appropriate due process protections.

Problems with ADS in Medicaid have shown up in a variety of ways in our litigation, including wrongful denials and reductions of services.

Such denials are confusing to those who need the services, and they often fail to meet the notice requirements of the Medicaid Act and the U.S. Constitution.

Transparency of ADS Throughout the Lifecycle of Decision-making

Much of NHeLP's experience with ADS has centered on the failure to provide even minimal information to the affected individual about what health care will or will not be provided to them, and why. We commonly encounter ADS that states, managed care organizations, and other parties try to shield from disclosure, citing trade secret or other purported intellectual property protections. Although these protections can sometimes be overcome, the difficulty for individuals persists.

Even with the information in hand, understanding the complex design and coding of an ADS can be difficult. That complexity, and the level of technical knowledge required, often makes it hard to identify potential coding errors and sources of bias. Health care has a long history of institutional bias, both explicit and implicit, that has centered the white, heteronormative experience. Such bias can only be examined if the ADS is transparent. For example, the widely cited 2019 study that identified racial bias in a major health care decision-making algorithm was possible only because the algorithm's underlying data and methodology were available. Significant information about an ADS must be available for public examination, because even a system that is facially “neutral” in the factors it uses may incorporate biases.
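
To make the point concrete, here is a deliberately simplified, hypothetical sketch (it is not the algorithm from the 2019 study or from any actual Medicaid or vendor system): a scoring rule that uses past spending as a facially neutral proxy for health need can still disadvantage a group whose historical access to care was worse, because equal needs show up as unequal spending.

```python
# Hypothetical illustration only -- not any actual Medicaid or vendor algorithm.
# A "neutral" rule that refers the highest past spenders to extra services can
# still disadvantage a group whose historical access to care was worse.
import random

random.seed(0)

def simulate_patient(group: str) -> dict:
    need = random.randint(1, 10)            # true health need, 1 (low) to 10 (high)
    access = 1.0 if group == "A" else 0.75  # assumed historical access gap
    return {"group": group, "need": need, "spending": need * 1000 * access}

patients = [simulate_patient("A") for _ in range(5000)] + \
           [simulate_patient("B") for _ in range(5000)]

# "Neutral" ADS rule: refer the top 20% of past spenders to care management.
cutoff = sorted(p["spending"] for p in patients)[int(0.8 * len(patients))]

for grp in ("A", "B"):
    high_need = [p for p in patients if p["group"] == grp and p["need"] >= 8]
    referred = sum(1 for p in high_need if p["spending"] >= cutoff)
    print(f"Group {grp}: {referred / len(high_need):.0%} of high-need patients referred")
```

In this toy example the rule never looks at group membership, yet high-need people in the group with worse historical access are referred far less often. That disparity is only visible because the data and the scoring rule are open to inspection.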

In addition to errors in the systems themselves, there are near-universal problems with programming notices of action to include the information the law requires when Medicaid tells a person “no”: the decision the ADS produced, the rationale for it, and how to contest it. Sometimes this happens because an early design decision left the appropriate fields out of the notice. But sometimes the problem is far more central to the whole system: the requirement that the ADS be able to put a clear explanation of how and why the decision was made into the notice was never prioritized throughout the lifecycle, and it is often incredibly difficult to fix at a later date. While some developers of ADS tools say their systems should simply be trusted, that is not what the law requires. Medicaid coverage is protected by both statute and the Constitution precisely because it is so central to a person's life and well-being. The coverage decision must therefore be effectively explained to the affected individual so they can appeal and fight for the services they need in cases of wrongful denial. The systemic solution is not simply better notices after a problem has been identified; by then it is too late. Instead, there must be greater transparency throughout the system's development and implementation, so that the ADS is built to meet these requirements and so that problems can be identified before they are baked into the system.
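
One way to make the "build it in from the start" point concrete is to treat the legally required explanation as a mandatory part of the decision record itself. The sketch below is purely illustrative; the field names and structure are assumptions, not drawn from any actual state system or statute. The idea is a notice object that simply cannot be created without a plain-language reason, the rules and data relied on, and appeal and exception instructions.

```python
# Illustrative sketch only -- field names and structure are hypothetical, not
# taken from any actual state Medicaid system. The idea: make the explanation
# the law requires a mandatory part of the decision record from day one, so a
# notice that cannot explain itself cannot be issued at all.
from dataclasses import dataclass

@dataclass
class NoticeOfAction:
    beneficiary_id: str
    action: str                  # e.g., "denial", "reduction", "termination"
    effective_date: str
    plain_language_reason: str   # why this specific decision was reached
    rules_relied_on: list[str]   # the criteria or regulations applied
    data_relied_on: list[str]    # the inputs the ADS actually used
    appeal_instructions: str     # how, and by when, to request a fair hearing
    exceptions_process: str      # how to seek an individualized exception

    def __post_init__(self) -> None:
        # Refuse to issue a notice that is missing any required explanation.
        required = {
            "plain_language_reason": self.plain_language_reason,
            "rules_relied_on": self.rules_relied_on,
            "appeal_instructions": self.appeal_instructions,
            "exceptions_process": self.exceptions_process,
        }
        missing = [name for name, value in required.items() if not value]
        if missing:
            raise ValueError(f"Notice missing required explanation fields: {missing}")
```

Retrofitting fields like these onto a system that never tracked why it made a decision is where projects struggle; designing for them up front is far easier.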

To protect against errors and bias, there must be deliberate decision-making about how an ADS is used. This includes identifying any population limitations or potential sources of bias, as well as the procedures and qualifications of those using and interpreting the system. Some ADS should be explicitly treated only as guidance, with appropriate instruction to those using the results. Although it may be harder to put sufficiently protective processes around some systems than others, it should be possible for most systems, unless a system is designed such that it cannot explain an outcome.

Although ADS Perfection is Illusory, Systems Can Readily Include Built-in Protections

ADS will continue to be part of the health care system, and thus of Medicaid. But such systems will never perfectly predict the needs or outcomes of every person. They are largely built on statistical analyses that average, find correlations, or otherwise group people together. There is also the ever-present human element, whether it is a person inputting data, asking questions, or even placing a monitoring device. And there is the potential for programming errors, no matter how robust the pre-launch testing. In designing ADS, we must operate from the assumption that there will be outliers and errors, so the processes around ADS decisions must be protective, robust, and easy for an affected individual to use.
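
A toy example of why that matters (all numbers and names below are invented for illustration): an allocation rule based on group averages works for a typical person but can sharply under-serve an outlier unless an exceptions process can override the automated result.

```python
# Invented numbers and names, for illustration only. An averaging-based rule
# assigns the typical hours for an assessment band; an outlier whose documented
# need exceeds the average is under-served unless an exception can override it.
AVERAGE_WEEKLY_HOURS = {"low": 10, "moderate": 25, "high": 40}

def automated_allocation(score_band: str) -> int:
    """What a purely statistical ADS assigns: the group average."""
    return AVERAGE_WEEKLY_HOURS[score_band]

def allocation_with_exception(score_band: str, documented_need: int) -> int:
    """Same rule, but an outlier triggers individualized review instead of
    silently receiving the group average (here, the documented need wins)."""
    automated = automated_allocation(score_band)
    return documented_need if documented_need > automated else automated

# An outlier: assessed "high," but needs 65 hours of support per week.
print(automated_allocation("high"))            # 40 -- the averaged result
print(allocation_with_exception("high", 65))   # 65 -- after exception review
```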

In summary, an ADS process should ensure the person understands:

  • how information they provide to an ADS will be used;
  • the decision made by the ADS;
  • an explanation of the decision specific enough to let them evaluate whether it was based on accurate information and, if they contest it, to mount an effective case;
  • the exceptions process that is available; and
  • how to appeal the decision.

Ultimately, transparency throughout the ADS lifecycle will move decisions now made in an opaque black box into the light, helping correct for possible errors and sources of bias and ensuring that a person's rights are protected when a decision is made about their health care.


Additional Resources

Race-Based Prediction in Pregnancy Algorithm Is Damaging to Maternal Health

NHeLP AHRQ Comments

Demanding Ascertainable Standards: Medicaid as a Case Study

Q&A: Using Assessment Tools to Decide Medicaid Coverage

Ensuring that Assessment Tools are Available to Enrollees

Medicaid Assessments for Long-Term Supports & Services (LTSS)

Evaluating Functional Assessments for Older Adults

Opportunities for Public Comment on HCBS Assessment Tools

A Promise Unfulfilled: Automated Medicaid Eligibility Decisions

Cases

A.M.C. v. Smith

Darjee v. Betlach

Hawkins v. Cohen

L.S. v. Delia
