In an era when artificial intelligence (AI) and other automated decision-making systems increasingly make decisions about people’s lives, people are rightly worried about how AI may affect their jobs, use their online posts, invade their privacy, and affect their access to sound medical care. AI experts and the public alike have sounded the alarm that government should be moving to improve accountability and protections in this fast-moving field. Instead, the recent Congressional proposal would ban state and local governments from enforcing any law or regulation about AI and similar systems for a full decade. At the same time, the proposal would require the federal government to modernize its systems with commercial systems; the proposed reconciliation bill includes $500 million through 2035 for such purchases. So while the federal government will be funneling money to private, commercial interests for AI systems that will not have to meet significant standards for safety or accountability, state and local governments will be stripped of their ability to protect their citizens from any harm from these systems, or from any other AI systems affecting people. Given the 10-year safe harbor being granted by Congress, federal restrictions seem unlikely to provide the level of protection needed, and would likely take considerable time to materialize.
States Lead on AI Regulation & Protection
States have taken the lead on efforts to regulate AI and protect people from harmful systems. State legislation varies in what it requires, but there is growing interest in mandating increased transparency and assurances in the use of AI. Last year, Colorado enacted landmark legislation establishing consumer protections against discrimination in AI systems. California’s legislature has introduced 23 AI-related bills so far in 2025. Some states have begun to regulate AI use directly, including through disclosure requirements, while others have incorporated AI protections into existing consumer protection laws.
States have also helped to enforce new protections. For example, Texas secured a “first-of-its-kind settlement” against an AI healthcare technology company for deceptive claims about the accuracy of its healthcare AI products. The New York Attorney General brought an action against health insurance companies based in part on their use of an algorithm that allegedly led to denials of needed mental health care. State attorneys general have also issued AI guidance, urged the federal government to take action, and taken steps to limit or prevent discrimination by such systems, including in health care. Governors have issued executive orders and taken other steps to protect people from harm.
The bill would slam the brakes on these kinds of state efforts to protect people from harmful AI, including simple measures that require basic assurances, disclosure, and accountability.
Harmful Impact of AI on Low-Income People, Including Medicaid Coverage & Services
In the United States, 92 million low-income people are affected by AI-driven decisions in some area of their daily lives, including 72 million low-income people who are exposed to AI decision-making in Medicaid. These tools are frequently used in Medicaid eligibility and enrollment processes, needs assessments, and prior authorization of medically necessary services. The unprecedented speed and scale at which these tools operate and spread test the limits of existing accountability frameworks.
Make no mistake – these systems already cause real harm. AI tools, including algorithms, are already deployed in public benefit programs, with some devastating effects. These tools often rely on faulty and unreliable data, leading to inappropriate loss of benefits. They add black-box eligibility criteria not permitted by law. They produce one-size-fits-all outcomes rather than person-centered determinations. This is especially true for people with disabilities.
Regulation of Risk Is a Normal and Needed Function of Government
The call for accountability and regulation of AI is a call for government to perform a normal function. Regulations ensure that our cars have functional seat belts that protect us in accidents, that our food is safe to eat, and that our prescriptions are tested and their risks identified. But industry, and now the federal government, is proposing to give technology free rein to gallop forward into “innovation” with no regard for the people it harms in the process.
AI, especially in health care, can cause significant and irreparable harm: denied care can lead to permanent injury or death. Identifying harm from AI systems only after it has occurred, and then attempting to fix the systems and move on, is unacceptable. Society and government generally do not allow unfettered experimentation on people, much less on those who do not consent to it. The proposal is particularly amoral because many of those affected will be low-income individuals who have limited time and financial resources to fight wrongful denials and few or no other options for accessing needed care. That is especially so when these technologies will line the pockets of commercial tech vendors for the next 10 years.
One developer of a Medicaid algorithm said, “you’re going to have to trust me that a bunch of smart people determined this is the smart way to do it.” He compared the complexity of an algorithm to a washing machine – a box that automatically cleans your clothes even if you don’t know how it works. It turns out the algorithm in question contained errors and caused significant harm to Medicaid long-term care enrollees in that state. His analogy, viewed from a different perspective, sheds light on the current proposal: washing machines are in fact regulated for safety, performance, and energy efficiency, among other things. So yes, AI, algorithms, and other forms of automated decision-making should be more like washing machines, in that they should be tested and regulated for safety and consumer protection. States need to be able to create and enforce such requirements and protect people as needed.
Creating a 10-year license for AI developers to run wild with a technology that has already proven harmful and in need of regulation is irresponsible. Requiring the government to update its technology in a way that likely favors profits over protection and privacy compounds the recipe for disaster. This new provision creates a perfect environment for the proliferation of harmful, wasteful systems, and it tramples on states’ autonomy, preventing them from doing anything about it. With fewer backstops from state enforcement, the commercial tech industry would be incentivized to provide less than its best in order to maximize profit at the expense of government funding and the people it is supposed to serve.
For more on NHeLP’s work regarding accountability of algorithmic and automated decision-making systems, see our Fairness in Automated Decision-Making Systems page.