Automated systems play an increasingly important role in shaping access to health care services. Insurers frequently use algorithms and artificial intelligence (AI) to route requests, make coverage decisions, fill out records and forms, and even make recommendations about medical necessity. While automation can improve speed, relying too heavily on AI systems risks inappropriate denials, biased decision-making, and a lack of individualized clinical review. Too often, computers and algorithms end up replacing, not complementing, clinicians’ judgments and recommendations about what care you need.
A recent nationwide survey of health insurers found that most are already using automated AI systems for prior authorization (PA) requests. Across individual and group markets, roughly 3 out of every 4 plans report using AI for PA approvals, which can help reduce delays. However, a smaller but notable share (about 8–12%) uses AI to support PA denials. These automated denials put patients’ access to care most at risk.
The Administration’s recent Executive Order (EO) on AI, “Ensuring a National Policy Framework for Artificial Intelligence,” seeks to limit states’ ability to enact and enforce their own AI safeguards. Earlier this year, Congress rejected legislative proposals to limit state AI regulatory authority. The EO directs the Department of Justice (DOJ) to identify and challenge state laws it views as conflicting with the as-yet-undetermined federal AI policy, and it encourages DOJ to target state laws considered “onerous or excessive.” Although not specific to health policy, the EO squarely targets state legislation that regulates AI systems and automated decision-making in health care. While the threat of DOJ involvement creates real pressure, an EO cannot preempt state laws in this way.
In the absence of an enforceable federal AI regulatory framework, states have increasingly filled the gap by passing their own AI legislation. Two common approaches are AI-specific laws covering high-risk uses of AI and laws that constrain how prior authorization decisions are made. AI-specific state legislation often creates new nondiscrimination protections (one of the targets the EO sets out for DOJ). Other laws clarify obligations and provide for enforcement of existing law, such as setting transparency requirements and confirming that consumer protections apply to the use of AI in high-risk settings (often defined to include health care and health insurance). For example, Colorado’s landmark Consumer Protections in Interactions with Artificial Intelligence Systems Act applies to AI used in health care decisions – including utilization decisions.* It provides bias protections, requires plans to disclose key data and methodologies, and guarantees an individual’s right to appeal an AI-generated health care decision.
PA-specific legislation often requires clinician review of automated decisions, prohibits fully automated denials, and/or mandates public reporting on approval and denial patterns and processes. Texas, for example, passed legislation in 2025 prohibiting utilization review agents from using an automated decision system to issue an adverse determination without human oversight. Arizona and Maryland adopted similar laws prohibiting the use of AI as the sole basis for a medical necessity denial.
The EO threatens to weaken enforcement of these protections and to push states toward less meaningful reforms that are easier for payers to evade. Where states do not defend patients, automation may harm people who already lack protection against opaque algorithmic systems, denials that are difficult to challenge, and processes shielded as proprietary trade secrets. This is not a red state or blue state issue. The administration should let states act to protect consumers from AI harms.
The Trump administration is also pushing to expand the use of AI in health care. The Centers for Medicare & Medicaid Services (CMS), through its Innovation Center, launched WISeR (Wasteful and Inappropriate Service Reduction), a pilot program that will test AI-driven prior authorization for select items and services in traditional Medicare in six states. HHS hopes to set a precedent with the program, which is set to begin on January 1, and to expand AI to more HHS programs. HHS recently released a revised AI Strategy, which Secretary Kennedy suggested is the “template for the utilization of AI” across the federal government and which signals HHS’ commitment to being “all in” on AI. This week, HHS also released a request for information “seeking broad public input on how HHS can accelerate the adoption and use of artificial intelligence as part of clinical care for all Americans.” Once AI-enabled prior authorization is normalized in these programs and vendors can point to it as the “new, federally accepted standard,” it will be far more difficult for states to regulate.
Provider groups, including the American Medical Association, have strongly criticized WISeR. Companion bills to stop it from moving forward, the SMARTER Care Act, have been introduced in both the House and Senate. While it is encouraging that Congress is highlighting the risks of AI in PA, the legislation is unlikely to gain traction in this Congress.
As we move into the new year, the clash over AI-driven PA (along with other health care utilization management) will continue to play out at both the federal and state levels. Several developments are worth watching:
- Details about WISeR’s implementation and any emerging patterns in denials or appeals will be critically important as stakeholders assess its impact.
- Whether the SMARTER Care Act gains traction in Congress will signal how seriously lawmakers take concerns about AI in health care.
- States may pull back AI guardrails that apply to health insurers generally, and to prior authorization specifically, out of fear of preemption or legal challenges.
- States that have already moved to regulate AI and/or PA may serve as test cases for whether protections can be drafted to survive federal challenges under the EO.
Stay tuned to NHeLP’s Prior Authorization series in 2026 for more updates.
*A special Colorado legislative session pushed back implementation by six months to June 2026.