Health Law Advocates Encouraged by Biden Administration Guidance on AI

Washington, DC – Today, the Biden Administration and the Office of Management and Budget announced final guidance directing federal agencies to adopt “concrete safeguards” to protect the rights of individuals when using artificial intelligence, a technology already deployed across a wide range of applications. The National Health Law Program (NHeLP) supports robust safeguards in government use of AI and other automated decision-making systems.

“The reality is that algorithms, AI, and other automated decision-making systems (ADS) are omnipresent features of Medicaid and health care, and have been for decades,” says Elizabeth Edwards, senior attorney and lead of NHeLP’s work on automated decision-making systems. “ADS of all types have generated significant harm, and with rapid advancements in AI technology, we know that algorithms will increasingly make decisions about eligibility and services that profoundly impact whether people get the services they need and for which they are eligible. Enrollees often do not understand that ADS has been used to deny their care, much less how the decision was made. The increasing complexity of ADS, combined with a lack of transparency and other protections, exacerbates this problem. This directive is an important step toward increased transparency, risk management, and protection of critical rights. We are encouraged by this action from the Biden Administration and will continue to advocate for greater protections in the use of all types of ADS in Medicaid and other health care programs.”

Learn more about our ADS work at www.healthlaw.org/algorithms and read our Principles for Fairer, More Responsive Automated Decision-Making Systems. Also, check out our comments to OMB regarding the Artificial Intelligence Memorandum.