Comments on NIST Proposal for Identifying & Managing Bias in AI

Executive Summary

As awareness grows of the biases baked into artificial intelligence and other automated decision-making systems, and of these systems' real-world impacts on people, federal agencies like the National Institute of Standards and Technology (NIST) are developing risk management frameworks for them. NHeLP submitted comments on NIST's recent Proposal for Identifying and Managing Bias in Artificial Intelligence. In particular, NHeLP noted that many systems used in federally funded programs are required to meet due process protections, a requirement the proposal did not mention. The comments also raised issues around disability, language access, the purpose and function of an AI system, and its real-world impact. Finally, the comments incorporated by reference NHeLP's comments to AHRQ on algorithmic bias in health care, which provide additional examples of bias in automated decision-making systems along with NHeLP's recommendations for protections around such systems.