Part 3 - AI regulation in the US
This is Part 3 of our 'Regulation of AI' series.
Back in October 2022, the White House published federal guidance – a Blueprint for an AI Bill of Rights identifying five principles intended to guide the design, use, and deployment of automated systems. It was designed to operate as a roadmap for protecting the public from AI harms and was followed in October 2023 by the US President's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The Executive Order sets out eight "guiding principles and priorities", details how they should be put into effect, and imposes reporting requirements.
Complementing these US government initiatives, federal agencies such as the National Institute of Standards and Technology (NIST) and the Department of Homeland Security have been tasked with issuing standards and guidance, and existing regulatory authorities have been directed to monitor and control the use of AI in ways that will impact AI providers and users.
In January 2023, NIST released voluntary guidance in the form of the AI Risk Management Framework (AI RMF 1.0) for organisations designing, developing, deploying, or using AI systems. The framework outlines the characteristics of trustworthy AI systems and how to balance them within the context in which AI is used. It also provides guidance on how to govern, map, measure, and manage risk throughout the AI lifecycle. There are obligations to maintain records covering procedural aspects of the AI system, to train the individuals who will be responsible for adhering to policies and procedures, and to monitor the functionality and behaviour of systems.
NIST has also published a companion AI RMF Playbook as well as several tools (crosswalks) mapping the AI RMF to other AI standards, to help users implement the framework. These include (a) a crosswalk between AI RMF (1.0) and ISO/IEC FDIS 23894 Information technology – Artificial intelligence – Guidance on risk management, and (b) an illustration of how the NIST AI RMF trustworthiness characteristics relate to the OECD Recommendation on AI, the EU AI Act, Executive Order 13960, and the Blueprint for an AI Bill of Rights.
In July 2024, NIST released NIST-AI-600-1, Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile, a companion resource to AI RMF 1.0, along with three other publications in support of President Biden's Executive Order: Secure Software, AI Standards, and an initial public draft of Managing Misuse Risk for Dual-Use Foundation Models. By the end of 2024, NIST is tasked with publishing guidelines on the efficacy of differential privacy guarantee protections and guidance on synthetic content authentication.
Prior to the President's Executive Order, in July 2023 Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI agreed to voluntary commitments to move toward safe, secure, and transparent development of AI technology (Apple joined them in July 2024). These tech companies will not only face different regimes globally but, within the US, will have to deal with legislation that differs between states. California had been expected to lead the way with an AI framework. However, as of September 2024, the Bill known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB-1047) remains in Californian limbo, having been vetoed by the State's Governor. In contrast, in May 2024, Colorado passed a consumer protection focused Bill: "Concerning Consumer Protections in Interactions with AI". Some online platforms have expressed concern at the lack of policy at a federal level versus the hundreds of AI bills introduced at state level, which creates a patchwork effect.