The comprehensive, even sweeping, set of guidelines[1] for artificial intelligence that the White House unveiled in an executive order on Oct. 30, 2023, show that the U.S. government is attempting to address the risks posed by AI.
Technology is typically evaluated for performance, cost and quality[6], but often not equity, fairness and transparency. In response, researchers and practitioners of responsible AI have been advocating for:
Another important initiative outlined in the executive order is probing for vulnerabilities of very large-scale general-purpose AI models[19] trained on massive amounts of data, such as the models that power OpenAI’s ChatGPT or DALL-E. The order requires companies that build large AI systems with the potential to affect national security, public health or the economy to perform red teaming[20] and report the results to the government. Red teaming is using manual or automated methods to attempt to force an AI model to produce harmful output[21] – for example, make offensive or dangerous statements like advice on how to sell drugs.
Similarly, the public is at risk of being fooled by AI-generated content. To address this, the executive order directs the Department of Commerce to develop guidance for labeling AI-generated content[23]. Federal agencies will be required to use AI watermarking[24] – technology that marks content as AI-generated to reduce fraud and misinformation – though it’s not required for the private sector.
The U.S. government takes steps to address the risks posed by AI.
What the executive order doesn’t do
A key challenge for AI regulation is the absence of comprehensive federal data protection and privacy legislation. The executive order only calls on Congress to adopt privacy legislation, but it does not provide a legislative framework. It remains to be seen how the courts will interpret the executive order’s directives in light of existing consumer privacy and data rights statutes.
Without strong data privacy laws in the U.S. as other countries have, the executive order could have minimal effect on getting AI companies to boost data privacy. In general, it’s difficult to measure the impact that decision-making AI systems have on data privacy and freedoms[27].
It’s also worth noting that algorithmic transparency is not a panacea. For example, the European Union’s General Data Protection Regulation legislation mandates “meaningful information about the logic involved[28]” in automated decisions. This suggests a right to an explanation of the criteria that algorithms use in their decision-making. The mandate treats the process of algorithmic decision-making as something akin to a recipe book, meaning it assumes that if people understand how algorithmic decision-making works, they can understand how the system affects them[29]. But knowing how an AI system works doesn’t necessarily tell you why it made a particular decision[30].
With algorithmic decision-making becoming pervasive, the White House executive order and the international summit on AI safety[31] highlight that lawmakers are beginning to understand the importance of AI regulation, even if comprehensive legislation is lacking.