The Double Black Box: National Security, AI, and the Struggle for Democratic Accountability
- Published: 23 Nov 2024
- One pressing challenge posed by artificial intelligence (AI) is that its use may weaken democratic accountability for national security decisions, including the resort to force. Military and intelligence decisions are highly consequential, but paradoxically can be the most difficult for legislatures and courts to oversee because they are often classified. To ensure that executive actors adhere to the public law values of accountability, rationality, and legality, we also rely on additional, less predictable tools such as leaks, technology companies, and pressure from foreign allies. Even with these additional tools, many decry the “black box” nature of national security decision-making and the polity’s ability to constrain the Executive when it makes poor policy choices or acts unlawfully.
The rise of AI systems to enable national security decision-making - or even make autonomous decisions - will deepen this critique, because it is difficult to understand how AI algorithms reach their conclusions. Some refer to these algorithms as “black boxes,” because programmers and users generally cannot access the algorithms’ internal processes or the basis for their predictions. Military and intelligence agencies in some democracies have already begun to use AI. But how can we be confident that these AI systems comport with our laws and values? Will national security officials retain the power to override algorithmic decisions and, if so, when and how? The widespread use of AI will render national security choices inside democracies even more opaque - not only to the public, but also to those on the receiving end of the government’s national security actions, allies, legislative overseers, and even the officials making the security decisions. This “double black box” raises critical challenges for democratic accountability and for the existing international legal regime that regulates states’ use of force.
The talk will define and explore the “double black box” phenomenon, analyse its costs and benefits, and identify ways that policymakers, military and intelligence officials, and lawyers in democratic states such as the United States and Australia can reap the advantages of advanced technologies without surrendering their rule of law values.
About the speaker
Ashley Deeks is the Class of 1948 Scholarly Research Professor at the University of Virginia Law School. Her primary research and teaching interests are in international law, national security, intelligence, and the application of new technologies to those fields. She writes about the use of force, executive power, government secrecy, and the intersection of national security and AI, and she is the co-author of a leading casebook on foreign relations law. She is an elected member of the American Law Institute, a member of the State Department’s Advisory Committee on International Law, and a contributing editor to the Lawfare blog. She recently served as Special Assistant to the President, Associate White House Counsel, and Deputy Legal Advisor to the National Security Council. Before joining UVA, she served for ten years in the U.S. State Department’s Office of the Legal Adviser, including as the embassy legal adviser at the U.S. Embassy in Baghdad during Iraq’s constitutional negotiations. Deeks received her J.D. with honors from the University of Chicago Law School, where she was elected to the Order of the Coif and served as an editor on the Law Review. After graduation, she clerked for Judge Edward R. Becker of the U.S. Court of Appeals for the Third Circuit.
About the chair
Toni Erskine is Professor of International Politics in the Coral Bell School of Asia Pacific Affairs at The Australian National University and Associate Fellow of the Leverhulme Centre for the Future of Intelligence at Cambridge University. She is currently Chief Investigator of a two-year research project on ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’ funded by the Australian Department of Defence. Her research interests include the ethics of war and the impact of artificial intelligence (AI) and human-machine interaction on organised violence. Professor Erskine is the recipient of the International Studies Association’s 2024 International Ethics Distinguished Scholar Award.
This Public Lecture Series, ‘AI, Automated Systems, and the Future of War’, is part of the two-year (2023-2025) research project on ‘Anticipating the Future of War: AI, Automated Systems, and Resort-to-Force Decision Making’, generously funded by the Australian Department of Defence, and led by Professor Toni Erskine from the Coral Bell School of Asia Pacific Affairs.