Synopsis
Following the 9/11 attacks on the Twin Towers, global counterterrorism has shifted from punishing past acts of terrorism to preventing and predicting future terrorist activity. This preventive, identity-based approach to counterterrorism is enabled and advanced by a growing reliance on AI-powered predictive algorithms, human and signals intelligence, big data analytics, and visual and matching technologies.
This presentation sheds light on the deepening reliance on AI-powered surveillance tools in several counterterrorism contexts, including border security, law enforcement, and armed conflict. First, it explores how AI-powered legal tools redefine core legal concepts such as ‘protection’ and ‘security’, ‘distinction’ and ‘proportionality’. Second, it identifies several blind spots in AI-powered counterterrorism, including time and space limits, evidentiary uncertainties, and the de-skilling of human decision-makers and human judgment. Third, it illustrates some of these problems through case studies from the current wars in Gaza and Ukraine.

Based on these theoretical and empirical insights, the presentation demonstrates that, despite their promise for counterterrorism decision-making, AI-powered legal tools can also lead to operational errors and jeopardize safety and security. To improve counterterrorism decision-making and risk assessment, the presentation proposes methods to mitigate these blind spots, such as developing context-specific training to improve human-machine interaction and enhancing decision-makers’ understanding of the technical, human-technical, and legal-technical limitations that undermine legal risk assessment.
This presentation builds upon a briefing I delivered at the United Nations Security Council last year.