AI technologies are rapidly transforming military operations, cyber activities, law enforcement, border management, and counter-terrorism. Yet governance efforts remain fragmented. Debates on military AI unfold in fora such as the United Nations General Assembly and the Group of Governmental Experts on Lethal Autonomous Weapons Systems, while regulatory initiatives such as the EU AI Act address civilian and dual-use applications. At the same time, private-sector actors increasingly shape both technological development and normative frameworks.

This conference moves beyond siloed analysis. It asks what lessons can be drawn across domains where similar legal and ethical challenges recur: algorithmic targeting, discrimination and bias, accountability for cyber operations, predictive policing, biometric surveillance, and the integration of AI into decision-support systems. It explores how international humanitarian law, international human rights law, criminal law, and disarmament regimes interact with domestic regulation, soft-law instruments, and ethical governance frameworks. It also situates these debates within contemporary geopolitical dynamics, marked by technological competition, asymmetries of power, and deep public–private entanglement.

Through five thematic panels – AI-enabled targeting, AI and cyber operations, AI in law enforcement and criminal justice, the role of the private sector, and governance and regulation – the conference seeks to foster cross-fertilisation across security contexts. A keynote address by Lorna McGregor will reflect on contemporary AI developments and the need for meaningful engagement with stakeholders throughout the AI lifecycle. By convening diverse expertise, the conference aims to lay the groundwork for a holistic and practically oriented approach to securing AI across security domains.