Taming AI in Defence: Responsibility and Control in Military Applications

by Jurriaan van Diggelen

The theme "AI in authority" is especially urgent in safety-critical domains such as defence. In this talk, I begin by examining the moral responsibility of AI researchers engaging with military applications, including the "Oppenheimer dilemma": the tension between scientific progress and its potential for harm. I then explore the ethical, legal, and societal implications of AI in kinetic warfare, such as autonomous weapons and surveillance drones, emphasizing the need for meaningful human control to preserve accountability and ethical decision-making. Shifting to cognitive warfare, I address the ELSA (ethical, legal, and societal aspects) dimensions of AI-enabled influence operations, where the battlefield is the human mind itself. In this domain, a hostile army of trollbots could erode autonomy, trust, and democratic integrity. By tracing these threads, I aim to highlight the risks and responsibilities involved in delegating authority to AI systems in contexts where human lives, freedoms, and core values are directly at stake.

About the speaker: Jurriaan van Diggelen is a senior research scientist in the Human-Machine Teaming department at TNO in Soesterberg. He holds a PhD in Artificial Intelligence and currently leads the defence research programme Human-Machine Teaming. He is principal investigator of the ELSA lab consortium, which aims to assure the ethical, legal, and societal aspects of military artificial intelligence, and he chairs the NATO group on meaningful human control of AI-based systems.