ABSTRACT

This chapter addresses the question of ‘meaningful human control’ of autonomous weapons. Meaningful human control can be exercised at five points: Article 36 review, the policy loop, activation, the firing loop and deactivation. Article 36 review involves lawyers determining whether an autonomous weapon can be used in compliance with International Humanitarian Law (IHL). The policy loop is the process by which policy rules are decided. Even if a machine can initiate its own policy rules, humans should still review and approve them, which implies that such rules must be expressed in a form inspectable by humans. Activation is the decision by a human to turn on an autonomous weapon, knowing what it is programmed to do. The firing loop comprises the critical functions of selecting and engaging targets. Deactivation is the decision by a human to turn off an autonomous weapon (or order it back to base). Some sizing metrics for ‘moral competence’ are suggested. The chapter concludes by looking forward to morally competent security robots in production, and to future research that will extend the test-driven method of machine ethics to moral problems beyond the ‘foundational’ concern of human security.