ABSTRACT

There is only modest evidence that, from a research and development standpoint, the application of lethality by autonomous systems is currently treated any differently than other weaponry. This is typified by informal commentary in which some individuals state that a human will always be in the loop regarding the application of lethal force to an identified target. Often the use of lethality in this context is considered more from a safety perspective [DOD 07a] than from a moral one. But if a human being in the loop is the flashpoint of this debate, the real question then becomes: at what level is the human in the loop? Will it be confirmation prior to the deployment of lethal force for each and every target engagement? Will it be at a high-level mission specification, such as “Take that position using whatever force is necessary”? Several military robotic automation systems already operate at a level where the human is in charge of and responsible for the deployment of lethal force, but not in a directly supervisory manner. Examples include the Phalanx system for Aegis-class cruisers in the Navy, “capable of autonomously performing its own search, detect, evaluation, track, engage and kill assessment functions” [U.S. Navy 08] (Figure 2.1); the MK-60 encapsulated torpedo (CAPTOR) sea mine system, one of the Navy’s primary antisubmarine weapons, which is capable of autonomously firing a torpedo; cruise missiles (Figure 2.2); Patriot antiaircraft missile batteries; “fire and forget” missile systems generally; and even antipersonnel mines (generally considered unethical due to their indiscriminate use of lethal force*) or, alternatively, other more discriminating classes of mines (e.g., antitank). These devices can even be considered robotic by some definitions, as they are all capable of sensing their environment and actuating, in these cases through the application of lethal force.