Can security automata (robots and AIs) correctly make moral decisions to apply force to humans? If they can make such decisions, ought they to be used to do so? Will security automata increase or decrease aggregate risk to humans? What regulation is appropriate? Addressing these important questions, this book examines the political and technical challenges of the robotic use of force.

The book presents accessible, practical examples of the ‘machine ethics’ technology likely to be installed in military and police robots, as well as in civilian robots with everyday security functions such as childcare. By examining how machines can pass ‘reasonable person’ tests to demonstrate measurable levels of moral competence, and how they can determine the ‘spirit’ as well as the ‘letter’ of the law, the author builds on existing research to define conditions under which robotic force can and ought to be used to enhance human security.

The scope of the book is thus far broader than ‘shoot to kill’ decisions by autonomous weapons, and it should attract readers in ethics, politics, law, and military and international affairs. Researchers in artificial intelligence and robotics will also find it useful.

chapter (27 pages)
chapter 1 (32 pages)
chapter 2 (4 pages)
chapter 3 (4 pages)
chapter 4 (17 pages): Solution design
chapter 5 (33 pages): Development – specific norm tests
chapter 6 (6 pages): Development – knowledge representation
chapter 7 (39 pages): Development – basic physical needs cases
chapter 9 (12 pages): Moral variation
chapter 10 (5 pages)
chapter 11 (12 pages)