ABSTRACT

This chapter defines the book as a contribution to machine ethics, robot ethics and international relations. Machine ethics is the project of implementing moral decision-making in Turing machines. It asks: can robots and AIs (automata) make correct moral decisions and, if so, how? Robot ethics asks: ought particular decisions to be delegated to machines at all? In international relations, a key robot ethics issue is the debate over autonomous weapons. To what extent, if any, can decisions to target enemy humans be reliably delegated to machines? Even if machines can make such decisions reliably, should delegation be permitted, or should it be comprehensively and pre-emptively banned? The introduction also describes key differences between traditional human ethics and machine ethics, along with the aim and scope of the book. The aim is to define moral competence in the security functions of social robots. The scope is a range of test cases involving automata making decisions about human security. Some cases involve autonomous weapons; others centre on civilian moral problems taken ‘off the shelf’ from the philosophical literature, fiction and everyday life.