ABSTRACT

This chapter explores some foundational issues in normative ethics in relation to artificial intelligence security. It is argued that issues arising from artificial intelligence security provide reasons for favoring consequentialism over deontology. To this end, utilitarianism and Kantianism are contrasted, along with two new moral theories: ukantianism and kutilitarianism. Several thought experiments, including the famous “trolley problem,” are used to illustrate these theories. The chapter outlines a way to resolve conflicting judgments about whether it is morally permissible to sacrifice one person in order to save a greater number of persons. The chapter also seeks to show the relevance of artificial intelligence research to ethicists.