ABSTRACT

I argue that there is a gap between so-called “ethical reasoners” and “ethical decision-makers” that can’t be bridged simply by giving an ethical reasoner decision-making abilities. Ethical reasoning qua reasoning is distinguished from other sorts of reasoning mainly by being incredibly difficult, because it involves such thorny problems as analogical reasoning, deciding the applicability of imprecise precepts, and resolving conflicts among them. The ability to make ethical decisions, however, requires knowing what an ethical conflict is, i.e., a clash between self-interest and what ethics prescribes. I construct a fanciful scenario in which a program could find itself in what seems like such a conflict, but argue that in any such situation the program’s “predicament” would not count as a genuine ethical conflict. Hence, for now, it is unclear how even resolving all of the difficult problems surrounding ethical reasoning would yield a theory of “machine ethics.”