ABSTRACT

For Plato, the virtue of a moral agent is equally a function of two kinds of knowledge: the agent's knowledge of his own nature (goal, function, good) and his knowledge of his social and physical environment. As robots grow more sophisticated, they must be given greater independence from human supervision and control. But as robots gain independence and self-control, the danger that we create Frankensteinian monsters also grows. Intuitionists might still be averse to calling such robots moral, and might refuse to treat them as moral agents, since it may seem intuitively clear to them that making moral statements about robots is simply absurd. Whether intuitionists could possibly intuit it good to construct intuitively moral robots is impossible to argue, since moral intuitionism undercuts all argument; the only problem that remains, then, is whether to develop moral robots in Plato's sense, and how to program them.