We propose that any robots that collaborate with, look after, or help humans—in short, social robots—must have moral competence. But what does moral competence consist of? We offer a framework for moral competence that aims to comprehensively capture the capacities that make humans morally competent and that are therefore candidates for a morally competent robot. We posit that human moral competence consists of four broad components: (1) a system of norms and the language and concepts needed to communicate about these norms; (2) moral cognition and affect; (3) moral decision making and action; and (4) moral communication. We sketch what we know and don’t know about these four elements of moral competence in humans and, for each component, ask how we could equip an artificial agent with these capacities.