The implementation of moral decision-making abilities in artificial intelligence (AI) is a natural and necessary extension of the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining the control architectures for such systems. Architectures for morally intelligent agents fall into two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of systems that aim at goals or standards which may or may not be specified in explicitly theoretical terms. In this paper we wish to provide some direction for continued research by outlining the value and the limitations inherent in each of these approaches.