ABSTRACT

Once artificial intelligence (AI) development succeeds in producing a superintelligent machine, that is, one that clearly outperforms humans across the whole range of capabilities we associate with human intelligence, what will happen? From the perspective of humanity, it seems plausible both that things could go very well and that they could go very badly. Recent years have witnessed a shift of emphasis in this discourse, away from the sometimes evangelical-sounding focus on the potential for human flourishing that such a breakthrough would entail (Kurzweil, 2005), and toward increased attention to catastrophic risks (Yudkowsky, 2008; Bostrom, 2014).