ABSTRACT

Research to develop powerful autonomous systems, artificial general intelligence (AGI), and, in the future, possibly superintelligence1 is prominent in various academic and policy-making spheres.2 These technologies could transform mankind and the planet itself.3 Some believe AI to be the technological development humanity needs to cure terminal diseases and overcome biological and earthly limitations, among other benefits, and, perhaps more importantly, to resolve its historical anthropogenic contradictions, such as wars, injustice, and inequality.4 Some even argue that this would be the last invention we would ever need to produce, since once superintelligence is achieved, it will create technological developments that our biological brains cannot even imagine. Of course, an existential risk could arise if superintelligence is achieved and decides that humans are no longer of use or interest. There is also a strong likelihood that AI will be used for war.5 In the face of these prospects, calls for AI that is aligned with good human values and developed to benefit mankind are plentiful.6 Nick Bostrom has recommended AI development under what he calls the common good principle: “[s]uperintelligence should be developed only for the benefit of all humanity and the service of widely shared ethical ideals.”7 Proposals for precisely which ideals or guidelines should regulate AI production are often linked to its potential use as weaponry.