ABSTRACT

If, at some point in the future, each artificial intelligence (AI) development project carries some amount of existential risk from fast takeoff, our chances of survival will decay exponentially until the period of risk is ended. In this chapter, I review strategies for ending the risk period. Effective strategies will need to be resilient to government involvement (nationalized projects, regulation, or restriction), will need to account for the additional difficulty of solving some form of the control problem beyond the mere development of AI, and will need to deal with the possibility that many projects will be unable or unwilling to make the investments required to robustly solve the control problem. Strategies to end the risk period could take advantage of the capabilities provided by powerful AI, or of the incentives and abilities governments will have to mitigate fast takeoff risk. Based on these considerations, I find that four classes of strategy could plausibly end the period of risk: international coordination, sovereign AI, AI-empowered project, or another decisive technological advantage.