ABSTRACT

This chapter questions the appropriateness of using language of certainty when discussing the safety of general artificial intelligence. Verification and validation of agentic behavior have been suggested as important research priorities. The chapter assembles in one place some classic and novel arguments, contextualized for agentic behavior, that these methods have fundamental limitations. It begins by establishing a very general formalism to characterize agentic behavior and to describe standards of acceptable behavior. It shows that determining whether an agent meets any particular standard is not computable. It discusses the extent of the burden associated with verification by manual proof and by automated behavioral governance. It shows that ensuring decidability of the behavioral standard itself requires further limiting the capabilities of the agent. It then demonstrates that if our concerns relate to outcomes in the physical world, attempts at validation are futile. Finally, it shows that layered architectures aimed at making these challenges tractable mistakenly equate intentions with actions or outcomes, and thereby fail to provide any guarantees. The chapter concludes with a discussion of why language of certainty should be eradicated from the conversation about the safety of general artificial intelligence.