ABSTRACT

The chapters collected in this volume highlight an array of emerging ethical issues in the field of artificial intelligence, largely drawing on existing schemas (legal, theoretical, and procedural) for guidance. Here, we identify common threads among these independently developed approaches, demonstrating how the social sciences are uniquely positioned to provide historical context and actionable precedent. We organize this review around five essential questions: In what ways are humans and AI systems rational? What forms of bias exist in humans and AI systems? How can one handle the indeterminacy of meaning among humans and in AI systems? How can computer agents be compared with human conversation partners? How can humans and AI systems promote beneficence and dignity for humans? We conclude that there are no perfect answers to any of these questions; rather, there are tradeoffs that must be scrutinized jointly by humans and quantitative computational systems in a hybrid evaluation framework.