ABSTRACT

Human dignity as a universal moral value has not previously been explored as an important indicator of whether AI is ethically designed, developed, and deployed. Drawing on law and moral philosophy, this chapter develops a legal-philosophical conception of human dignity that can form the basis of policy formulation and laws governing AI innovation and its impact on societies. Part 1 sets out concerns about AI innovation and its potential adverse impact on human dignity. Part 2 considers how diverse cultures, international legal instruments, and constitutional laws represent human dignity as innate human worthiness that is at once a universal moral value, a right, and a duty. Part 3 develops two distinct dimensions of human dignity that can be concretized in policy and law relating to AI: (1) recognition of the status of human beings as agents with autonomy and the rational capacity to exercise reasoning, judgement, and choice; and (2) respectful treatment of human agents so that their autonomy and rational capacity are not diminished or lost through interaction with or use of the technology.