ABSTRACT

Defining privacy is an elusive task (Hunt, 2015: 161–162). Daniel Solove, an authority on privacy law and theory, describes it as a concept in “disarray” (Solove, 2008: 1), one that cannot “be reduced to a singular essence” (Solove, 2013: 24). Instead, it is a cluster of related concepts best understood “pluralistically rather than as having a unitary common denominator” (Solove, 2008: 9). In the digital era, this conceptual pluralism, combined with the rise of information and computer technologies, has created significant privacy conundrums. As Luciano Floridi notes, such technologies have simultaneously augmented and eroded informational privacy, challenging both policy approaches that regulate activity producing undesirable consequences (the “consequences” approach) and those that regulate activity violating human rights or welfare (the “rights” approach) (Floridi, 2005: 193–194). The consequences approach has struggled to address the proposition that “a society devoid of any informational privacy may not be a better society,” while the rights approach confronts definitional issues of mixed public–private information and imprecision around foundational concepts such as ownership (Floridi, 2005: 194). Such questions reveal that scholars may not agree on what privacy is, even if they tend to know what it is about. Whether it implicates control over the collection, storage, use, or disclosure of information (or consent to such practices by others), or whether it concerns a “right to be let alone” in “free zones” (Solove, 2013: 50) away from others’ scrutiny, interference, intrusion, or access, privacy is power. With increasing calls for human-centered AI, that is, for aligning the technology with the flourishing of people and their interests, addressing AI’s impact on privacy remains pivotal.