ABSTRACT

This chapter asks why data scientists persistently produce methods capable of harmful applications, despite abundant evidence of their destructive effects, and answers by examining the thinking of AI scientists. Following a brief definitional introduction to AI intended for readers outside the field – covering the context of AI's development in the United States, its goals, and its assumptions – the chapter identifies the regime of truths to which AI scientists subscribe. These truths – the assumptions underlying sophisticated methods – help explain problematic results insofar as they ignore issues of context that mediate intended effects. This section draws on Thomas Kuhn's influential scholarship, which argued that scientific paradigms – modes of knowing the world – evolve to a point of normalization at which assumptions become so ingrained that they are taken for granted and no longer receive scrutiny. I suggest we have arrived at this point. The following section, which draws on Kuhn's later scholarship, examines the governance of research mainly in US universities and in the private sector, and finds that research across all disciplines, including AI, is protected from external criticism, that is, criticism emanating from outside a discipline. This point helps explain the dogged persistence of wide-ranging problematic research applications.