ABSTRACT

The goal of this chapter is to examine how technologies—including computer programs and data analytics—have biases or preferences. The debate over whether technology does things or has preferences stems from a concern about who is responsible for outcomes. The arguments traditionally fall into two camps: those who focus on the technology as the actor that “does” things and is at fault (technological determinists), and those who focus on the users of that technology as determining the outcome (social determinists). The readings chosen take a different approach by acknowledging the value-laden biases of technology—including data analytics—while preserving the ability of humans to control the design, development, and deployment of technology. Readings included are from Langdon Winner, Batya Friedman and Helen Nissenbaum, and Gabrielle Johnson. The cases include the vaccine allocation algorithm from Stanford Hospital and a health-care allocation algorithm.