ABSTRACT

Pushing the ethical evaluation of the design and development of AI and data analytics to outsiders has implications for how corporations critically evaluate their technology: who can ask questions, what questions can be asked, and how any critical, ethical evaluation is performed. These are precisely the issues now being debated in corporations and in academic research on computer science and data analytics. Whether corporations should be responsible for critically evaluating their own technology remains unsettled, and some in computer science debate whether researchers should be required to include ethics statements in their work. Corporations have pushed back against being responsible for the moral implications of their data analytics programs by limiting the types of research conducted within the organization or by outside researchers. We tackle this hard-to-square position, in which a corporation neither critically examines its own work nor allows others access to examine its products, through a particular case concerning Google Research and research on AI ethics, together with readings by Richard Rudner and Kirsten Martin.