ABSTRACT

This chapter conceptualizes algorithms used in decision-making as value-laden rather than neutral in order to discuss their ethical implications. The omnipresent use of "intelligent" algorithms in the public sector, with emphasis on AI technologies, requires greater ethical scrutiny and a relatively prudent approach to any broad-scale implementation of AI-based technologies. The chapter briefly introduces key definitions of AI and Machine Learning to guide the reader and provides an overview of machine ethics as a normative framework for the ethics of algorithms. Such a normative framework should be built on six key areas of ethical concern about algorithms: inconclusive evidence, inscrutable evidence, misguided evidence, unfair outcomes, transformative effects, and traceability. Algorithm-driven decision-making leaves data subjects unable to define privacy norms that govern all types of data generically, because the value of data is established only through processing.