ABSTRACT

Exploring potential scenarios of artificial intelligence regulation that prevent automated reality from harming individual human rights or social values, this book reviews current debates surrounding AI regulation in the context of emerging risks and accountability concerns. Considering various regulatory methodologies, it focuses primarily on the EU's regulation in light of the comprehensive policy-making process taking place at the supranational level.

Taking an ethics-based and human-centric approach towards artificial intelligence as the bedrock of future laws in this field, it analyses the relationship between the fundamental rights affected by the development of artificial intelligence and the ethical standards governing it. It contains a detailed and critical analysis of the EU's Ethics Guidelines for Trustworthy AI, pointing to their practical applicability by interested parties. Seeking to identify the most transparent and efficient regulatory tools that can assure social trust in AI technologies, the book provides an overview of horizontal and sectoral regulatory approaches, as well as legally binding measures stemming from industry self-regulation and internal policies.