ABSTRACT

Governments around the world use machine learning in automated decision-making systems for a broad range of functions. However, algorithmic bias in machine learning can result in automated decisions that produce disparate impact and may compromise Charter guarantees of substantive equality. This book seeks to answer the question: what standards should be applied to machine learning to mitigate disparate impact in government use of automated decision-making?

The regulatory landscape for automated decision-making, in Canada and around the world, is far from settled. Legislative and policy models are emerging, and the role of standards in supporting regulatory objectives is evolving. While acknowledging the contributions of leading standards development organizations, the authors argue that the rationale for standards must come from the law, and that implementing such standards would reduce future complaints from, and proactively protect the human rights of, those subject to automated decision-making. The book presents a proposed standards framework for automated decision-making and provides recommendations for its implementation in the context of the Government of Canada’s Directive on Automated Decision-Making.

As such, this book can assist public agencies around the world in developing and deploying automated decision-making systems equitably, and will also be of interest to businesses that use automated decision-making processes.