ABSTRACT

Historical case studies show how the public assigns liability when automated systems fail. In a recent paper, Elish examined the aftermath of one such tragedy and identified an important pattern in how the public came to understand what happened. Media portrayals, in particular, perpetuated the belief that the sophisticated autopilot system bore no fault in the matter, despite a significant body of human-factors research demonstrating that humans are poorly suited to taking over control in an emergency at the last minute with a level head and a clear mind. Elish argues that regulators should also have more nuanced conversations about what kind of framework would distribute liability fairly. “At stake in the concept of the moral crumple zone is not only how accountability may be distributed in any robotic or autonomous system,” she writes, “but also how the value and potential of humans may be allowed to develop in the context of human-machine teams.”