ABSTRACT

In Chapter 6, we examine a little-understood aspect of autonomous vehicle technology: vision capture. While most attention relating to driverless cars has focused on the legal and safety concerns associated with removing a driver’s control, the long-term success of autonomous vehicles depends fundamentally on the development of visual capture and processing: that is, the processes involved in mapping, sensing and the real-time processing of visual data within dynamic urban environments. To become properly integrated into urban, suburban or rural transport systems, autonomous vehicles have to change course or accelerate in response to the way those spaces, and all their component objects, obstacles and subjects, can be seen or rendered visible, mapped, and transposed into real-time traversable data. In examining vision capture and processing by autonomous vehicles, we ask how an AV learns to see, and we explore the human and social challenges that visual processing technologies place in the path of autonomous vehicle developers. This chapter also considers the concept of technological affordances, as developed by James J. Gibson, in relation to the complex ecosystem in which driverless cars operate. Driverless cars stretch the sense in which a machine can be defined by its affordances for seeing and acting in its environment. With this comes a host of ethical and other questions regarding what and how such a vehicle sees, and why and when it acts or, crucially, when it does not.