ABSTRACT

Chapter 4 considers the automation and augmentation of mobile vision capture. The chapter opens with a historical account of key stages in the development of camera automation and the incorporation of cameras into mobile phones. In the second part of the chapter, the emphasis shifts. Looking beyond the predominant focus on camera phone practices in studies of current mobile media, we turn to the autonomous production of information and metadata, including geo-locative metadata, that accompany and are left behind by camera phone and smartphone use, and to the relative autonomy of mobile imaging and the increasing value of visual data. The chapter then explores the transition toward a more embedded, everyday activation of augmented reality. Taking Google’s Project Tango as a focus, we argue that this project is emblematic of the current and likely future trajectory of personal mobile media toward sophisticated sensing and visual processing power. Through developments like Project Tango, mobiles provoke questions about how individual local environments become visible and socially available, and about the possibilities, and possible implications, of seeing and acting digitally beyond human senses.