ABSTRACT

We usually think of physical space as being embedded in a three-dimensional Euclidean space, where measurements of lengths and angles make sense. It turns out that for artificial systems, such as robots, this is not a mandatory viewpoint, and it is sometimes sufficient to think of physical space as being embedded in an affine or even a projective space. The question then arises of how to relate these geometric models to image measurements and to the geometric properties of sets of cameras.

We first consider the world to be modelled as a projective space and determine how projective invariant information can be recovered from the images and used in applications. Next we consider the world to be an affine space and determine how affine invariant information can be recovered and used in the same way. Finally, we move not to the Euclidean layer, in which most work has been done since the early days of the field, but to an intermediate layer between the affine and Euclidean ones. For each of the three layers we describe various calibration procedures, ranging from fully automatic ones to procedures that use some a priori information. Calibration becomes increasingly difficult from the projective to the Euclidean layer, while the information that can be recovered from the images becomes correspondingly more specific and detailed. The two main applications we consider are obstacle detection and the navigation of a robot vehicle.
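As a concrete illustration of the kind of projective invariant information mentioned above (an illustration, not a procedure taken from the text): the cross-ratio of four collinear image points is preserved by any homography, so it can be measured in uncalibrated images. The following minimal sketch, assuming NumPy and exactly collinear input points (the points and the homography are hypothetical), verifies this numerically:

```python
import numpy as np

def cross_ratio(a, b, c, d):
    """Cross-ratio (AC/BC)/(AD/BD) of four collinear 2-D points,
    computed from signed positions along their common line."""
    u = (b - a) / np.linalg.norm(b - a)          # unit direction of the line
    ta, tb, tc, td = (np.dot(p - a, u) for p in (a, b, c, d))
    return ((tc - ta) / (tc - tb)) / ((td - ta) / (td - tb))

def apply_homography(H, p):
    """Map a 2-D point through a 3x3 homography (projective transform)."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

# Four collinear points on the line y = 2x + 1 (hypothetical data).
pts = [np.array([x, 2.0 * x + 1.0]) for x in (0.0, 1.0, 3.0, 7.0)]

# An arbitrary homography, standing in for a change of camera viewpoint.
H = np.array([[1.1,  0.2, 3.0],
              [-0.1, 0.9, 1.0],
              [1e-3, 2e-3, 1.0]])
mapped = [apply_homography(H, p) for p in pts]

print(cross_ratio(*pts))     # the invariant before the transform
print(cross_ratio(*mapped))  # ... and after: the two values agree
```

The richer layers add further invariants on top of this one: the affine layer preserves ratios of lengths along parallel lines, and the Euclidean layer additionally preserves angles and absolute lengths, which is why recovering them requires progressively more calibration information.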