ABSTRACT

This chapter discusses tilting LIDAR, stereo vision, and depth cameras. The LIDAR points, which originate in polar coordinates with respect to a moving slice plane, are converted into Cartesian coordinates in the world frame and represented as a point cloud. By setting the persistence value in rviz, one can display the points acquired over many LIDAR scans. The resulting point-cloud view appears similar to the stereo-camera display in rviz. With the multicamera simulation running in Gazebo and the stereo_image_proc node launched, one can bring in models for the stereo system to view. Although a video camera does not provide depth information, depth can be inferred via triangulation from known calibration properties and presumed correspondence of pixels across multiple cameras. A depth camera instead infers depth from the distortion of a projected infrared speckle pattern. This yields a surprisingly good depth image, although the approach has limitations.
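
To make the polar-to-Cartesian conversion concrete, the following C++ sketch converts a single range reading from a tilting LIDAR into a sensor-frame Cartesian point. The frame conventions, function names, and angle definitions here are illustrative assumptions, not the chapter's exact implementation; a further fixed transform (omitted) would carry the point into the world frame.

    // Minimal sketch: one tilting-LIDAR return to a Cartesian point.
    // Assumptions (illustrative): beam angle theta lies in the momentary
    // slice plane, and the slice plane is pitched about the sensor
    // y-axis by tilt angle phi.
    #include <cmath>
    #include <cstdio>

    struct Point3 { double x, y, z; };

    Point3 lidarPolarToCartesian(double range, double theta, double phi) {
        // Point in the un-tilted slice plane (tilt phi = 0):
        double xs = range * std::cos(theta);
        double ys = range * std::sin(theta);
        // Rotate about the sensor y-axis by the tilt angle phi:
        Point3 p;
        p.x =  std::cos(phi) * xs;
        p.y =  ys;
        p.z = -std::sin(phi) * xs;
        return p;
    }

    int main() {
        // Example: 2 m return at a 30 deg beam angle, 10 deg tilt.
        double deg = M_PI / 180.0;
        Point3 p = lidarPolarToCartesian(2.0, 30.0 * deg, 10.0 * deg);
        std::printf("x=%.3f y=%.3f z=%.3f\n", p.x, p.y, p.z);
        return 0;
    }

Repeating this conversion over all beam angles as the tilt angle sweeps, and transforming each point into the world frame, accumulates the point cloud described above.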
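
The stereo triangulation mentioned above reduces, for an ideal rectified camera pair, to the relation Z = f*b/d, where f is the focal length in pixels, b is the baseline in meters, and d is the disparity in pixels of a matched pixel pair. The sketch below uses made-up illustrative values, not parameters from the chapter's simulation.

    // Minimal sketch: depth from disparity for an ideal rectified stereo
    // pair, Z = f * b / d (f in pixels, b in meters, d in pixels).
    #include <cstdio>

    double depthFromDisparity(double focal_px, double baseline_m,
                              double disparity_px) {
        if (disparity_px <= 0.0) return -1.0;  // no valid correspondence
        return focal_px * baseline_m / disparity_px;
    }

    int main() {
        // Illustrative: 500 px focal length, 7 cm baseline, 10 px disparity.
        double z = depthFromDisparity(500.0, 0.07, 10.0);
        std::printf("depth = %.2f m\n", z);  // prints: depth = 3.50 m
        return 0;
    }

Note that smaller disparities map to larger depths, which is why stereo depth resolution degrades with range.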