ABSTRACT

LiDAR sensors are becoming increasingly widespread in applications requiring 3D imaging or proximity sensing, such as autonomous navigation or machine vision. These optical sensors typically use one of two time-of-flight (ToF) approaches, indirect time-of-flight (iToF) or direct time-of-flight (dToF), both of which emit a wave or light pulse to illuminate a scene of interest and time the returning, back-scattered photons to estimate distance, as in Figure 1.1. As the speed of light in air is constant, the time taken for the emitted photons to return is directly proportional to the distance of the object from the sensor. A sensor is made up of many pixels, each of which can independently measure the ToF of objects within the field-of-view (FoV). The difference between iToF and dToF lies in the methodology of time measurement. iToF-based sensors do not directly measure the time between the emitted and received pulses; instead, they infer the delay by integrating the received signal during specific time windows synchronized with the emitted signal.
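
The sketch below illustrates, under simplifying assumptions, the two distance estimates described above: dToF converts a directly measured round-trip time to distance via d = c·Δt/2, while a simple pulsed two-window iToF model infers the delay from the ratio of charge integrated in two windows synchronized with the emitted pulse. The function names, the two-window model, and the example values are illustrative assumptions, not parameters of any specific sensor, and effects such as ambient light and noise are ignored.

```python
# Illustrative sketch of ToF distance estimation (simplified; ambient light
# and noise are ignored). Values below are assumptions for the example only.

C = 299_792_458.0  # speed of light, approximating air as vacuum (m/s)


def dtof_distance(round_trip_time_s: float) -> float:
    """dToF: distance from the directly measured round-trip time of a pulse."""
    return C * round_trip_time_s / 2.0


def itof_distance(q1: float, q2: float, pulse_width_s: float) -> float:
    """Pulsed two-window iToF (hypothetical simplified model):
    q1 -- charge integrated in the window aligned with the emitted pulse
    q2 -- charge integrated in the window immediately following it
    The fraction of the return falling into the delayed window encodes the delay.
    """
    delay_s = pulse_width_s * q2 / (q1 + q2)
    return C * delay_s / 2.0


if __name__ == "__main__":
    # ~66.7 ns round trip corresponds to an object roughly 10 m away.
    print(f"dToF: {dtof_distance(66.7e-9):.2f} m")
    # With a 100 ns pulse and a 60/40 charge split, the inferred delay is 40 ns (~6 m).
    print(f"iToF: {itof_distance(q1=600.0, q2=400.0, pulse_width_s=100e-9):.2f} m")
```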