Implementation Guidelines

As stated before, using the NPC's vision to adapt its trajectory involves a rendering step and a processing step, as shown in Figure 18.2. In the rendering step, the camera is placed at the NPC's eye location, and the surrounding environment is rendered to a texture from that point of view. However, instead of rendering the visual aspects of the obstacles, such as color, texture, and surface normal, kinematic properties are associated with each pixel, such as relative position and relative velocity. The vision-based technique employs programmable shaders to efficiently compute these motion variables for each obstacle from the NPC's perspective. Therefore, this process aims

Figure 18.2. Algorithm overview for VBLPP techniques.
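To make the rendering step concrete, the following is a minimal CPU-side sketch of the idea: a buffer whose "pixels" store kinematic data (relative position and a closing-speed value) instead of color. The function name, the four-channel layout, and the toy one-pixel-per-obstacle rasterization are illustrative assumptions, not the chapter's actual shader implementation, which would run on the GPU.

```python
import numpy as np

def render_kinematic_buffer(npc_pos, npc_vel, obstacles, width=4, height=4):
    """Software stand-in for the shader pass (hypothetical layout): each
    'pixel' stores kinematic data about the obstacle visible there.
    Channels: (rel_x, rel_y, rel_z, closing_speed)."""
    buf = np.zeros((height, width, 4), dtype=np.float32)
    # Toy rasterization: each obstacle covers the single pixel given by
    # its precomputed 'pixel' field (a real shader projects geometry).
    for ob in obstacles:
        px, py = ob["pixel"]
        rel_pos = np.asarray(ob["pos"], np.float32) - npc_pos
        rel_vel = np.asarray(ob["vel"], np.float32) - npc_vel
        dist = float(np.linalg.norm(rel_pos))
        # Closing speed: how fast the obstacle approaches along the line
        # of sight (positive when the obstacle is getting closer).
        closing = -float(np.dot(rel_vel, rel_pos / dist)) if dist > 0 else 0.0
        buf[py, px] = [rel_pos[0], rel_pos[1], rel_pos[2], closing]
    return buf
```

The processing step would then read this buffer back and steer away from pixels with small distance and large closing speed, rather than inspecting color values.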