ABSTRACT

To explain how our method differs (see Figure 2.3), let us first look at the computation of standard SSAO [Shanmugam and Arikan 07]: for each pixel in the frame buffer, we inspect a number of neighboring pixels and read their depth values. From each depth value we reconstruct the corresponding three-dimensional position and place a small sphere with a user-defined radius there. An occlusion value is then computed for each sphere, depending on the solid angle the sphere subtends with respect to the receiver point. These occlusion values are accumulated into a single ambient occlusion value. Finally, the unoccluded illumination from all directions (e.g., from a set of point lights extracted from an environment map) is computed with the standard GPU pipeline, and the AO value is multiplied by this unoccluded illumination.
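
The sketch below illustrates this per-pixel accumulation on the CPU side, assuming the occluder positions have already been reconstructed from the neighboring depth samples. All names (Vec3, ambientOcclusion, sphereSolidAngle, ...), the cosine weighting against the receiver normal, and the normalization by the hemisphere's solid angle are illustrative assumptions, not the chapter's actual implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static float length(Vec3 v)      { return std::sqrt(dot(v, v)); }

static const float kPi = 3.14159265f;

// Solid angle subtended by a sphere of radius r seen from distance d:
// 2*pi*(1 - sqrt(1 - (r/d)^2)); d is clamped so the receiver is never
// treated as lying inside the occluder sphere.
static float sphereSolidAngle(float r, float d) {
    d = std::max(d, r);
    float s = r / d;
    return 2.0f * kPi * (1.0f - std::sqrt(1.0f - s * s));
}

// Accumulate an ambient occlusion factor at a receiver point from small
// spheres placed at the positions reconstructed from neighboring depths.
// The cosine weighting and the normalization are assumptions for this sketch.
float ambientOcclusion(Vec3 receiverPos, Vec3 receiverNormal,
                       const std::vector<Vec3>& occluderPositions,
                       float sphereRadius) {
    float occlusion = 0.0f;
    for (const Vec3& p : occluderPositions) {
        Vec3 toOccluder = sub(p, receiverPos);
        float d = length(toOccluder);
        if (d < 1e-6f) continue;  // skip the receiver itself
        Vec3 dir = {toOccluder.x / d, toOccluder.y / d, toOccluder.z / d};
        float cosTheta = std::max(dot(receiverNormal, dir), 0.0f);
        occlusion += sphereSolidAngle(sphereRadius, d) * cosTheta;
    }
    // Normalize by the solid angle of the hemisphere and clamp to [0, 1].
    float ao = 1.0f - std::min(occlusion / (2.0f * kPi), 1.0f);
    return ao;  // the unoccluded illumination is multiplied by this factor
}
```

In a real renderer this loop runs in a fragment shader, with the occluder positions reconstructed on the fly from the depth buffer samples around the current pixel.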