ABSTRACT

With recent advancements in real-time graphics, we have seen a vast improvement in pixel rendering quality and frame buffer resolution. However, these complex shading operations are becoming a bottleneck for current-generation consoles in terms of processing power and bandwidth. We build upon the observation that, under certain circumstances, shading results are temporally or spatially coherent. By exploiting that coherence, we can reuse pixels in time and space, which leads directly to performance gains. This chapter presents a simple yet powerful framework for spatiotemporal acceleration of visual data computation. We exploit spatial coherence for geometry-aware upsampling and filtering. Moreover, our framework combines this with motion-aware filtering over time for higher accuracy and smoothing where required. Both steps are adjusted dynamically, yielding a robust solution that copes well with high-frequency changes. Performance gains come from smaller per-frame sample counts, while temporal filtering allows near-static pixels to converge to maximum quality. Our method is production-proven and implemented in a multiplatform engine, allowing us to achieve production quality in many rendering effects that were thought to be impractical for consoles. An example comparison of screen-space ambient occlusion (SSAO) implementations is shown in Figure 7.1. Moreover, a case study is presented, giving insight into the framework's usage and performance in complex rendering stages such as screen-space ambient occlusion and shadowing. Furthermore, trade-offs between functionality, performance, and aesthetics are discussed, considering the limited memory and computational power of current-generation consoles.