ABSTRACT

This chapter presents a graphics processing unit (GPU)-based algorithm to perform real-time terrain subdivision and rendering of vast, detailed landscapes. Many existing terrain rendering and subdivision algorithms achieve excellent results and fast frame rates, but they can require preprocessing of large data sets, constant transfer of large data sets from the CPU to the GPU, limited viewing areas, or complex mesh-merging schemes to avoid the cracks and seams produced by various level-of-detail (LOD) subdivision techniques. Improvements to the subdivision algorithm and the resulting data amplification could better exploit hardware depth culling when rendering the resulting mesh: increasing the subdivision granularity per iteration, and appending subdivided regions to the output streams in increasing order of average depth, would allow the final rendering pass to benefit more from depth culling.