ABSTRACT

Owing to the growing ubiquity of digital image acquisition and display, several factors must be considered when developing systems to meet future color image processing needs, including improved quality, increased throughput, and greater cost-effectiveness [1], [2], [3]. In consumer still-camera and video applications, color images are typically obtained via a spatial subsampling procedure implemented as a color filter array (CFA), a physical construction whereby only a single component of the color space is measured at each pixel location [4], [5], [6], [7]. Substantial work in both industry and academia has been dedicated to postprocessing this acquired raw image data as part of the so-called image processing pipeline, including in particular the canonical demosaicking task of reconstructing a full color image from the spatially subsampled and incomplete data acquired using a CFA [8], [9], [10], [11], [12], [13]. However, as we detail in this chapter, the inherent shortcomings of contemporary CFA designs mean that subsequent processing steps often yield diminishing returns in terms of image quality. For example, though distortion may be masked to some extent by motion blur and compression, the loss of image quality resulting from all but the most computationally expensive state-of-the-art methods is unambiguously apparent to the practiced eye. Refer to Chapters 1 and 3 for additional information on single-sensor imaging fundamentals.
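To make the CFA sampling model concrete, the following is a minimal sketch of how a full-color image is reduced to single-component-per-pixel measurements. It assumes an RGGB Bayer arrangement purely for illustration; the abstract does not specify a particular CFA pattern, and the function name `bayer_mosaic` is our own.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Simulate RGGB Bayer CFA sampling: keep one color component per pixel.

    rgb: (H, W, 3) array with channels ordered R, G, B.
    Returns a (H, W) mosaic of the measured samples and a (H, W) map
    recording which channel (0=R, 1=G, 2=B) was measured at each pixel.
    """
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=rgb.dtype)
    channel = np.zeros((h, w), dtype=np.uint8)
    # RGGB pattern: R at (even row, even col), G at the two mixed-parity
    # positions, B at (odd row, odd col).
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]; channel[0::2, 0::2] = 0  # R
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]; channel[0::2, 1::2] = 1  # G
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]; channel[1::2, 0::2] = 1  # G
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]; channel[1::2, 1::2] = 2  # B
    return mosaic, channel
```

Demosaicking is the inverse problem: estimating the two missing components at every pixel from this mosaic, which is why the CFA pattern itself constrains the quality achievable downstream.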