Image fusion is the process of combining information from two or more sensed or acquired images into a single composite image that is more informative and better suited to visual perception and subsequent computer processing. The objective is to reduce uncertainty, minimize redundancy in the output, and maximize the information relevant to a given application or task. For example, if a visible-light image is fused with a thermal image, a target that is warmer or colder than its background can be easily identified, even when its color and spatial details are similar to those of its background. In image fusion, the image data appear as arrays of numbers representing brightness (intensity), color, temperature, distance, and other scene properties. These data can be 2D or 3D; 3D data are essentially volumetric images and/or video sequences in the form of spatial-temporal volumes. Common approaches to image fusion include (1) hierarchical image decomposition, (2) neural networks for fusing visible and IR images, (3) laser detection and ranging (LADAR) and passive IR images for target segmentation, (4) discrete wavelet transforms (DWTs), (5) principal component analysis (PCA), and (6) principal components substitution (PCS).
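To make the PCA-based approach concrete, the following is a minimal sketch of pixel-level PCA fusion for two registered grayscale images. The function name `pca_fusion` and the toy "visible"/"thermal" inputs are illustrative assumptions, not from the original text; the weighting scheme (eigenvector of the largest eigenvalue of the inter-image covariance, normalized to sum to 1) is one common variant of PCA fusion.

```python
import numpy as np

def pca_fusion(img1, img2):
    """Fuse two same-size grayscale images using PCA-derived weights.

    The two images are treated as two variables; the eigenvector of the
    inter-image covariance matrix with the largest eigenvalue supplies
    the mixing weights for a weighted average.
    """
    # Flatten each image into a row to form a 2 x N data matrix.
    data = np.stack([img1.ravel(), img2.ravel()], axis=0).astype(float)
    # 2 x 2 covariance between the two source images.
    cov = np.cov(data)
    # Principal eigenvector (largest eigenvalue) gives the weights.
    eigvals, eigvecs = np.linalg.eigh(cov)
    v = eigvecs[:, np.argmax(eigvals)]
    w = np.abs(v) / np.abs(v).sum()  # normalize weights to sum to 1
    return w[0] * img1 + w[1] * img2

# Toy example: a "visible" horizontal gradient and a "thermal" hot spot.
visible = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
thermal = np.zeros((8, 8))
thermal[3:5, 3:5] = 1.0  # warm target invisible in the gradient image
fused = pca_fusion(visible, thermal)
print(fused.shape)  # (8, 8)
```

Because the weights are nonnegative and sum to 1, each fused pixel is a convex combination of the corresponding source pixels, so the hot-spot target remains visible against the gradient background.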