The data or information fusion process can take place at different levels, such as the signal, pixel, feature, and symbolic levels. Signal-level processing and data fusion were discussed in Part I, and Part II covered some aspects of decision fusion based on fuzzy logic. In pixel-level fusion, a composite image is built from several input images based on their respective pixels (picture elements). One application is the fusion of forward-looking infrared (FLIR) and low-light television (LLTV) images obtained by an airborne sensor system to help the pilot navigate in poor weather conditions and darkness.

In pixel-level fusion, the basic requirements on the fusion result are as follows [7-10]: (1) the data fusion (DF) process should carry over all the useful and relevant information from the input images to the composite image, to the greatest extent possible; (2) the DF scheme should not introduce any inconsistencies not originally present in the input images, which would distract the observer or subsequent processing stages; and (3) the DF process should be shift and rotation invariant, that is, the fusion result should not depend on the location or orientation of an object in the input images. Additionally, there should be (1) temporal stability, that is, gray-level changes in the fused image sequence should be caused only by gray-level changes in the input sequences, and not by the fusion process itself; and (2) temporal consistency, that is, gray-level changes occurring in the input image sequences should be present in the fused image sequence without any delay or contrast change.
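The pixel-level scheme described above can be illustrated with a minimal sketch. The function below is not taken from the text; it is an illustrative per-pixel selection rule (keep whichever input pixel deviates more from its image mean, a crude salience measure) applied to two registered grayscale images of the same size, represented here as plain lists of rows for simplicity.

```python
def fuse_max_select(img_a, img_b):
    """Pixel-level fusion of two registered grayscale images.

    For each pixel position, keep the input pixel that deviates more
    from its own image mean -- a crude stand-in for a salience measure.
    Images are lists of rows of intensities and must have equal size.
    """
    n_a = len(img_a) * len(img_a[0])
    n_b = len(img_b) * len(img_b[0])
    mean_a = sum(sum(row) for row in img_a) / n_a
    mean_b = sum(sum(row) for row in img_b) / n_b
    fused = []
    for row_a, row_b in zip(img_a, img_b):
        # Per-pixel selection: every fused pixel comes from one of the
        # inputs, so no gray levels absent from the inputs are introduced.
        fused.append([a if abs(a - mean_a) >= abs(b - mean_b) else b
                      for a, b in zip(row_a, row_b)])
    return fused
```

Because each fused pixel is copied from one of the inputs, the sketch respects requirement (2) above: no new gray levels are introduced. A selection rule applied independently at every pixel is also shift invariant by construction, though a realistic FLIR/LLTV fusion system would use a more elaborate salience measure (e.g., local contrast over a neighborhood).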