ABSTRACT

I(x,y) = i(x,y) r(x,y)   (1)

where 0 ≤ r(x,y) ≤ 1. Since I(x,y) is positive and bounded, we will have some bounding intensities I_min and I_max, and it is common to normalize I by shifting the origin to I_min. In this case, we often speak of I_min = 0 as black and I_max as white. (Often we further normalize by dividing I(x,y) by I_max, but in digital image processing we also find it convenient to have I_max = 2^N - 1 for suitable N.)
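As a rough illustration of Eq. (1) and of this normalization, the sketch below forms an intensity image from assumed illumination and reflectance arrays and rescales it to an N-bit range. It is a minimal example using NumPy; the array values and the helper name normalize_to_n_bits are illustrative assumptions, not part of the text.

    import numpy as np

    def normalize_to_n_bits(I, n_bits=8):
        """Shift the origin to I_min and scale so that I_max maps to 2**n_bits - 1."""
        I = I.astype(np.float64)
        I_min, I_max = I.min(), I.max()
        if I_max == I_min:                          # flat image: avoid division by zero
            return np.zeros_like(I, dtype=np.uint16)
        scaled = (I - I_min) / (I_max - I_min)      # now in [0, 1]: 0 is "black", 1 is "white"
        return np.round(scaled * (2**n_bits - 1)).astype(np.uint16)

    # Image formation per Eq. (1): intensity = illumination * reflectance
    illumination = np.full((4, 4), 500.0)               # assumed illumination field i(x, y)
    reflectance = np.random.uniform(0.05, 0.9, (4, 4))  # reflectance r(x, y), bounded by [0, 1]
    I = illumination * reflectance                      # observed intensity I(x, y)

    I_8bit = normalize_to_n_bits(I, n_bits=8)           # quantized so I_max maps to 2**8 - 1 = 255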

Use of the words "black" and "white" arises from our visual perception of scenes in the world. Many image processing techniques originate in the desire to overcome limitations imposed by the visual system or by whatever may have recorded or transmitted a representation of the scene. From certain common experiences, we can deduce that our visual perceptions cannot be determined on a pointwise basis solely from the intensities in the image. For example, the black ink on this page has a reflectance much less than 0.1, but the outdoor illumination on a bright day is more than 100 times that in a typical office. Thus, Eq. (1) shows that, outdoors, the black text on this page might have more than 10 times the intensity that the white part of the page has when viewed indoors. Nevertheless, after a suitable period of adaptation, we perceive both cases to be black text on a white page. Contemporary explanations of visual perception often rest on models arising from image processing theory [3]. This is not surprising, since, as we remarked above, the historical purpose of image processing techniques was to enhance perception. Conversely, in recent years, substantial attention has been given to modeling machine vision on the neural models underlying visual image processing [4,5].
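The ink-versus-paper argument can be checked directly from Eq. (1). The small calculation below uses assumed illustrative numbers (ink reflectance 0.05, paper reflectance 0.9, outdoor illumination 200 times the office level); these are consistent with, but not taken from, the figures quoted above.

    # Illustrative check of the ink-versus-paper example using Eq. (1).
    # All numeric values are assumptions chosen for illustration.
    office_illumination = 1.0                             # arbitrary indoor unit
    outdoor_illumination = 200.0 * office_illumination    # "more than 100 times" brighter

    r_ink = 0.05      # black ink: reflectance much less than 0.1
    r_paper = 0.9     # white paper: assumed reflectance

    ink_outdoors = outdoor_illumination * r_ink           # 10.0
    paper_indoors = office_illumination * r_paper         # 0.9

    # The "black" ink seen outdoors delivers roughly ten times the intensity of
    # the "white" paper seen indoors, yet we perceive ink as black and paper as
    # white in both settings.
    print(ink_outdoors / paper_indoors)                   # about 11 with these numbers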