ABSTRACT

Looking back over the past several hundred years, we notice that, following the invention and evolution of optical systems (telescopes, microscopes, eyeglasses, cameras, etc.), the optical image was formed on the human retina, photographic plate, or film. The birth of photodetectors can be dated back to 1873, when Smith discovered photoconductivity in selenium. Progress was slow until 1900, when Planck solved the blackbody emission puzzle by introducing the quantum hypothesis, and 1905, when Einstein explained the newly observed photoelectric effect in metals. Applications and new devices soon flourished, pushed by the dawning technology of vacuum tube sensors developed in the 1920s and 1930s, culminating in the advent of television (TV). Zworykin and Morton, the celebrated fathers of videonics, concluded on the last page of their legendary book Television (1939) that “when rockets will fly to the moon and to other celestial bodies, the first images we will see of them will be those taken by camera tubes, which will open to mankind new horizons.” Their foresight became a reality with the Apollo and Explorer missions. Photolithography enabled the fabrication of silicon monolithic imaging focal planes for the visible spectrum beginning in the early 1960s. Some of these early developments were intended for a picturephone; other efforts were aimed at TV cameras, satellite surveillance, and digital imaging. Infrared (IR) imaging has been vigorously pursued in parallel with visible imaging because of its utility in military applications. More recently (1997), the charge-coupled device (CCD) camera aboard the Hubble Space Telescope delivered a deep-space picture, the result of 10 days’ integration, featuring galaxies of the thirtieth magnitude, an unimaginable figure even for astronomers of our generation. Thus, photodetectors continue to open to mankind the most amazing new horizons.