ABSTRACT

Layered video coding, also called scalable coding, was originally proposed by the author to increase the robustness of video codecs against packet (cell) loss in ATM networks (1). At the time (the late 1980s), H.261 was under development, and it was clear that purely interframe-coded video produced by the proposed method was very vulnerable to loss of information. The idea was that the codec should generate two bit-streams: one carrying the most vital video information, the base layer, and the other carrying the residual information that enhances the quality of the base-layer image, the enhancement layer. In the event of network congestion, only the less important enhancement data should be discarded, and the space made available for base-layer data. This methodology influenced the design of the ATM cell structure, which provides two levels of priority for protecting base-layer data (2). This form of two-layer coding is now known as SNR scalability in the standard codecs, and a variety of new two-layer coding techniques now exist. They form the basic scalability functions of standard video codecs such as MPEG-2 for high-quality video internetworking, H.263+/MPEG-4 for mobile applications, and still-image coding with JPEG2000 (3-6).