A. Video Signal Processing Slides Review (1)
Video Compression:
• For an uncompressed digitized NTSC video signal with 720 pixels × 480 lines, 8 bits per pixel for each colour component, and 30 frames per second, transmission requires a bandwidth of 720 × 480 × 8 × 3 × 30 ≈ 249M bits per second (bps).
• To transmit this video through a 10M bps LAN (Local Area Network) with an average throughput of 500k bps, a compression ratio of about 498:1 is required for real-time transmission.
• Storing one hour of the uncompressed video signal on a hard disk requires about 112G bytes of disk space (these figures are checked in the sketch below).
• Owing to these high compression requirements, lossy compression techniques are much more suitable than lossless ones.
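The bandwidth, compression-ratio and storage figures above follow directly from the frame size and frame rate; here is a minimal Python sketch verifying the arithmetic (standard library only, nothing assumed beyond the numbers in the bullets):

# Sanity check of the bit-rate, compression-ratio and storage figures above.
width, height = 720, 480            # NTSC frame size: pixels x lines
bits_per_component = 8
components = 3                      # three colour components per pixel
fps = 30

bitrate_bps = width * height * bits_per_component * components * fps
print(f"Uncompressed bit rate: {bitrate_bps / 1e6:.1f} Mbps")                    # ~248.8 Mbps

lan_throughput_bps = 500e3          # average usable throughput of the 10M bps LAN
print(f"Required compression ratio: {bitrate_bps / lan_throughput_bps:.0f}:1")   # ~498:1

one_hour_bytes = bitrate_bps * 3600 / 8
print(f"Storage for one hour: {one_hour_bytes / 1e9:.0f} GB")                    # ~112 GB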
 
• The most basic approach to compressing a digital video signal is on a frame-by-frame basis. This approach achieves compression by exploiting the spatial, spectral and psychovisual redundancies within a single frame.
• The achievable compression ratio is limited, since this approach does not exploit the temporal redundancy between neighbouring frames.
• Thus, interframe techniques are widely adopted by various video compression standards, in addition to intraframe techniques, to reduce the temporal redundancy of successive frames.
• Among the various inter/intra-frame compression techniques, motion-compensated transform coding is the most popular; owing to its high compression efficiency, it is adopted by many video coding standards such as MPEG-1/2 and H.261/262/263.
 
Basics of Information Theory
• According to Shannon, the entropy of an information source S is defined as:
   H(S) = Σ_i p_i · log2(1/p_i)
where p_i is the probability that symbol S_i in S will occur.
• log2(1/p_i) indicates the amount of information contained in S_i, i.e., the number of bits needed to code S_i.
• For example, in an image with a uniform distribution of gray-level intensities, p_i = 1/256 for each of the 256 levels, so the number of bits needed to code each gray level is log2(256) = 8 bits, and the entropy of the image is 8.
• Entropy depends only on the probabilities, not on the actual symbol values. It is the average amount of information contained in the random variable S, and it is the smallest possible expected number of bits needed to encode a sequence of events; if a coding algorithm's average code length is greater than H(S), the extra bits are wasted (see the sketch below).
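As an illustration (my own sketch, not from the slides), a short Python function that computes H(S) from a list of symbol probabilities and confirms the uniform gray-level example above:

import math

def entropy(probabilities):
    # H(S) = sum over i of p_i * log2(1/p_i); zero-probability symbols contribute nothing.
    return sum(p * math.log2(1.0 / p) for p in probabilities if p > 0)

# Uniform distribution over 256 gray levels -> 8 bits per symbol.
uniform = [1.0 / 256] * 256
print(entropy(uniform))       # 8.0

# A skewed source carries less information per symbol on average.
skewed = [0.5, 0.25, 0.125, 0.125]
print(entropy(skewed))        # 1.75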
 
Shannon-Fano Algorithm
A top-down approach
1. Sort symbols according to their frequencies/probabilities, e.g., ABCDE.
2. Recursively divide the symbols into two parts, each with approximately the same total count, until each part contains a single symbol (a Python sketch follows below).
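To make the divide-and-conquer step concrete, here is a small Python sketch; the A–E counts are a hypothetical example of my own, not taken from the slide text:

def shannon_fano(symbols):
    # symbols: list of (symbol, count) pairs sorted by descending count.
    # Returns a dict mapping each symbol to its binary code string.
    codes = {}

    def split(group, prefix):
        if len(group) == 1:
            codes[group[0][0]] = prefix or "0"
            return
        total = sum(count for _, count in group)
        # Choose the split point that makes the two parts' counts most equal.
        best_diff, split_at, running = None, 1, 0
        for i in range(len(group) - 1):
            running += group[i][1]
            diff = abs(2 * running - total)      # |left total - right total|
            if best_diff is None or diff < best_diff:
                best_diff, split_at = diff, i + 1
        split(group[:split_at], prefix + "0")    # top part gets bit 0
        split(group[split_at:], prefix + "1")    # bottom part gets bit 1

    split(symbols, "")
    return codes

freqs = [("A", 15), ("B", 7), ("C", 6), ("D", 6), ("E", 5)]
print(shannon_fano(freqs))
# -> {'A': '00', 'B': '01', 'C': '10', 'D': '110', 'E': '111'}  (89 bits in total)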
Huffman Coding: Encoding Algorithm
1. Initialization: put all nodes in an OPEN list and keep it sorted at all times (e.g., ABCDE).
2. Repeat until the OPEN list has only one node left (a heap-based sketch in Python follows below):
   - From OPEN, pick the two nodes having the lowest frequencies/probabilities and create a parent node for them.
   - Assign the sum of the children's frequencies/probabilities to the parent node and insert it into OPEN.
   - Assign codes 0 and 1 to the two branches of the tree, and delete the children from OPEN.
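The OPEN list is naturally realised as a priority queue. The sketch below is one reasonable implementation of the steps above (an assumption of mine, not the slides' own code); it reuses the hypothetical A–E frequencies from the Shannon-Fano example:

import heapq
from itertools import count

def huffman_codes(freqs):
    # freqs: dict mapping symbol -> frequency. Returns {symbol: code_string}.
    tiebreak = count()          # unique counter so the heap never compares tree nodes
    heap = [(f, next(tiebreak), (sym, None, None)) for sym, f in freqs.items()]
    heapq.heapify(heap)

    while len(heap) > 1:
        # Pick the two nodes with the lowest frequencies ...
        f1, _, left = heapq.heappop(heap)
        f2, _, right = heapq.heappop(heap)
        # ... and insert their parent, carrying the summed frequency.
        heapq.heappush(heap, (f1 + f2, next(tiebreak), (None, left, right)))

    codes = {}
    def walk(node, prefix):
        sym, left, right = node
        if sym is not None:                  # leaf: record its code
            codes[sym] = prefix or "0"
        else:                                # internal node: 0 to one branch, 1 to the other
            walk(left, prefix + "0")
            walk(right, prefix + "1")

    walk(heap[0][2], "")
    return codes

print(huffman_codes({"A": 15, "B": 7, "C": 6, "D": 6, "E": 5}))
# A gets a 1-bit code and B..E get 3-bit codes: 87 bits in total for these counts,
# versus 89 bits for the Shannon-Fano codes above.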
Adaptive Huffman Coding
Encoder:
Initialize_model();
while ((c = getc(input)) != eof)
{
    encode(c, output);     /* code the symbol with the current model */
    update_model(c);       /* then update the model with that symbol */
}
Decoder:
Initialize_model();
while ((c = decode(input)) != eof)
{
    putc(c, output);       /* emit the decoded symbol                */
    update_model(c);       /* mirror the encoder's model update      */
}
The key is that both the encoder and the decoder use exactly the same Initialize_model() and update_model() routines, so their models stay in lock step.
Lempel-Ziv-Welch Algorithm
• Motivation: suppose we want to encode Webster's English dictionary, which contains about 159,000 entries. Why not just transmit each word as an 18-bit number (2^18 = 262,144 > 159,000)?
• Problems: (a) too many bits per word, (b) everyone needs the same dictionary, (c) it only works for English text.
• Solution: find a way to build the dictionary adaptively.
• The original methods are due to Ziv and Lempel in 1977 and 1978; Terry Welch improved the scheme in 1984 (the result is known as LZW compression).
 
LZW Compression Algorithm:
w = NIL;
while ( read a character k )
{
    if wk exists in the dictionary
        w = wk;
    else
        { output the code for w; add wk to the dictionary; w = k; }
}
output the code for w;        /* flush the final match */
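As a runnable counterpart to the pseudocode (my own sketch, assuming byte-string input and an initial dictionary holding the 256 single-byte strings):

def lzw_compress(data):
    # Minimal LZW encoder over bytes: grows the dictionary as patterns repeat.
    dictionary = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    output = []
    for byte in data:
        k = bytes([byte])
        wk = w + k
        if wk in dictionary:
            w = wk                          # extend the current match
        else:
            output.append(dictionary[w])    # output the code for w
            dictionary[wk] = next_code      # add wk to the dictionary
            next_code += 1
            w = k                           # restart matching from k
    if w:
        output.append(dictionary[w])        # flush the final match
    return output

print(lzw_compress(b"ABABABA"))   # [65, 66, 256, 258]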