Spatial redundancy arises from the correlation between neighboring pixel values along the horizontal and vertical spatial dimensions within the same picture or frame of video (also known as intra-picture correlation). Neighboring pixels in a video frame are often very similar to each other, especially after the frame is separated into its luma and chroma components. A frame can be divided into smaller blocks of pixels to take advantage of such pixel correlations, as the correlation is usually high within a block. This implies that, in a frequency-domain representation of the video frame, most of the energy is often concentrated in the low-frequency region, while high-frequency content such as sharp edges is relatively rare.
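The energy compaction described above can be illustrated with a 2-D DCT of a smooth block. This is a sketch using NumPy; the 8×8 block size and the 4×4 "low-frequency corner" are illustrative choices, not requirements of any particular standard:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix of size n x n."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    c = np.cos(np.pi * (2 * m + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    c[0, :] = np.sqrt(1.0 / n)  # DC row uses a different scale factor
    return c

# A smooth 8x8 block: neighboring pixel values are highly correlated.
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")
block = 100.0 + 4.0 * i + 2.0 * j

C = dct_matrix(8)
coeffs = C @ block @ C.T  # 2-D DCT of the block

energy = coeffs ** 2
low_freq_share = energy[:4, :4].sum() / energy.sum()
print(f"energy in low-frequency 4x4 corner: {low_freq_share:.4f}")
```

For such a smooth block, essentially all of the signal energy lands in the few low-frequency coefficients, which is what makes the subsequent quantization and run-length coding effective.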

The redundancy present in a frame depends on several parameters, such as the sampling rate, the number of quantization levels, and the presence of source or sensor noise.

## Temporal Redundancy

Temporal redundancy is due to the correlation between different pictures or frames in a video (also known as inter-picture correlation). There is a significant amount of temporal redundancy present in digital videos. A video is typically shown at a frame rate of more than 15 frames per second (fps) so that a human observer perceives smooth, continuous motion; this requires neighboring frames to be very similar to each other. It may be noted that a reduced frame rate would result in data compression, but at the expense of perceptible *flickering artifacts*.

Thus, a frame can be represented in terms of a neighboring reference frame and the difference information between the two. Once a reference frame has been reconstructed at the receiving end of a transmission system, a dependent frame need not be transmitted in full: the difference information alone is sufficient to reconstruct the dependent frame from a prediction based on the already received reference frame. Owing to temporal redundancy, such difference signals are often quite small. Only the difference signal is coded and sent to the receiving end, where the receiver combines it with the predicted signal already available to obtain the frame of video, thereby achieving a very high amount of compression.
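A minimal sketch of this difference (residual) coding with NumPy, using the reference frame itself as the prediction. Real codecs use motion-compensated prediction; the frame contents here are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
reference = rng.integers(0, 256, size=(4, 4)).astype(np.int16)

# The next frame differs from the reference only slightly.
current = reference.copy()
current[1, 2] += 3
current[3, 0] -= 2

# Encoder: transmit only the difference from the prediction (here, the reference).
residual = current - reference

# Decoder: already holds the reference, so it adds the residual back.
reconstructed = reference + residual

print("nonzero residual samples:", np.count_nonzero(residual), "of", residual.size)
```

Because only 2 of the 16 residual samples are nonzero, the residual is far cheaper to code than the frame itself, and the reconstruction is exact.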

## Statistical Redundancy

In information theory, redundancy is the number of bits used to transmit a signal minus the number of bits of actual information in the signal, normalized to the number of bits used to transmit the signal. The goal of data compression is to reduce or eliminate this unwanted redundancy. Video signals contain statistical redundancy in their digital representation; that is, there are usually extra bits that can be eliminated before transmission.
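The definition above can be computed directly; the bit counts below are illustrative numbers, not measurements from any particular codec:

```python
# Redundancy = (bits used - bits of actual information) / bits used.
bits_used = 8.0         # bits transmitted per sample (illustrative)
information_bits = 6.0  # entropy of the source per sample (illustrative)

redundancy = (bits_used - information_bits) / bits_used
print(f"redundancy: {redundancy:.2f}")  # fraction of transmitted bits that carry no information
```

Here a quarter of the transmitted bits are redundant and could, in principle, be removed by a lossless coder.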

For example, a region in a binary image (e.g., a fax image or a video frame) can be viewed as a string of 0s and 1s, the 0s representing the white pixels and the 1s representing the black pixels. These strings, where the same bit occurs in a series or *run* of consecutive data elements, can be represented using run-length codes. One form of the code records each symbol together with the length of its run: for example, 1110 0000 0000 0000 0000 0011 can be coded using three codes (1,3), (0,19), and (1,2), representing 3 1s, 19 0s, and 2 1s. Alternatively, assuming only the two symbols 0 and 1 are present, the string can be coded by recording the address and length of each run of 1s: the two codes (0,3) and (22,2) represent runs of 1s of lengths 3 and 2 starting at locations 0 and 22.
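The first (symbol, run-length) scheme from the example can be sketched as a pair of Python functions (the function names are ours, chosen for illustration):

```python
def rle_encode(bits):
    """Encode a binary string into (symbol, run_length) pairs."""
    runs = []
    prev, count = bits[0], 0
    for b in bits:
        if b == prev:
            count += 1
        else:
            runs.append((int(prev), count))
            prev, count = b, 1
    runs.append((int(prev), count))  # flush the final run
    return runs

def rle_decode(runs):
    """Recover the original binary string from (symbol, run_length) pairs."""
    return "".join(str(sym) * length for sym, length in runs)

bits = "111" + "0" * 19 + "11"
runs = rle_encode(bits)
print(runs)  # [(1, 3), (0, 19), (1, 2)]
```

Decoding the runs reproduces the original 24-bit string exactly, since run-length coding is lossless.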

Run-length coding is a lossless data compression technique and is used effectively to compress quantized coefficients, which contain long runs of identical values (typically 0s), especially after high-frequency information is discarded.

## Entropy Coding

Consider a set of quantized coefficients that can be represented using *B* bits per pixel. If the quantized coefficients are not uniformly distributed, then their entropy will be less than *B* bits per pixel. Now, consider a block of *M* pixels. Since the block is described by *MB* bits and each bit can take one of two values, there are a total of *L* = 2^{MB} different possible pixel blocks.

For a given set of data, let us assign the probability of a particular block *i* occurring as *p*_{i}, where *i* = 0, 1, 2, ···, *L* − 1. Entropy coding is a lossless coding scheme, where the goal is to encode this pixel block using − log_{2}*p*_{i} bits, so that the average bit rate is equal to the entropy of the *M* pixel block: *H* = ∑ _{i}*p*_{i}(−log_{2}*p*_{i}). This gives a variable length code for each block of *M* pixels, with smaller code lengths assigned to highly probable pixel blocks. In most video-coding algorithms, quantized coefficients are usually run-length coded, while the resulting data undergo entropy coding for further reduction of statistical redundancy.
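The entropy *H* and the ideal code lengths −log_{2}*p*_{i} can be computed directly. The dyadic probabilities below are chosen so that the ideal lengths come out as whole numbers of bits; they are an illustrative distribution, not measured data:

```python
import math

# Illustrative block probabilities (dyadic, so -log2(p) is an integer).
p = [0.5, 0.25, 0.125, 0.125]

ideal_lengths = [-math.log2(pi) for pi in p]   # bits per block, per Shannon
entropy = sum(pi * li for pi, li in zip(p, ideal_lengths))

print("ideal code lengths:", ideal_lengths)  # [1.0, 2.0, 3.0, 3.0]
print("entropy:", entropy)                   # 1.75 bits per block
```

As the text describes, the most probable block gets the shortest code (1 bit) and the least probable blocks get the longest (3 bits), so the average rate equals the entropy.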

For a given block size, a technique called *Huffman coding* is the most efficient and most popular variable-length encoding method; as the block size grows, its average rate asymptotically approaches Shannon's limit of maximum achievable compression.
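A minimal Huffman-code construction using Python's `heapq` can make this concrete (a sketch; the symbols and probabilities are illustrative, and for the dyadic distribution used here the average code length exactly equals the entropy):

```python
import heapq
import math

def huffman_code(probs):
    """Build a Huffman code {symbol: bitstring} from {symbol: probability}."""
    # Heap entries: (probability, tie_breaker, {symbol: code_so_far}).
    heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)  # two least probable subtrees
        p2, _, c2 = heapq.heappop(heap)
        # Prefix one subtree's codes with 0 and the other's with 1, then merge.
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (p1 + p2, tie, merged))
        tie += 1
    return heap[0][2]

probs = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
code = huffman_code(probs)
avg_len = sum(probs[s] * len(code[s]) for s in probs)
entropy = -sum(p * math.log2(p) for p in probs.values())
print(code)
print("average length:", avg_len, "entropy:", entropy)
```

The most probable symbol receives a 1-bit code and the least probable symbols 3-bit codes, matching the variable-length principle described above.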