Why does the GIF specification require at least 2 bits for the initial LZW code size?

I was trying to figure out why the GIF89a specification requires an initial LZW code size of at least 2 bits, even when encoding 1-bit images (B&W). Appendix F of the specification states the following:

SET CODE SIZE

The first byte of the compressed data stream is a value that indicates the minimum number of bits required to represent a set of actual pixel values. This will usually be the same as the number of color bits. However, due to some algorithmic limitations, black and white images having one color bit must be listed as having a code size of 2.

I am curious what these algorithmic limitations are. What could prevent the LZW variant used in GIF from using the size 1 code? Is this just a limitation of early encoders or decoders? Or is there some weird edge case that can only show up with the right bit combination? Or is something completely different happening here?

2 answers


In addition to the codes for 0 and 1, you also have the clear code and the end of information code.

Quote from the spec:



Output codes are variable in length, ranging from <code size>+1 bits per code to 12 bits per code. This defines the maximum code value of 4095 (0xFFF). Whenever the LZW code value would exceed the current code length, the code length is increased by one.

If you start with a code size of 1, the code length would have to be increased immediately by this rule: the clear code is 2, the end of information code is 3, and the very first dictionary entry is code 4, which no longer fits in the starting length of code size + 1 = 2 bits.
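
To make the arithmetic concrete, here is a small sketch of my own (not from the answer or the spec): for each initial code size it shows where the clear code, the end of information code, and the first dictionary code land relative to the starting code length of code size + 1 bits.

```python
# Sketch: for each initial code size, check whether the first dictionary
# code still fits in the starting code length of code size + 1 bits.
for size in range(1, 9):
    clear_code = 1 << size            # first code after the pixel values
    eoi_code = clear_code + 1         # end of information code
    first_dict_code = clear_code + 2  # first code the string table can assign
    start_width = size + 1            # initial LZW code length in bits
    max_at_start = (1 << start_width) - 1
    print(f"size {size}: clear={clear_code} eoi={eoi_code} "
          f"first dict={first_dict_code} "
          f"max in {start_width} bits={max_at_start} "
          f"fits={first_dict_code <= max_at_start}")
# Only size 1 prints fits=False: its first dictionary code (4) already
# needs 3 bits, i.e. code size + 2 rather than code size + 1.
```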

This limitation gets rid of one if in the implementation: with code size == 1, the first dictionary code would need a width of code size + 2 bits, whereas in every other case the width is code size + 1.
The cost is a very small loss of compression ratio for two-color images.
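
As an illustration of that branch, here is a hypothetical helper of my own (not code from any real encoder or decoder) showing the special case an implementation would otherwise need when working out the code width in effect once the first dictionary entry exists:

```python
# Hypothetical helper (not taken from any real codec): the code width needed
# once the first dictionary entry (clear code + 2) exists after a clear code.
def width_after_first_dict_entry(code_size: int) -> int:
    first_dict_code = (1 << code_size) + 2
    width = code_size + 1                   # the usual starting width
    if first_dict_code > (1 << width) - 1:  # true only when code_size == 1
        width += 1                          # would need code_size + 2 bits
    return width

assert width_after_first_dict_entry(1) == 3   # the special case the rule avoids
assert all(width_after_first_dict_entry(n) == n + 1 for n in range(2, 9))
```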


