On Unique Decodability
Abstract
In this paper, we revisit the topic of unique decodability and some fundamental theorems of lossless coding. It is widely believed that, for any discrete source <i>X</i>, every "uniquely decodable" block code satisfies <i>E</i>[<i>l</i>(<i>X</i><sub>1</sub>, <i>X</i><sub>2</sub>, ..., <i>X</i><sub>n</sub>)] ≥ <i>H</i>(<i>X</i><sub>1</sub>, <i>X</i><sub>2</sub>, ..., <i>X</i><sub>n</sub>), where <i>X</i><sub>1</sub>, <i>X</i><sub>2</sub>, ..., <i>X</i><sub>n</sub> are the first <i>n</i> symbols of the source, <i>E</i>[<i>l</i>(<i>X</i><sub>1</sub>, <i>X</i><sub>2</sub>, ..., <i>X</i><sub>n</sub>)] is the expected length of the code for those symbols, and <i>H</i>(<i>X</i><sub>1</sub>, <i>X</i><sub>2</sub>, ..., <i>X</i><sub>n</sub>) is their joint entropy. We show that, for certain sources with memory, the above inequality holds only under a restrictive definition of "uniquely decodable code". In particular, the inequality is usually assumed to hold for any "practical code" on the basis of a debatable application of McMillan's theorem to sources with memory. We therefore propose a clarification of the topic, and we provide an extended version of McMillan's theorem applicable to Markovian sources.
Journal: IEEE Transactions on Information Theory