## Entropy

Today was the first day of classes here at Cornell and, as usual, I attended a lot of different classes to try to decide which ones to take. I usually feel like taking them all, but there is this constant struggle: if I take too many classes I have no time to do research and to read random things that happen to catch my attention at the moment, and if I don't take many classes I feel like I'm not learning a lot of interesting stuff I wanted to be learning. The middle-ground solution is to audit a lot of classes and start dropping them as I need more time, which usually happens quickly. This particular fall I decided that I need to build a stronger background in probability, since I keep finding probabilistic material in my way and I have nothing more than my undergrad course and things I learned on demand. I attended at least three probability classes with different flavours today, and I decided to blog about a simple, yet very impressive result I saw in one of them.

Ever since I took a class on “Principles of Telecommunications” in my undergrad, I have been impressed by Shannon’s Information Theory and the concept of entropy. There was one theorem that I always heard about but never saw the proof of. I thought it was a somewhat complicated proof, but it turned out not to be that complicated after all.

Consider an alphabet $\Sigma = \{a_1, \ldots, a_n\}$ and a probability distribution $p$ over it. I want to associate to each symbol $a_i$ a string $\phi(a_i)$ of binary digits to represent it. One way of making the code decodable is to make it a proper code. A proper code is a code such that for any $i \neq j$, $\phi(a_i)$ is not a prefix of $\phi(a_j)$. There are several codes like this, but some are more efficient than others. Since the letters have different frequencies, it makes sense to code a frequent letter (say ‘e’ in English) with few bits and a letter that doesn’t appear much, say ‘q’, with more bits. We want to find a proper code minimizing the expected code length:

$$\sum_{i=1}^n p_i \, \lvert \phi(a_i) \rvert$$

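To make these definitions concrete, here is a minimal Python sketch (the alphabet, codewords, and probabilities are made-up examples, not from the post) that checks the prefix condition and computes the expected code length:

```python
def is_proper(code):
    """Check the prefix condition: no codeword is a prefix of another."""
    words = list(code.values())
    return all(a == b or not b.startswith(a)
               for a in words for b in words)

def avg_length(code, p):
    """Expected code length: sum_i p_i * |phi(a_i)|."""
    return sum(p[a] * len(w) for a, w in code.items())

# A hypothetical proper code for a 3-letter alphabet.
code = {"a": "0", "b": "10", "c": "11"}
p = {"a": 0.5, "b": 0.3, "c": 0.2}
print(is_proper(code))      # True
avg = avg_length(code, p)   # 0.5*1 + 0.3*2 + 0.2*2 ≈ 1.5 bits per symbol
```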
The celebrated theorem by Shannon shows that for any proper code (actually it holds more generally for any uniquely decodable code), we have $\sum_i p_i \ell_i \geq H(p)$, where $\ell_i = \lvert \phi(a_i) \rvert$ and $H(p)$ is the entropy of the alphabet, defined as:

$$H(p) = -\sum_{i=1}^n p_i \log_2 p_i$$

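Entropy is easy to compute numerically; a quick Python sanity check (the skewed distribution is a made-up example):

```python
import math

def entropy(p):
    """Entropy H(p) = -sum_i p_i * log2(p_i), in bits per symbol."""
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# A hypothetical 4-letter alphabet with skewed frequencies.
p = [0.5, 0.25, 0.125, 0.125]
print(entropy(p))  # 1.75 — less than the 2 bits of a fixed-length code
```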
Even more impressive is that we can achieve something very close to it:

**Theorem 1** There is a proper code $\phi$ such that $\sum_i p_i \lvert \phi(a_i) \rvert \leq H(p) + 1$.

With an additional trick we can get $H(p) + \epsilon$ for any $\epsilon > 0$. The first part (the lower bound $\sum_i p_i \ell_i \geq H(p)$) is trickier and I won’t do it here (but again, it is not as hard as I thought it would be). To prove that there is a code with average length at most $H(p) + 1$, we use the following lemma:

**Lemma 2 (Kraft’s inequality)** There is a proper code for $a_1, \ldots, a_n$ with code-lengths $\ell_1, \ldots, \ell_n$ if and only if:

$$\sum_{i=1}^n 2^{-\ell_i} \leq 1$$
*Proof:* Let $\ell = \max_i \ell_i$ and imagine all the possible codewords of length at most $\ell$ arranged as a complete binary tree of depth $\ell$. Since it is a proper code, no two codewords $\phi(a_i)$ and $\phi(a_j)$ lie on the same path to the root. So, picking one node as a codeword means that we can’t pick any node in the subtree below it. Also, for each leaf, there is at most one codeword on its path to the root. Therefore we can assign each leaf of the tree either to a single codeword or to no codeword at all. It is easy to see that a codeword of length $\ell_i$ has $2^{\ell - \ell_i}$ leaves associated with it. Since there are $2^\ell$ leaves in total, we have that:

$$\sum_{i=1}^n 2^{\ell - \ell_i} \leq 2^\ell, \quad \text{i.e.,} \quad \sum_{i=1}^n 2^{-\ell_i} \leq 1,$$

which proves one direction of the result. Now, to prove the converse direction, we can use a greedy algorithm: given lengths $\ell_1, \ldots, \ell_n$ such that $\sum_i 2^{-\ell_i} \leq 1$, let $\ell = \max_i \ell_i$, and suppose (reordering if necessary) that $\ell_1 \leq \ell_2 \leq \ldots \leq \ell_n$. Start with all $2^\ell$ leaves in a single block. Divide them into $2^{\ell_1}$ blocks and assign one to $a_1$. Now we define the recursive step: when we analyze $a_i$, the leaves are divided into $2^{\ell_{i-1}}$ blocks, some occupied, some not; the inequality $\sum_j 2^{-\ell_j} \leq 1$ guarantees that a free block always remains. Divide each free block into $2^{\ell_i - \ell_{i-1}}$ blocks and assign one of them to $a_i$. It is not hard to see that each block corresponds to one node in the tree (the common ancestor of all the leaves in that block) and that this assignment corresponds to a proper code.
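The greedy construction in the converse direction can be sketched in Python. This version assigns codewords in order of non-decreasing length, carving out blocks of leaves of a depth-$\ell$ tree exactly as in the proof (the function name and example lengths are mine):

```python
def kraft_code(lengths):
    """Greedy construction from Lemma 2: given non-decreasing lengths
    with sum(2^-l) <= 1, carve out leaf blocks of a depth-L binary tree;
    each block's common ancestor at depth l is the codeword."""
    assert lengths == sorted(lengths), "lengths must be non-decreasing"
    L = max(lengths)
    assert sum(2 ** (L - l) for l in lengths) <= 2 ** L, "Kraft violated"
    codes, used_leaves = [], 0
    for l in lengths:
        # The block for this symbol starts at the first free leaf;
        # its ancestor at depth l, written as l bits, is the codeword.
        codes.append(format(used_leaves >> (L - l), f"0{l}b"))
        used_leaves += 2 ** (L - l)  # the whole subtree is now occupied
    return codes

print(kraft_code([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```

Because the lengths are processed in non-decreasing order, `used_leaves` is always a multiple of the current block size, so each codeword sits exactly at a block boundary and the result is prefix-free.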

Now, using this lemma, we show how to find a code with $\sum_i p_i \ell_i \leq H(p) + 1$. For each $i$, since $0 < p_i \leq 1$, we can always find an integer $\ell_i$ such that $2^{-\ell_i} \leq p_i < 2^{-\ell_i + 1}$, namely $\ell_i = \lceil \log_2(1/p_i) \rceil$. Now, clearly:

$$\sum_{i=1}^n 2^{-\ell_i} \leq \sum_{i=1}^n p_i = 1,$$

so by Lemma 2 a proper code with these lengths exists, and:

$$\sum_{i=1}^n p_i \ell_i \leq \sum_{i=1}^n p_i \left( \log_2 \frac{1}{p_i} + 1 \right) = H(p) + 1.$$

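This choice of lengths, $\ell_i = \lceil \log_2(1/p_i) \rceil$, can be checked numerically; a small Python sketch (the distribution is a made-up example):

```python
import math

def shannon_lengths(p):
    """Code lengths l_i = ceil(log2(1/p_i)); since 2^{-l_i} <= p_i,
    they satisfy Kraft, so a proper code with these lengths exists."""
    return [math.ceil(math.log2(1 / pi)) for pi in p]

# Hypothetical skewed distribution.
p = [0.4, 0.3, 0.2, 0.1]
lengths = shannon_lengths(p)
H = -sum(pi * math.log2(pi) for pi in p)
avg = sum(pi * li for pi, li in zip(p, lengths))
assert sum(2 ** -l for l in lengths) <= 1   # Kraft inequality holds
assert H <= avg <= H + 1                    # within one bit of entropy
```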
Cool, but now how do we bring it down to $H(p) + \epsilon$? The idea is to code multiple symbols at the same time (even though the symbols are independent, so we are not even taking advantage of correlation between them). Consider the block alphabet $\Sigma^k$ and the probability function induced on it, i.e.:

$$p^k(x_1 x_2 \ldots x_k) = \prod_{j=1}^k p(x_j)$$
It is not hard to see that $\Sigma^k$ with $p^k$ has entropy $k H(p)$, because:

$$H(p^k) = -\sum_{x \in \Sigma^k} p^k(x) \log_2 p^k(x) = -\sum_{x \in \Sigma^k} p^k(x) \sum_{j=1}^k \log_2 p(x_j) = -k \sum_{a \in \Sigma} p(a) \log_2 p(a) = k H(p),$$
and then we can just apply the last theorem to it: we can find a code $\phi_k$ that encodes blocks of $k$ symbols with binary strings such that:

$$\sum_{x \in \Sigma^k} p^k(x) \, \lvert \phi_k(x) \rvert \leq H(p^k) + 1 = k H(p) + 1.$$

Since $\phi_k$ encodes $k$ symbols at a time, we are actually interested in the average length per symbol, and therefore we get:

$$\frac{1}{k} \sum_{x \in \Sigma^k} p^k(x) \, \lvert \phi_k(x) \rvert \leq H(p) + \frac{1}{k},$$

which is at most $H(p) + \epsilon$ once we take $k \geq 1/\epsilon$.
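The block-coding trick is easy to watch in action: the sketch below (distribution and function name are mine) builds the induced product distribution on $\Sigma^k$, uses the Shannon lengths $\lceil \log_2(1/q) \rceil$ for each block, and checks that the average cost per original symbol stays between $H(p)$ and $H(p) + 1/k$:

```python
import math
from itertools import product

def avg_len_per_symbol(p, k):
    """Average bits per symbol of a Shannon code built on the block
    alphabet Sigma^k with the induced product distribution."""
    total = 0.0
    for block in product(p, repeat=k):
        q = math.prod(block)                      # induced probability p^k
        total += q * math.ceil(math.log2(1 / q))  # Shannon length for block
    return total / k                              # cost per original symbol

p = [0.7, 0.2, 0.1]
H = -sum(pi * math.log2(pi) for pi in p)
for k in (1, 2, 4):
    # The +1 overhead is amortized over k symbols: H <= avg <= H + 1/k.
    assert H - 1e-9 <= avg_len_per_symbol(p, k) <= H + 1 / k + 1e-9
```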