
More about hats and auctions

October 29th, 2009

In my last post about hats, I said I would soon post another version with some more problems and talk a bit more about this kind of problem. I ended up not doing that, but here are a few nice problems:

Those {n} people are again in a room, each with a hat which is either black or white (each color picked independently at random with probability {\frac{1}{2}}), and they can see the color of the other people’s hats but not their own. Each writes on a piece of paper either “BLACK” or “WHITE”. The whole team wins if all of them get their colors right; the whole team loses if at least one writes the wrong color. Before entering the room and getting the hats, they can strategize. What is a strategy that makes them win with probability {\frac{1}{2}}?

If they all choose their colors at random, the probability of winning is very small: {\frac{1}{2^n}}. So we should try to correlate them somehow. The solution is again related to error correcting codes. We can think of the hats as a string of bits. How do we recover one bit if it is lost? The simple engineering solution is to add a parity check: we append to the string {x_1, \hdots, x_n} a bit {y = \sum_i x_i \mod 2}. So, if bit {i} is lost, we know it is {x_i = (y + \sum_{j \neq i} x_j) \mod 2}. We can use this idea to solve the puzzle above: since hats are placed with probability {\frac{1}{2}}, the parity check is {0} with probability {\frac{1}{2}} and {1} with probability {\frac{1}{2}}. They can decide beforehand to assume {y = 0}: each player writes the color that would make the total parity zero. With probability {\frac{1}{2}} the assumption is right and everyone gets his hat color right. Now, let’s extend this problem in some ways:
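To make the parity idea concrete, here is a small simulation (my own sketch, not from the original post) of the strategy where everyone assumes the total parity is zero:

```python
import random

def parity_strategy_win_rate(n, trials=100_000, seed=0):
    """Simulate the parity-check strategy: everyone assumes the total
    parity of the n hats is 0 and deduces his own color from the rest."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        hats = [rng.randint(0, 1) for _ in range(n)]
        # player i writes the color that makes the total parity zero
        guesses = [(sum(hats) - hats[i]) % 2 for i in range(n)]
        wins += guesses == hats
    return wins / trials

print(parity_strategy_win_rate(10))  # close to 0.5 for any n
```

Note that either everyone is right (when the true parity is {0}) or everyone is wrong at once, which is exactly how the strategy beats independent guessing.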

The same problem, but there are {k} hat colors, chosen independently with probability {\frac{1}{k}}, and they win if everyone gets his color right. Find a strategy that wins with probability {\frac{1}{k}}.

There are again {k} hat colors, chosen independently with probability {\frac{1}{k}}, and they win if at least a fraction {f} ({0 < f < 1}) of the people guess the right color. Find a strategy that wins with probability {\frac{1}{fk}}.

Back to the problem where we have just the colors BLACK and WHITE, chosen with probability {\frac{1}{2}}, and everyone needs to get the right color to win: can you prove that {\frac{1}{2}} is the best one can do? And what about the two other problems above?

The first two use variations of the parity check idea in the solution. For the last question, fix any strategy of the players and let {p_x} be the probability that they win when the hat configuration is {x \in \{0,1\}^n}. The total probability of winning is {\frac{1}{2^n}\sum_{x \in \{0,1\}^n} p_x}. Let {x' = (1-x_1, x_2, \hdots, x_n)}, i.e., the same input but with the first bit flipped. Notice that the answer of player {1} is the same (or at least has the same distribution) on both {x} and {x'}, since he can’t distinguish between them, yet the correct answers differ. Therefore, {p_{x} + p_{x'} \leq 1}. So,

\displaystyle 2 \cdot \frac{1}{2^n}\sum_{x \in \{0,1\}^n} p_x = \frac{1}{2^n}\sum_{x \in \{0,1\}^n} \left( p_x + p_{x'} \right) \leq 1

This way, no strategy can win with probability more than {\frac{1}{2}}.
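For small {n} one can also verify this bound by brute force: any randomized strategy is a mixture of deterministic ones, so it suffices to check every deterministic strategy. A quick sketch (function names are mine):

```python
from itertools import product

def best_deterministic_win_prob(n):
    """Exhaustive search over all deterministic strategies for n players
    and 2 colors; player i's strategy maps the n-1 hats he sees to a guess."""
    views = list(product(range(2), repeat=n - 1))
    functions = list(product(range(2), repeat=len(views)))  # all maps: view -> guess
    best = 0.0
    for strategy in product(functions, repeat=n):
        wins = sum(
            all(strategy[i][views.index(hats[:i] + hats[i + 1:])] == hats[i]
                for i in range(n))
            for hats in product(range(2), repeat=n)
        )
        best = max(best, wins / 2 ** n)
    return best

print(best_deterministic_win_prob(2), best_deterministic_win_prob(3))  # 0.5 0.5
```

The search space grows doubly exponentially in {n}, so this only works for tiny instances, but it confirms that nothing beats the parity strategy there.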

Another variation of it:

Suppose now we have two colors, BLACK and WHITE, and the hats are drawn from a distribution {D}, i.e., we have a probability distribution over {x \in \{0,1\}^n} and we draw the colors from it. Notice that now the hats may be correlated. How can they again win with probability {\frac{1}{2}} (where, to win, everyone needs the right answer)?

I like those hat problems a lot. A friend of mine just pointed out to me that there is a very nice paper by Bobby Kleinberg generalizing several aspects of hat problems, for example, when players have limited visibility of the other players’ hats.


I became interested in this sort of problem after reading the Derandomization of Auctions paper. Hat guessing games are not just a good model for error correcting codes; they are also a good model for truthful auctions. Consider an auction with a set {N} of single-parameter agents, i.e., an auction where each player submits one bid {b_i} indicating how much he is willing to pay to win. We have a set {\mathcal{X} \subseteq 2^N} of all feasible allocations. Based on the bids {(b_i)_{i \in N}} we choose an allocation {S \in \mathcal{X}} and we charge payments to the bidders. An example of such a problem is the Digital Goods Auction, where {\mathcal{X} = 2^N}.

In this blog post, I discussed the concept of a truthful auction. For a randomized auction, a universally truthful auction is one that is truthful even if all the random bits in the mechanism are revealed to the bidders. Consider the Digital Goods Auction. We can characterize universally truthful digital goods auctions as bid-independent auctions. A bid-independent auction is given by functions {f_i(b_{-i})}, which associate to each {b_{-i}} a (possibly random) price. In such an auction, we offer the service to player {i} at price {f_i(b_{-i})}: if {b_i \geq f_i(b_{-i})} we allocate to {i} and charge him {f_i(b_{-i})}; otherwise, we don’t allocate and we charge nothing.

It is not hard to see that all universally truthful mechanisms have this form: if {x_i(b_i)} is the probability that player {i} gets the item when bidding {b_i}, let {U} be a uniform random variable on {[0,1]} and define {f_i(b_{-i}) = x_i^{-1}(U)}. Notice that here {x_i(.) = x_i(., b_{-i})}, but we are inverting with respect to {b_i}. It is a simple exercise to prove that.
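As a sketch of how a bid-independent auction operates (the function names are mine, and {f} can be any function of the others’ bids, possibly randomized):

```python
def bid_independent_auction(bids, f):
    """Offer player i the price f(b_{-i}); allocate and charge that
    price iff b_i >= f(b_{-i}). Truthful whatever f is, since the
    price offered to i never depends on i's own bid."""
    allocation, payments = [], []
    for i, b in enumerate(bids):
        others = bids[:i] + bids[i + 1:]
        price = f(others)
        if b >= price:
            allocation.append(i)
            payments.append(price)
        else:
            payments.append(0)
    return allocation, payments

# e.g. with f = max of the others' bids: universally truthful by
# construction, though not a good revenue benchmark in general
print(bid_independent_auction([3, 5, 2], max))  # ([1], [0, 3, 0])
```

Truthfulness here is mechanical: player {i} faces a take-it-or-leave-it price he cannot influence, so bidding his true value is always optimal.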

With this characterization, universally truthful auctions suddenly look very much like hat guessing games: we need to design a function that looks at everyone else’s bid but not at our own and, in some sense, “guesses” what our bid probably is, and with that calculates the price we offer. It would be great to design a function that returns {f(b_{-i}) = b_i}, but that is unfortunately impossible. So how can we approximate {b_i} nicely? Some papers, like Derandomization of Auctions and Competitiveness via Consensus, use this idea.

Hats, codes and puzzles

October 3rd, 2009

When I was a child someone told me the following problem:

A king promised to marry his daughter to the most intelligent man. Three princes came to claim her hand, and he tried the following logic experiment with them: the princes are gathered into a room and seated in a line, one behind the other, and are shown 2 black hats and 3 white hats. They are blindfolded, and 1 hat is placed on each of their heads, with the remaining hats hidden in a different room. The first one to deduce his hat color will marry the princess. If a prince claims his hat color incorrectly, he dies.

The prince seated in the back removes his blindfold, sees the two hats in front of him and says nothing. Then the prince in the middle removes his blindfold; he can see the hat of the prince in front of him. He also says nothing. Noticing that the other princes said nothing, the prince seated in front, without even removing his blindfold, gives the correct answer. The question is: what color did he say?

This is a simple logic puzzle: we just write down all the possibilities and rule them out given that the other princes didn’t answer, and in the end we find the color of his hat. I remember that this puzzle surprised me a lot as a kid. I found it extremely cool back then, which made me want to read books about logic problems. After that, I had a lot of fun reading the books by Raymond Smullyan. I would usually read a problem, think something like: there can’t be a solution to that. Then I’d go to school with the problem in mind and spend the day thinking about it. Here is a problem I liked a lot:
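The “write all the possibilities and rule them out” procedure can even be mechanized. The sketch below (my own, with 0 = black, 1 = white) enumerates the worlds consistent with each prince’s silence:

```python
from itertools import product

BLACK, WHITE = 0, 1
# all hat assignments drawable from a pool of 2 black and 3 white hats
worlds = [w for w in product((BLACK, WHITE), repeat=3) if w.count(BLACK) <= 2]

def known_color(candidates, i):
    """Prince i's color if it is the same in every candidate world, else None."""
    colors = {w[i] for w in candidates}
    return colors.pop() if len(colors) == 1 else None

answers = set()
for actual in worlds:
    # prince 2 (in the back) sees hats 0 and 1
    if known_color([w for w in worlds if w[:2] == actual[:2]], 2) is not None:
        continue  # he would have answered; the story says he stays silent
    # his silence publicly rules out worlds where he would have known
    after2 = [w for w in worlds
              if known_color([v for v in worlds if v[:2] == w[:2]], 2) is None]
    # prince 1 (middle) sees hat 0
    if known_color([w for w in after2 if w[0] == actual[0]], 1) is not None:
        continue  # he is silent too
    after1 = [w for w in after2
              if known_color([v for v in after2 if v[0] == w[0]], 1) is None]
    # prince 0 (front) sees nothing, but has heard two silences
    answers.add(known_color(after1, 0))

print(answers)  # {1}: in every consistent world the front prince's hat is white
```

In every world where both princes behind stay silent, the front prince’s hat comes out white, which matches the pencil-and-paper deduction.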

There is one prisoner and there are two doors, each with one guardian. One door leads to an exit and one door leads to death. The prisoner can choose one door to open. One guardian speaks only the truth and one guardian always lies. But the prisoner doesn’t know which door is which, which guardian is which, or who guards each door. He is allowed to choose one guardian and ask him one Yes/No question, and then he needs to choose a door. What is the right question to ask?


But my goal is not to talk about logic puzzles, but about hat problems. There are a lot of variations of the problem above: in all of them, a person is allowed to see the other hats but not his own, and we need to “guess” the color of our hat. If we think carefully, we will see that this is a very general kind of problem in computer science: (i) the whole goal of learning theory is to predict one thing from a lot of other things you observe; (ii) in error correcting codes, we need to guess one bit from all the others, or from some subset of the others; (iii) in universally truthful mechanisms, we need to make a price offer to one player that depends only on the other players’ bids. I’ll come back to this last example in a later post, since it is what made me interested in this kind of problem, but for now, let’s look at one puzzle David Malec told me about at EC’09:

There are black and white hats and {3} people: for each of them we choose one color independently at random with probability {\frac{1}{2}}. They can look at each other’s hats but not at their own. Then each needs to write on a piece of paper either “PASS” or one color. If all pass, or if someone writes a wrong color, the whole team loses (this is a team game); if at least one person gets the color right and no one gets it wrong, the whole team wins. Create a strategy for the team to win with probability {\frac{3}{4}}.

To win with probability {\frac{1}{2}} is easy: one person always writes “BLACK” and the others “PASS”. A better strategy is the following: if a person sees two hats of equal color, he writes the opposite color; otherwise, he passes. It is easy to see that the team wins except when all hats are the same color, which happens with probability {\frac{1}{4}}. We would like to extend this to a more general setting:
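A quick exhaustive check of this strategy over all 8 hat assignments (my own sketch):

```python
from itertools import product

def guess(i, hats):
    """If the two visible hats match, guess the opposite color, else pass."""
    a, b = (hats[j] for j in range(3) if j != i)
    return 1 - a if a == b else None

wins = 0
for hats in product(range(2), repeat=3):
    guesses = [guess(i, hats) for i in range(3)]
    someone_right = any(g == hats[i] for i, g in enumerate(guesses))
    someone_wrong = any(g is not None and g != hats[i] for i, g in enumerate(guesses))
    wins += someone_right and not someone_wrong
print(wins / 8)  # 0.75
```

The trick is that wrong guesses are concentrated: in the two all-equal configurations everyone guesses and everyone is wrong, while in the other six exactly one person guesses, correctly.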

There are black and white hats and {2^k - 1} people: for each of them we choose one color independently at random with probability {\frac{1}{2}}. They can look at each other’s hats but not at their own. Then each needs to write on a piece of paper either “PASS” or one color. If all pass, or if someone writes a wrong color, the whole team loses; if at least one person gets the color right and no one gets it wrong, the whole team wins. Create a strategy for the team to win with probability {1-\frac{1}{2^k}}.

It is a tricky question how to extend the above solution to this case. A detailed solution can be found here. The idea is quite ingenious, so I’ll sketch it. It involves error correcting codes, in this case the Hamming code. Let {F = \{0,1\}} with sum and product modulo {2}. Let {w_1, \hdots, w_{2^k-1}} be the non-zero vectors of {F^k} and consider the following linear map:

\displaystyle \begin{aligned} \phi: F^{2^k-1} \rightarrow F^k \\ (a_1,\hdots, a_{2^k-1}) \mapsto \sum_i a_i w_i \end{aligned}

Let {H} be the kernel of that map. Then it is not hard to see that {H, H+e_1, \hdots, H+e_{2^k-1}} is a partition of {F^{2^k-1}}, and therefore, for each {x \in F^{2^k-1}}, either {x \in H} or there exists a unique {i} s.t. {x + e_i \in H}. This gives a strategy under which exactly one player guesses, and guesses correctly, whenever {x \notin H}. Let {x \in F^{2^k-1}} be the color vector of the hats. Player {i} sees this vector as:

\displaystyle (x_1, \hdots, x_{i-1}, ?, x_{i+1}, \hdots, x_{2^k-1})

which can be either {(x_1, \hdots, x_{i-1}, 0, x_{i+1}, \hdots, x_{2^k-1})} or {(x_1, \hdots, x_{i-1}, 1, x_{i+1}, \hdots, x_{2^k-1})}. The strategy is: if one of those vectors is in {H}, write the color corresponding to the other vector; if both are out of {H}, pass. The team wins iff {x \notin H}, which happens with probability {1 - \frac{1}{2^k}}. It is an easy and fun exercise to prove those facts. Or you can refer to the solution I just wrote.
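Here is a sketch of this strategy (my own code) for {k = 3}, i.e., {7} players, checking the claimed win probability by enumerating all {2^7} hat vectors:

```python
from itertools import product

k = 3
n = 2 ** k - 1  # 7 players

# the non-zero vectors of F^k; w[i] is the column associated with player i
w = [tuple((v >> j) & 1 for j in range(k)) for v in range(1, 2 ** k)]

def syndrome(x):
    """phi(x) = sum_i x_i * w_i over F = {0,1}; x is in H iff this is zero."""
    s = [0] * k
    for i, xi in enumerate(x):
        if xi:
            s = [(a + b) % 2 for a, b in zip(s, w[i])]
    return tuple(s)

def guess(i, x):
    """Player i sees all coordinates except i; if one completion lies in
    the kernel H, he writes the color of the other completion, else passes."""
    x0 = x[:i] + (0,) + x[i + 1:]
    x1 = x[:i] + (1,) + x[i + 1:]
    if syndrome(x0) == (0,) * k:
        return 1
    if syndrome(x1) == (0,) * k:
        return 0
    return None  # pass

wins = 0
for x in product(range(2), repeat=n):
    guesses = [guess(i, x) for i in range(n)]
    someone_right = any(g == x[i] for i, g in enumerate(guesses))
    someone_wrong = any(g is not None and g != x[i] for i, g in enumerate(guesses))
    wins += someone_right and not someone_wrong
print(wins, 2 ** n)  # 112 out of 128, i.e. probability 7/8 = 1 - 1/2^3
```

The losing vectors are exactly the {2^{n-k} = 16} codewords of {H}, on which every player guesses and every guess is wrong.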


Now we could complicate it a bit more: we could add other colors and other distributions. But I want to move to a different problem. The paper Derandomization of Auctions showed me an impressive thing: we can use coding theory to derandomize algorithms. To illustrate their ideas, they propose the following problem:

Color guessing problem: there are {n} people wearing hats of {k} different colors. Each person can see everyone else’s hats but not his or her own, and needs to guess the color of his own hat. We want a deterministic guessing procedure such that a {1/k} fraction of each color class guesses correctly.

The problem is very easy if we have a source of random bits: each person guesses a color at random, and in expectation a {1/k} fraction of each color class is right. It seems very complicated to do that without random bits. Surprisingly, we can solve it using a flow computation:

Let {c = (c_1, \hdots, c_n)} be an array of colors and {c_{-i}} the array with color {i} removed. Consider the following flow network: nodes {s} and {t} (source and sink), and nodes {v_{c_{-i}}} for each {c_{-i}}. There are {n \cdot k^{n-1}} such nodes. Consider also nodes of the form {u_{\gamma, c}}, where {\gamma} is a color (from {1, \hdots, k}) and {c} is a color vector. There are {k^{n+1}} such nodes.


We have edges from {s} to each {v_{c_{-i}}}, and edges from each {u_{\gamma, c}} to {t}. Now, if {c = (\gamma, c_{-i})}, i.e., if {c_{-i}} completed in the {i}-th coordinate with {\gamma} generates {c}, then we add an edge from {v_{c_{-i}}} to {u_{\gamma, c}}.

Consider the following fractional flow: send {1} unit of flow from {s} to each {v_{c_{-i}}}, and from {v_{c_{-i}}} split that flow into pieces of size {1/k}, sending each to {u_{\gamma, c}} for {c = (\gamma, c_{-i})}. Now, each node {u_{\gamma, c}} receives {\frac{n_\gamma(c)}{k}} flow, where {n_{\gamma}(c)} is the number of occurrences of {\gamma} in {c}. Send all that flow to {t}.

We can think of that flow as the random guessing procedure: upon seeing {c_{-i}}, we choose the guess uniformly at random, and this way each {u_{\gamma, c}} receives in expectation {\frac{n_\gamma(c)}{k}} correct {\gamma}-guesses. Notice that an integral flow in that graph represents a deterministic guessing procedure, so all we need is an integral flow in which the flow from {u_{\gamma, c}} to {t} is {\lfloor n_\gamma (c) / k \rfloor}. The flow received comes from nodes of the type {v_{c_{-i}}}, and each unit means that person {i} in {c}, looking at the other hats, correctly guesses {c_i}. So for each {c} and {\gamma}, {\lfloor n_\gamma (c) / k \rfloor} people of color class {\gamma} guess correctly.

Now define the capacities this way: all edges from {s} to {v_{c_{-i}}} and from {v_{c_{-i}}} to {u_{\gamma, c}} have capacity {1}, and edges from {u_{\gamma, c}} to {t} have capacity {\lfloor n_\gamma (c) / k \rfloor}. Because of the fractional flow shown above, there is a flow saturating all edges into {t}, and since all capacities are integral, there is an integral such flow. So the solution gives us a deterministic guessing procedure.
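The whole construction is small enough to run for {n = 3}, {k = 2}. The sketch below (my own, stdlib only) builds the network, computes an integral max flow with Edmonds-Karp, reads off the deterministic guessing rule, and checks the guarantee:

```python
from collections import deque
from itertools import product

n, k = 3, 2  # 3 people, 2 colors

cap = {}  # cap[(a, b)] = capacity of edge a -> b
def add_edge(a, b, c):
    cap[(a, b)] = c
    cap.setdefault((b, a), 0)  # residual direction

configs = list(product(range(k), repeat=n))

# u_{gamma,c} -> t with capacity floor(n_gamma(c)/k)
for c in configs:
    for g in range(k):
        add_edge(('u', g, c), 't', c.count(g) // k)

# s -> v_{i,c_{-i}} (capacity 1) and v -> u_{gamma,(gamma,c_{-i})} (capacity 1)
for i in range(n):
    for cm in product(range(k), repeat=n - 1):
        v = ('v', i, cm)
        add_edge('s', v, 1)
        for g in range(k):
            add_edge(v, ('u', g, cm[:i] + (g,) + cm[i:]), 1)

adj = {}
for a, b in cap:
    adj.setdefault(a, []).append(b)
flow = dict.fromkeys(cap, 0)

def augment():
    """One BFS round of Edmonds-Karp; returns False when no path remains."""
    parent = {'s': None}
    queue = deque(['s'])
    while queue:
        a = queue.popleft()
        for b in adj[a]:
            if b not in parent and cap[(a, b)] - flow[(a, b)] > 0:
                parent[b] = a
                queue.append(b)
    if 't' not in parent:
        return False
    path, b = [], 't'
    while parent[b] is not None:
        path.append((parent[b], b))
        b = parent[b]
    delta = min(cap[e] - flow[e] for e in path)
    for e in path:
        flow[e] += delta
        flow[(e[1], e[0])] -= delta
    return True

while augment():
    pass

# read the deterministic rule off the integral flow
guess = {}
for (a, b), f in flow.items():
    if f == 1 and isinstance(a, tuple) and a[0] == 'v':
        _, i, cm = a
        guess[(i, cm)] = b[1]  # the color gamma of the u-node it flows into

# every color class gamma of every configuration c gets at least
# floor(n_gamma(c)/k) correct guesses
for c in configs:
    for g in range(k):
        right = sum(1 for i in range(n)
                    if c[i] == g and guess.get((i, c[:i] + c[i + 1:])) == g)
        assert right >= c.count(g) // k
print("integral flow yields a valid deterministic guessing procedure")
```

For {n = 3, k = 2} the sink edges have total capacity 8 (one unit per configuration, for its majority color), and the integral max flow saturates all of them.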

In the next blog post, I’ll try to show the result in Derandomization of Auctions that relates this to competitive auctions.