Archive

Posts Tagged ‘game theory’

Bimatrix games

May 25th, 2011

This week in Israel is the Workshop on Innovations in Algorithmic Game Theory, which has been a collection of amazing talks, and I thought of blogging about something new I learned here. First, Paul Goldberg gave an amazing talk about the Lemke-Howson algorithm and homotopy methods. Later, during the poster session, Jugal Garg presented impressive work on an exact algorithm for finding Nash equilibria in rank-1 bimatrix games. Both have in common the use of homotopy, which I found quite a cool idea – it has been present in game theory for a while, but I didn’t know about it. They also both improve our understanding of the Lemke-Howson algorithm – a (potentially exponential time) algorithm to find a Nash equilibrium in 2-person bimatrix games, one that seems to work pretty well in practice. As always, I thought that blogging about it would be a good idea in order to understand those concepts well.

Bimatrix games and how to compute equilibria

Bimatrix games are the simplest class of games studied. Basically, a bimatrix game is a 2-player game with n strategies for player 1 and m strategies for player 2, represented by a pair of n \times m matrices (A,B). Let x \in \mathbb{R}_+^n, \Vert x \Vert_1 = 1 represent a probability distribution that player 1 assigns to his strategies and y \in \mathbb{R}_+^m, \Vert y \Vert_1 = 1 be the same for player 2. This way, the players experience utilities:

[; u_1 (x,y) = x^t A y,  u_2(x,y) = x^t B y;]

The best understood class of those games is the one where A+B = 0, called zero-sum games. For this class, computing a Nash equilibrium is very easy and is given by the famous min-max theorem: player 1 finds x maximizing \min_j (x^t A e_j), where e_j is the j-th unit vector. Similarly, player 2 finds y maximizing \min_i (e_i^t B y). Then the pair of strategies obtained is a Nash equilibrium – and verifying that is not hard.
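For 2 \times 2 zero-sum games, the min-max strategies even have a closed form, which makes the theorem easy to play with. Here is a minimal sketch in Python (the function name and the matching-pennies example are mine; it assumes the game has no pure saddle point):

```python
from fractions import Fraction as F

def solve_2x2_zero_sum(A):
    """Optimal mixed strategies and the value of the zero-sum game with
    2x2 payoff matrix A, assuming A has no pure saddle point."""
    (a, b), (c, d) = A
    den = a - b - c + d                  # nonzero when there is no saddle point
    x = (F(d - c, den), F(a - b, den))   # player 1's maximin strategy
    y = (F(d - b, den), F(a - c, den))   # player 2's minimax strategy
    v = F(a * d - b * c, den)            # value of the game
    return x, y, v

# matching pennies: both players mix uniformly and the value is 0
x, y, v = solve_2x2_zero_sum([[1, -1], [-1, 1]])
# x guarantees at least v against every pure column: min_j x^t A e_j = v
assert min(x[0] - x[1], x[1] - x[0]) == v
print(x, y, v)
```

It is a quick exercise to check that the pair (x, y) produced this way satisfies the equilibrium conditions of the next section.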

When A+B \neq 0, the problem gets a lot more complicated. Proving that equilibria exist can be done using various fixed point theorems, such as Brouwer’s or Kakutani’s. There is a very simple exponential time algorithm for finding one, and the key observation is the following: if (x,y) is a Nash equilibrium then:

\forall i: x_i = 0 \text{ or } \forall i': e_i^t A y \geq e_{i'}^t A y

\forall j: y_j = 0 \text{ or } \forall j': x^t B e_j \geq x^t B e_{j'}

which means that each strategy of a player is either played with probability zero or is a best response. Proving that is trivial (if some strategy is in the support and is not a best response, then reducing the probability with which we play it is an improving deviation). Therefore, if we just guess the support of x and the support of y, we just need to find a pair of strategies with those supports satisfying the inequalities above. This can be done using a simple LP. This is clearly not very efficient, since it involves solving up to 2^{n+m} LPs. A still exponential, but much more practical, method is:

Lemke-Howson Algorithm

A good overview of the L-H algorithm can be found in these lecture notes or, in a more detailed version, in the third chapter of the AGT book (a pdf can be found on Tim’s website). Here I’ll present a quick overview. The main idea is to define the best-response polytopes P and Q:

[; P = \{(x,v) \in \mathbb{R}^{n+1} \vert x_i \geq 0, x^t B e_j \leq v; \textbf{1}^t x = 1 \};]

[; Q = \{(y,u) \in \mathbb{R}^{m+1} \vert e_i^t A y \leq u, y_j \geq 0; \textbf{1}^t y = 1 \} ;]

The intuition is that a point (x,v) \in P represents a strategy x of player 1 together with an upper bound v on the payoff of player 2 against it: we could re-write the constraints as v \geq \max_j x^t B e_j.

Each of the polytopes has m+n inequalities and 1 equality. Given (x,v) \in P, we define L(x) as the set of indices of the tight inequalities in P, and similarly we define L(y) as the set of indices of the tight inequalities in Q. The theorem in the previous section can be rephrased as:

(x,y) is Nash equilibrium iff L(x) \cup L(y) = \{1, 2, \hdots, n+m\}
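To make the labels concrete, here is a small sketch that computes L(x) \cup L(y) directly from the definitions, on a 2 \times 2 game of my own choosing (a battle-of-the-sexes variant); the labeling convention – labels 1..n for player 1’s strategies, n+1..n+m for player 2’s – is one standard choice:

```python
from fractions import Fraction as F

A = [[F(2), F(0)], [F(0), F(1)]]   # player 1's payoffs
B = [[F(1), F(0)], [F(0), F(2)]]   # player 2's payoffs
n = m = 2

def labels(x, y):
    """L(x) from P (x_i = 0 gives label i, x^t B e_j = v gives label n+j)
    union L(y) from Q (e_i^t A y = u gives label i, y_j = 0 gives n+j)."""
    pay2 = [sum(x[i] * B[i][j] for i in range(n)) for j in range(m)]
    pay1 = [sum(A[i][j] * y[j] for j in range(m)) for i in range(n)]
    v, u = max(pay2), max(pay1)
    Lx = {i + 1 for i in range(n) if x[i] == 0} | \
         {n + j + 1 for j in range(m) if pay2[j] == v}
    Ly = {i + 1 for i in range(n) if pay1[i] == u} | \
         {n + j + 1 for j in range(m) if y[j] == 0}
    return Lx | Ly

print(labels([F(2, 3), F(1, 3)], [F(1, 3), F(2, 3)]))  # {1, 2, 3, 4}: Nash
print(labels([F(1), F(0)], [F(0), F(1)]))              # {2, 3}: not Nash
```

The mixed equilibrium is fully labeled, while the miscoordinated pure profile misses labels 1 and 4.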

So we need to look at the points in those polytopes that are fully labeled. The way this is done in L-H is quite ingenious – it is similar to the idea used by the Simplex Method: walking along the vertices of the polytope looking for the desired vertex. In order to do that, let’s define another pair of polytopes:

[;P' = \{x \in \mathbb{R}^n \vert x_i \geq 0, x^t B e_j \leq 1\} ;]

[;Q' = \{y \in \mathbb{R}^m \vert e_i^t A y \leq 1, y_j \geq 0\} ;]

Note that there is a clear bijection between P' \setminus \{0\} \leftrightarrow P and Q' \setminus \{0\} \leftrightarrow Q. This is a projective transformation given by x \mapsto (\frac{x}{\Vert x \Vert_1}, \frac{1}{\Vert x \Vert_1}). Notice that the labels are preserved by this transformation and the vertices and edges of the polytopes are mapped almost 1-1 (except for the vertex 0). Notice a couple of details:

1. A vertex x \in P' corresponds to a point with n labels. A vertex y \in Q' corresponds to a point with m labels.

2. The point (0,0) corresponds to a fully-labeled point of P' \times Q'; unfortunately, it is the only fully-labeled point that doesn’t correspond to a Nash equilibrium.

3. An edge of P' is defined by n-1 labels and an edge of Q' by m-1 labels. By following edges, we can move between two vertices that are almost fully labeled.

The idea is to consider the set of vertices that are almost fully labeled: fix some label k and consider all the vertices such that L(x) \cup L(y) \supseteq \{1 .. k-1, k+1 .. n+m\}. Those points are either (0,0), a Nash equilibrium, or points with one duplicated label, since \vert L(x) \vert + \vert L(y) \vert = n+m. An edge is composed of n+m-1 labels (no duplicated labels). So, an almost fully-labeled point that is neither a Nash equilibrium nor (0,0) is connected to exactly two other vertices via edges (which correspond to dropping one of the two copies of the duplicated label and following the corresponding edge).

This gives a topology to the space of Nash equilibria. It tells us a couple of facts: (i) we can find a Nash equilibrium by starting at (0,0) and following the L-H path – at the end of the path, we must find a Nash equilibrium; (ii) if the polytope is non-degenerate, then there is an odd number of Nash equilibria and they are connected by L-H edges in the following way:

In the figure, the blue dots are the Nash equilibria, the white dots are the almost fully-labeled points, and the edges are the L-H edges. That the number of Nash equilibria is odd follows from a simple parity argument.
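The parity claim can be checked by brute force on a small example. The sketch below (game, helper names and labeling all my own) enumerates the vertices of P' and Q' for a 2 \times 2 battle-of-the-sexes variant, collects their labels, and counts the fully-labeled vertex pairs other than (0,0):

```python
from fractions import Fraction as F
from itertools import product

A = [[F(2), F(0)], [F(0), F(1)]]   # player 1's payoffs
B = [[F(1), F(0)], [F(0), F(2)]]   # player 2's payoffs

def vertices(rows):
    """Vertices of {z in R^2 : m1*z1 + m2*z2 <= b for (m1, m2, b, label)}:
    intersect each pair of constraint lines, keep the feasible points,
    and record the labels of the constraints tight there."""
    vs = set()
    for r1, r2 in product(rows, repeat=2):
        det = r1[0] * r2[1] - r1[1] * r2[0]
        if det == 0:
            continue
        z1 = (r1[2] * r2[1] - r1[1] * r2[2]) / det   # Cramer's rule
        z2 = (r1[0] * r2[2] - r1[2] * r2[0]) / det
        if all(m1 * z1 + m2 * z2 <= b for (m1, m2, b, _) in rows):
            tight = frozenset(l for (m1, m2, b, l) in rows
                              if m1 * z1 + m2 * z2 == b)
            vs.add(((z1, z2), tight))
    return vs

# P': x >= 0 (labels 1, 2) and x^t B e_j <= 1 (labels 3, 4)
P = vertices([(-1, 0, F(0), 1), (0, -1, F(0), 2),
              (B[0][0], B[1][0], F(1), 3), (B[0][1], B[1][1], F(1), 4)])
# Q': e_i^t A y <= 1 (labels 1, 2) and y >= 0 (labels 3, 4)
Q = vertices([(A[0][0], A[0][1], F(1), 1), (A[1][0], A[1][1], F(1), 2),
              (-1, 0, F(0), 3), (0, -1, F(0), 4)])

full = [(x, y) for (x, lx) in P for (y, ly) in Q
        if lx | ly == {1, 2, 3, 4} and (x, y) != ((0, 0), (0, 0))]
print(len(full))   # 3: an odd number of Nash equilibria, as claimed
```

The three pairs found are the two pure equilibria and the vertex pair that normalizes to the mixed equilibrium.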

The classic way in which simplex-like methods walk along the vertices of a polytope is by pivoting, and Lemke-Howson can be implemented the same way. For an explanation of the implementation, please look at chapter 3 of the AGT book.

One can ask whether we could modify the L-H path-following to traverse the path faster. Recently, Goldberg, Papadimitriou and Savani proved that finding the Nash equilibrium that L-H outputs is PSPACE-complete. So, in principle, finding this specific equilibrium seems harder than finding any Nash equilibrium, which is PPAD-complete.

My current plan is to write some more blog posts on bimatrix games. I wanted to discuss two things: first, homotopy methods and homotopy fixed-point theorems (which are at the heart of the two papers mentioned above), and second, new techniques for solving zero-sum games where the strategy space is exponential.

Walrasian Equilibrium I

February 15th, 2010

Lately I’ve been trying to understand more about the dynamics of markets and basic concepts of microeconomic theory, and, as always, writing a blog post will help me keep my ideas clear. First, why are markets interesting from a computer scientist’s/mathematician’s point of view?

  • Markets are multi-objective optimization problems: one can think of the state of a market as some point in a space of possible states {x = (x_1, \hdots, x_N) \in \Omega}. Each player in the market controls one variable, say {x_i}, and is interested in maximizing one objective function {f_i(x)}. So, player {i} is trying to set {x_i = \text{argmax}_{x_i} f_i(x_i, x_{-i})}.
  • Markets are a computational model: one can think of a market as a way of performing a certain computation – extracting some kind of information, as in prediction markets, stock exchanges, … If we think of it as a computational device, we ask the usual questions: given the preferences, which are functions private to each of the agents, calculate “fair” prices of items.
  • Markets are distributed systems where each part of the system has a selfish interest.

A market is composed of a set {L} of commodities, a set {I} of consumers and a set {J} of producers. Now we describe how to characterize each of them:

  • Each consumer is defined by a set {X_i \subseteq {\mathbb R}^L} of commodity bundles he is interested in (typically we take {X_i = {\mathbb R}^L_+}) and a utility function {u_i : X_i \rightarrow {\mathbb R}} expressing his interest in each bundle of commodities. Consumer {i} will try to maximize {u_i(\cdot)} over a further restricted {X_i}.
  • Each producer is defined by a set {Y_j \subset {\mathbb R}^L} of production plans it has the capacity to carry out.
  • Endowments: Each consumer comes to the market with an initial endowment {\omega_i \in {\mathbb R}^L}, so for {j \in L}, {\omega_{ij}} is the amount of commodity {j} that consumer {i} originally has. The initial total endowment of the market is given by {\overline{\omega} = \sum_{i \in I} \omega_i}, which is a vector indicating how much of each commodity originally exists in the market.
  • Shares: consumers have shares in the companies, so for {i \in I}, {j \in J}, consumer {i} has {\theta_{ij}} shares of company {j}, such that {\sum_i \theta_{ij} = 1}.

Something crucial is still missing in this picture: a way to compare commodities, and something that makes exchanges possible. The answer is to attribute prices to the items. How should we attribute prices to the items so that the market works well? A price vector is a vector {p \in {\mathbb R}^L_+}. Consider the following scenario after prices {p_\ell, \ell \in L} are established for the commodities:

  • by producing {y_j \in Y_j}, company {j \in J} gets profit {p \cdot y_j}, so each company will try to maximize its profit producing {y_j^* \in \text{argmax}_{y_j \in Y_j} p \cdot y_j}.
  • each consumer sells his initial endowment and collects the profits of the companies he owns shares of. So, consumer {i} gets {w_i = p \cdot \omega_i + \sum_{j \in J} \theta_{ij} p \cdot y_j^*}.
  • now consumer {i} uses the money he has to buy the best bundle he can afford, which is {x_i^* = \text{argmax} \{u_i(x_i); x_i \in X_i, p \cdot x_i \leq w_i\}}.

The amount of commodities in the market must be conserved, which is possible only if:

\displaystyle \sum_{i \in I} x_i^* = \sum_{i \in I} \omega_i + \sum_{j \in J} y_j^*

First, it is not clear that such a price vector exists. If it exists, is it unique? If this is an equilibrium, is it the best thing for the consumers? How can those prices be set in practice without a centralized authority? Can people lie? Below, let’s collect a couple of questions I’ll try to answer (yes, no or unknown) in this and the following posts.

Question 1: Does a price vector {p \geq 0} always exist that generates an equilibrium?

Question 2: If it exists, is it unique?

Question 3: Can we describe an efficient method to find {p}?

Question 4: Is it the best thing for the consumers in the following sense: if {(x^*,y^*,p)} is an equilibrium, are there feasible {(x,y)} such that {u_i(x_i) \geq u_i(x_i^*)} for all {i} and {u_i(x_i) > u_i(x_i^*)} for at least one consumer? (This is called a Pareto improvement.)

Question 5: A central authority could use the knowledge about the functions {u_i} and endowments {\omega_i} to calculate the price vector {p} using some method. Can consumer {i} be better off by lying about his utility and endowment?

Question 6: How do prices get defined without a central authority? Is there a dynamic/game-theoretic model for that?

For simplicity, let’s think of exchange economies, which are economies with no producers. Let’s define them formally:

Definition 1 An exchange economy is composed of a set of commodities {L} and a set of consumers {I}, each with a utility {u_i : X_i \rightarrow {\mathbb R}} and an initial endowment {\omega_i \in {\mathbb R}^L_+}.

Definition 2 A price vector {p \in {\mathbb R}^L_+} is a Walrasian equilibrium for an exchange economy {(L, I, \{X_i, u_i, \omega_i\}_{i \in I})} if there is {x_i^* \in X_i} such that:

  1. {x_i^*} s.t. {u_i(x_i^*) = \max \{u_i(x_i); x_i \in X_i, p \cdot x_i \leq p \cdot \omega_i\}}
  2. {\sum_i x_i^* \leq \sum_i \omega_i}
  3. {p \cdot (\sum_i x_i^* - \omega_i)= 0}

The first condition says that each consumer is maximizing his utility given the prices, the second says that we can’t buy more commodities than what is available in the market, and the third, called Walras’ Law, says that if there is a surplus of a certain product, it should have price zero. It is by far the most unnatural of the three, but it can be easily justified in some circumstances: say that utilities are non-satiated if for each {x_i \in X_i} and {\epsilon > 0}, there is {x'_i \in X_i}, {\Vert x_i - x'_i \Vert < \epsilon}, such that {u_i(x'_i) > u_i(x_i)}. If {u_i} is differentiable, that would mean {\nabla u_i \neq 0}; for example, a linear function {u_i(x_i) = \sum_\ell u_{i\ell} x_{i\ell}} with some {u_{i\ell} > 0}. In that case, if {\sum_i (p \cdot x_i^* - p \cdot \omega_i) < 0}, then some player has a money surplus and therefore could increase his utility within his budget, contradicting the optimality of {x_i^*}.

Now, we define for each price vector {p} the excess demand function {z_i(p) = x_i(p) - \omega_i} and {z(p) = \sum_i z_i(p)}. Under non-satiated utilities, by the last argument, we have that {p} is an equilibrium vector iff {z(p) \leq 0}. Moreover, if the {u_i} are also strongly monotone, i.e., {u_i(x_i + \xi) > u_i(x_i)} for each {\xi \geq 0}, {\xi \neq 0}, then it becomes: {p} is an equilibrium iff {z(p) = 0}, which means that the market clears:

\displaystyle \sum_i x_i^* = \sum_i \omega_i

The question that is easiest to answer is Question 4, and the answer is sometimes referred to as the First Fundamental Theorem of Welfare Economics:

Theorem 3 Given non-satiated preferences, each equilibrium {(x,p)} is Pareto optimal, i.e., there is no other feasible allocation {x'} such that {u_i(x'_i) \geq u_i(x_i)} for all {i}, with the inequality strict for at least one component.

Proof: Suppose there were such an {x'}. Since {u_i(x'_i) \geq u_i(x_i)}, we have {p \cdot x'_i \geq p \cdot \omega_i}: if we had {p \cdot x'_i < p \cdot \omega_i}, then by non-satiation we could improve the utility of {x'_i} while staying within the budget, contradicting the optimality of {x_i} for that budget. And clearly {u_i(x'_i) > u_i(x_i)} implies {p \cdot x'_i > p \cdot \omega_i}.

Summing over {i}, we get {\sum_i p \cdot x'_i > \sum_i p \cdot \omega_i}, which is a contradiction: since {x'} is feasible, {\sum_i x'_i \leq \sum_i \omega_i} and therefore {\sum_i p \cdot x'_i \leq \sum_i p \cdot \omega_i}. \Box

Now, let’s tackle Question 1. We assume linearity of the utilities: {u_i(x_i) = \sum_{\ell \in L} u_{i \ell} x_{i \ell}} with {u_{i \ell} > 0}. This gives us strong monotonicity and local non-satiation.

Theorem 4 Under linear utilities, there is always an equilibrium price vector {p}.

Proof: Consider the function {z} defined above: {z(p) = \sum_i (x_i(p) - \omega_i)}, where {x_i(p)} is the bundle of best possible utility. Since we are using linear utilities, we can’t guarantee there will be only one such bundle, so instead of considering functions, consider {x_i} and {z} as correspondences: {x_i, z: {\mathbb R}^L_+ \rightarrow \mathcal{P}({\mathbb R}^L)}, i.e., {x_i(p) \subseteq {\mathbb R}^L_+} is the set of all allocations that maximize {u_i(x_i)} subject to {p\cdot x_i \leq p \cdot \omega_i}. Since the {u_i} are linear functionals, we can calculate {x_i(p)} by a fractional knapsack algorithm: we sort commodities {\ell \in L} by {\frac{p_\ell}{u_{i\ell}}} and start buying in cost-benefit order (the commodities that provide more utility per buck spent come first). Most of the time there will be just one solution, but at points where {\frac{p_\ell}{u_{i\ell}} = \frac{p_k}{u_{i k}}}, the set {x_i(p)} might be a convex region. This correspondence is upper hemicontinuous, which is the correspondence analogue of continuity for functions. As Wikipedia defines:

Definition 5 A correspondence {\Gamma : A \rightarrow B} is said to be upper hemicontinuous at the point {a \in A} if for any open neighbourhood {V} of {\Gamma(a)} there exists a neighbourhood {U} of a such that {\Gamma(x)} is a subset of {V} for all {x} in {U}.

It is not hard to see that {z} is upper hemicontinuous according to that definition. Our goal is to prove that there is one price vector {p} for which {\overline{\omega} \in \sum_i x_i(p)}, or equivalently: {0 \in z(p)}. To prove that we use Kakutani’s Fixed Point Theorem. Before we go into that, we’ll explore some other properties of {z}:

  • 0-Homogeneous: {z(\alpha p) = z(p), \forall \alpha > 0}
  • Walras’ Law: {p \cdot z(p) = 0}. For any {\sum_i (x_i - \omega_i) \in z(p)} we know {\sum_i (p \cdot x_i - p \cdot \omega_i) \leq 0} by the definition of {x_i}. And if it were not zero, some {i} would have a money surplus, which is absurd given that preferences are strongly monotone.
  • Bounded: {z(p)} is bounded from below, i.e., {z_\ell(p) \geq -s, \forall p, \ell} for some {s > 0}. Simply take {s = \max_\ell \overline{\omega}_\ell}.
  • Boundary behavior: if {p^k \rightarrow p \neq 0} with {p_\ell = 0}, then {z_\ell(p^k) \rightarrow \infty}. That is clear from the fractional knapsack algorithm when one desirable item gets price zero.
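These properties are easy to see concretely in a toy exchange economy. Below is a sketch (consumers, numbers and function names are my own) that computes one point of the demand correspondence by the cost-benefit rule and checks Walras’ law; for this particular economy, p = (1,1) even clears the market:

```python
from fractions import Fraction as F

def demand(u, p, w):
    """One point of x_i(p) for the linear utility u . x: spend the whole
    budget p . w on a commodity maximizing u_l / p_l (under ties the
    demand is a convex set; we just pick one maximizer)."""
    budget = sum(F(pl) * wl for pl, wl in zip(p, w))
    best = max(range(len(p)), key=lambda l: F(u[l], p[l]))
    x = [F(0)] * len(p)
    x[best] = budget / p[best]
    return x

def excess(p):
    x1 = demand([2, 1], p, [1, 0])      # consumer 1's bundle
    x2 = demand([1, 2], p, [0, 1])      # consumer 2's bundle
    return [x1[l] + x2[l] - 1 for l in range(2)]   # total endowment is (1, 1)

# Walras' law p . z(p) = 0 holds at an arbitrary price vector...
z = excess([1, 3])
assert 1 * z[0] + 3 * z[1] == 0
# ...and at p = (1, 1) the market clears: z(p) = 0, so p is an equilibrium
print(excess([1, 1]))
```

The boundary behavior is visible here too: as one price tends to zero, the budget spent on that commodity buys an exploding amount of it.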

Now, we are in shape for applying Kakutani’s Fixed Point Theorem:

Theorem 6 (Kakutani, 1941) If {f:A \rightarrow \mathcal{P}(A)} is an upper hemicontinuous correspondence such that {f(a)} is a convex non-empty set for all {a \in A} then {f} has a fixed point, i.e., {\exists x \in A} s.t. {x \in f(x)}.

Since prices are {0}-homogeneous, we can restrict attention to the simplex {\Delta = \{p \geq 0; \sum_\ell p_\ell = 1\}}, its relative interior {\Delta^0 = \{p > 0; \sum_\ell p_\ell = 1\}} and the boundary {\partial \Delta = \Delta \setminus \Delta^0}. Now we define the following price-correcting correspondence {f:\Delta \rightarrow \mathcal{P}(\Delta)}.

If some price {p \in \Delta^0} is set, it generates a demand {d \in z(p)}. For that demand, the price that would maximize profit would be {q}, i.e., {q\cdot d \geq q' \cdot d} for all {q' \in \Delta}. It is natural to re-adjust prices to {q}. So we define, for {p \in \Delta^0}:

\displaystyle f(p) = \{q \in \Delta; q \text{ is a best response price to some } d \in z(p)\}

and for {p \in \partial \Delta}:

\displaystyle f(p) = \{q \in \Delta; q \cdot p = 0\}

Now, I claim that this correspondence satisfies the conditions of Kakutani’s Theorem. We skip a formal proof of this fact – it is intuitive for the interior {\Delta^0} – but let’s give the intuition for why it holds as we approach the boundary: if {\Delta^0 \ni p^k \rightarrow p \in \partial \Delta}, then {p^k_\ell \rightarrow 0} for some {\ell}, and therefore the demand explodes: {z_\ell(p^k) \rightarrow \infty}. As a result, the best thing to do is to set the prices of those commodities much higher than the rest. Therefore the best-response prices concentrate on the commodities whose demand explodes, and those are among the commodities whose price tends to zero – in the limit, {q \cdot p = 0}, which matches the definition of {f} on the boundary.

Now, after waving our hands about the upper hemicontinuity of {f}, we have by Kakutani’s Theorem a point {p \in \Delta} such that {p \in f(p)}. By the definition of {f} we must have {p \in \Delta^0} (because for {p \in \partial \Delta}, {p \notin f(p)}). Now, I claim {z(p) = 0}. In fact, if {z(p) \neq 0}, we still have {p \cdot z(p) = 0} by Walras’ Law. So, if {z(p) \neq 0}, then there is {\ell} with {z_\ell (p) < 0} and therefore {q_\ell = 0} for all {q \in f(p)}, so {f(p) \subseteq \partial \Delta} – but {p \in f(p) \cap \Delta^0}, a contradiction. Therefore {z(p) = 0} and {p} is an equilibrium price vector. \Box

In the next blog post (or series of blog posts, let’s see) we discuss issues related to the other questions: uniqueness, dynamics, game-theoretic considerations, …

Hats, codes and puzzles

October 3rd, 2009

When I was a child someone told me the following problem:

A king promised to marry his daughter to the most intelligent man. Three princes came to claim her hand, and he tried the following logic experiment with them: the princes are gathered into a room and seated in a line, one behind the other, and are shown 2 black hats and 3 white hats. They are blindfolded, and 1 hat is placed on each of their heads, with the remaining hats hidden in a different room. The first one to deduce his hat color will marry the princess. If some prince claims his hat color incorrectly, he dies.

The prince who is seated behind removes his blindfold, sees the two hats in front of him and says nothing. Then the prince in the middle removes his blindfold; he can see the hat of the prince in front of him. He also says nothing. Noticing that the other princes said nothing, the prince seated in the front, without even removing his blindfold, gives the correct answer. The question is: what color did he say?

This is a simple logical puzzle: we just write down all the possibilities and start ruling them out given that the other princes didn’t answer, and in the end we can find the color of his hat. I remember that this puzzle surprised me a lot as a kid. I found it extremely cool back then, which made me want to read books about logic problems. After that, I had a lot of fun reading the books by Raymond Smullyan. I usually would read a problem, think something like: there can’t be a solution to that. Then I’d go to school with the problem in mind and spend the day thinking about it. Here is a problem I liked a lot:

There is one prisoner and there are two doors, each with one guardian. One door leads to an exit and one door leads to death. The prisoner can choose one door to open. One guardian speaks only the truth and one guardian always lies. But you don’t know which door is which, which guardian is which, and who guards each door. You are allowed to choose one guardian and ask him one Yes/No question, and then you need to choose a door. What is the right question to ask?


But my goal is not to talk about logic puzzles, but about hat problems. There are a lot of variations of the problem above: in all of them a person is allowed to see the other hats but not his own hat, and we need to “guess” the color of our own hat. If we think carefully, we will see that this is a very general kind of problem in computer science: (i) the whole goal of learning theory is to predict one thing from a lot of other things you observe; (ii) in error-correcting codes, we should guess one bit from all the others, or from some subset of the others; (iii) in universal truthful mechanisms, we need to make a price offer to one player that depends only on all the other players’ bids. I’ll come back to this last example in a later post, since it is what made me interested in this kind of problem, but for now, let’s look at one puzzle I was told by David Malec at EC’09:

There are black and white hats and {3} people: for each of them we choose one color independently at random with probability {\frac{1}{2}}. Now, they can look at each other’s hats but not at their own hat. Then they each need to write on a piece of paper either “PASS” or one color. If all pass or if someone writes a wrong color, the whole team loses (this is a team game), and if at least one person gets the color right and no one gets it wrong, the whole team wins. Create a strategy for the team to win with {\frac{3}{4}} probability.

Winning with {\frac{1}{2}} probability is easy: one person always writes “BLACK” and the others “PASS”. A better strategy is the following: if one person sees two hats of equal color, he writes the opposite color; otherwise, he passes. It is easy to see that the team wins except in the case where all hats are the same color, which happens with {\frac{1}{4}} probability. We would like to extend this to a more general setting:

There are black and white hats and {2^k - 1} people: for each of them we choose one color independently at random with probability {\frac{1}{2}}. Now, they can look at each other’s hats but not at their own hat. Then they each need to write on a piece of paper either “PASS” or one color. If all pass or if someone writes a wrong color, the whole team loses (this is a team game), and if at least one person gets the color right and no one gets it wrong, the whole team wins. Create a strategy for the team to win with {1-\frac{1}{2^k}} probability.

It is a tricky question how to extend the above solution to that case. A detailed solution can be found here. The idea is quite ingenious, so I’ll sketch it. It involves error-correcting codes, in this case the Hamming code. Let {F = \{0,1\}} with sum and product modulo {2}. Let {w_1, \hdots, w_{2^k-1}} be the non-zero vectors of {F^k} and consider the following linear map:

\displaystyle \begin{aligned} \phi: F^{2^k-1} \rightarrow F^k \\ (a_1,\hdots, a_{2^k-1}) \mapsto \sum_i a_i w_i \end{aligned}

Let {H} be the kernel of that map. Then it is not hard to see that {H, H+e_1, \hdots, H+e_{2^k-1}} is a partition of {F^{2^k-1}}, and, because of that fact, for each {x \in F^{2^k-1}} either {x \in H} or there exists a unique {i} s.t. {x + e_i \in H}. This gives a strategy in which exactly one player guesses, and guesses his correct color, whenever {x \notin H}. Let {x \in F^{2^k-1}} be the color vector of the hats. Player {i} sees this vector as:

\displaystyle (x_1, \hdots, x_{i-1}, ?, x_{i+1}, \hdots, x_n)

which can be either {(x_1, \hdots, x_{i-1}, 0, x_{i+1}, \hdots, x_n)} or {(x_1, \hdots, x_{i-1}, 1, x_{i+1}, \hdots, x_n)}. The strategy is: if one of those vectors is in {H}, write the color corresponding to the other vector. If both are out of {H}, pass. The team wins iff {x \notin H}, which happens with probability {1 - \frac{1}{2^k}}. It is an easy and fun exercise to prove those facts. Or you can refer to the solution I just wrote.
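The whole argument is small enough to check by brute force. The sketch below (implementation details are mine) builds H for a given k, plays the strategy on every coloring, and counts wins; k = 2 recovers exactly the three-player strategy from before:

```python
from itertools import product

def hamming_hats(k):
    """Number of colorings, out of 2^(2^k - 1), on which the
    Hamming-code strategy wins."""
    n = 2 ** k - 1
    W = [w for w in product([0, 1], repeat=k) if any(w)]   # nonzero vectors of F^k
    phi = lambda a: tuple(sum(a[i] * W[i][j] for i in range(n)) % 2
                          for j in range(k))
    H = {a for a in product([0, 1], repeat=n) if phi(a) == (0,) * k}

    def guess(i, x):
        """Player i's guess given hats x (coordinate i unseen); None = pass."""
        x0 = x[:i] + (0,) + x[i + 1:]
        x1 = x[:i] + (1,) + x[i + 1:]
        if x0 in H:
            return 1          # the other completion must be the truth
        if x1 in H:
            return 0
        return None

    def team_wins(x):
        g = [guess(i, x) for i in range(n)]
        return any(v is not None for v in g) and \
               all(v is None or v == x[i] for i, v in enumerate(g))

    return sum(team_wins(x) for x in product([0, 1], repeat=n))

print(hamming_hats(2))   # 6 of 8: exactly the 3-player strategy above
print(hamming_hats(3))   # 112 of 128 = 1 - 1/2^3
```

The losing colorings are precisely the codewords of H, of which there are 2^{2^k-1-k}.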


Now, we can complicate it a bit more: we can add other colors and other distributions. But I wanted to move to a different problem: the paper Derandomization of Auctions showed me an impressive thing: we can use coding theory to derandomize algorithms. To illustrate their ideas, they propose the following problem:

Color guessing problem: there are {n} people wearing hats of {k} different colors. Each person can see everyone else’s hats but not his or her own. Each person needs to guess the color of his own hat. We want a deterministic guessing algorithm under which a {1/k} fraction of each color class is guessed correctly.

The problem is very easy if we have a source of random bits: each person guesses some color at random. It seems very complicated to do that without random bits. Surprisingly, we will solve it using a flow computation:

Let {c = (c_1, \hdots, c_n)} be an array of colors and {c_{-i}} the array with the {i}-th coordinate removed. Consider the following flow network: nodes {s} and {t} (source and sink), and nodes {v_{c_{-i}}} for each {c_{-i}}. There are {n \cdot k^{n-1}} such nodes. Consider also nodes of the form {u_{\gamma, c}}, where {\gamma} is a color ({1, \hdots, k}) and {c} is a color vector. There are {k^{n+1}} such nodes.


We have edges from {s} to {v_{c_{-i}}} for all nodes of that kind, and we have edges from {u_{\gamma, c}} to {t}. Now, if {c = (\gamma, c_{-i})}, i.e., if {c_{-i}} completed in the {i}-th coordinate with {\gamma} generates {c}, then add an edge from {v_{c_{-i}}} to {u_{\gamma, c}}.

Consider the following flow: add {1} unit of flow from {s} to {v_{c_{-i}}}, and from {v_{c_{-i}}} split that flow into pieces of size {1/k}, sending each piece to {u_{\gamma, c}} for {c = (\gamma, c_{-i})}. Then each node {u_{\gamma, c}} receives {\frac{n_\gamma(c)}{k}} flow, where {n_{\gamma}(c)} is the number of occurrences of {\gamma} in {c}. Send all that flow to {t}.

We can think of that flow as the guessing procedure: when we see {c_{-i}}, we choose the guess at random, and this way each node {u_{\gamma, c}} receives in expectation {\frac{n_\gamma(c)}{k}} guesses {\gamma}. Notice that an integral flow in that graph represents a deterministic guessing procedure: so all we need is an integral flow such that the flow from {u_{\gamma, c}} to {t} is {\lfloor n_\gamma (c) / k \rfloor}. The flow into {u_{\gamma, c}} comes from nodes of the type {v_{c_{-i}}}, and flow on such an edge means that person {i}, looking at the other hats in {c}, correctly guesses {c_i = \gamma}; saturation means this happens {\lfloor n_\gamma (c) / k \rfloor} times for each color class.

Now, define the capacities this way: all edges from {s} to {v_{c_{-i}}} and from {v_{c_{-i}}} to {u_{\gamma, c}} have capacity {1}, and the edge from {u_{\gamma, c}} to {t} has capacity {\lfloor n_\gamma (c) / k \rfloor}. There is an integral flow that saturates all edges from {u} to {t}, because of the fractional flow exhibited above together with the integrality of max-flow. So the solution gives us a deterministic guessing procedure.
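To see the construction in action, here is a self-contained sketch for n = 3 people and k = 2 colors: build the network, run a textbook Edmonds-Karp max-flow (the node encodings and the little max-flow routine are my own scaffolding, not from the paper), check that every u \to t edge is saturated, and read off the deterministic guessing procedure from the integral flow.

```python
from collections import defaultdict, deque
from itertools import product

n, k = 3, 2
colors = range(k)
cap = defaultdict(int)       # cap[(a, b)]: capacity of edge a -> b
adj = defaultdict(set)       # residual-graph adjacency

def add_edge(a, b, c):
    cap[(a, b)] = c
    adj[a].add(b); adj[b].add(a)

S, T = 's', 't'
v_nodes = {('v', i, rest) for i in range(n)
           for rest in product(colors, repeat=n - 1)}
u_nodes = {('u', g, c) for g in colors for c in product(colors, repeat=n)}
for v in v_nodes:
    add_edge(S, v, 1)
for (_, i, rest) in v_nodes:
    for g in colors:                          # complete the i-th coordinate with g
        c = rest[:i] + (g,) + rest[i:]
        add_edge(('v', i, rest), ('u', g, c), 1)
for (_, g, c) in u_nodes:
    add_edge(('u', g, c), T, c.count(g) // k)

def max_flow(s, t):
    """Edmonds-Karp: repeatedly augment along BFS paths in the residual graph."""
    flow, total = defaultdict(int), 0
    while True:
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            a = q.popleft()
            for b in adj[a]:
                if b not in parent and cap[(a, b)] - flow[(a, b)] > 0:
                    parent[b] = a
                    q.append(b)
        if t not in parent:
            return flow, total
        path, b = [], t
        while parent[b] is not None:
            path.append((parent[b], b)); b = parent[b]
        aug = min(cap[e] - flow[e] for e in path)
        for (a, b) in path:
            flow[(a, b)] += aug
            flow[(b, a)] -= aug
        total += aug

flow, total = max_flow(S, T)
demand = sum(c.count(g) // k for (_, g, c) in u_nodes)
assert total == demand       # all u -> t edges saturated: the guesses exist

# read off the deterministic procedure: player i seeing `rest` guesses g
# (players whose seen configuration carries no flow may guess arbitrarily)
guess = {(i, rest): g
         for (_, i, rest) in v_nodes for g in colors
         if flow[(('v', i, rest), ('u', g, rest[:i] + (g,) + rest[i:]))] == 1}
print(total, "saturated units;", len(guess), "fixed guesses")
```

Each unit of flow on an edge v \to u certifies one correct guess in the corresponding coloring, so saturating all u \to t edges gives exactly the per-color-class guarantee.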

In the next blog post, I’ll try to show the result in Derandomization of Auctions that relates this to competitive auctions.