Archive

Posts Tagged ‘algorithms’

Three interesting puzzles

September 29th, 2011

Here are three puzzles I got from three different people recently. The first came from Charles, who got it from a problem set of the Brazilian Programming Competition.

Puzzle #1: Given n and a vector of in-degrees and out-degrees \text{in}_i and \text{out}_i for i=1..n, decide whether there is a simple directed graph on n nodes with those in- and out-degrees, in time O(n \log n). By a simple directed graph I mean at most one edge between each pair (i,j), allowing self loops (i,i).

Solving this problem in time O(n^3) is easy using a max-flow computation: consider a bipartite graph with an edge (i,j) of capacity 1 between each pair, add a source connected to each node on the left side with capacity \text{in}_i, add a sink in the natural way, compute the max-flow, and check whether it equals \sum_i \text{in}_i. But it turns out we can do it much more efficiently. The solution I thought of runs in time O(n \log n), but maybe there is a linear-time solution out there. If you know of one, I am curious.
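For what it's worth, here is one way to get O(n \log n) (a sketch of my own, not necessarily the intended solution): since self loops are allowed, the question is exactly whether the out-degree and in-degree sequences are realizable as the two sides of a bipartite degree sequence, which the Gale-Ryser conditions characterize. Sorting plus prefix sums and binary search gives O(n \log n). The function name and input format below are mine:

```python
from bisect import bisect_left

def realizable(outdeg, indeg):
    # Gale-Ryser check (a sketch): with self loops allowed, the question is
    # whether (outdeg, indeg) is realizable as a bipartite degree sequence.
    if sum(outdeg) != sum(indeg):
        return False
    n = len(outdeg)
    a = sorted(outdeg, reverse=True)       # left degrees, non-increasing
    b = sorted(indeg)                      # right degrees, increasing (for bisect)
    prefix = [0]
    for x in b:
        prefix.append(prefix[-1] + x)
    partial = 0
    for k in range(1, n + 1):
        partial += a[k - 1]                # sum of the k largest out-degrees
        idx = bisect_left(b, k)            # b[0:idx] < k, b[idx:] >= k
        cap = prefix[idx] + (n - idx) * k  # sum_j min(in_j, k)
        if partial > cap:
            return False
    return True
```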

The second puzzle was given to me by Hyung-Chan An:

Puzzle #2: There is a grid of size n \times m formed by rigid bars. Some cells of the grid have a rigid bar in the diagonal, making that whole square rigid. The question is to decide, given the grid and the locations of the diagonal bars, whether the entire structure is rigid or not. By rigid I mean that it cannot be deformed.

We immediately thought of a linear algebraic formulation: look at each node and create a variable for each of the four angles around it. Now write linear equations saying that the angles around a node sum to 360, that an angle must be 90 if it lies in a braced cell, and, for the angles internal to each square, that opposite angles must be equal (since all the edges have equal length). This gives a linear system Ax = b where the variables x are the angles. We need to check whether this system admits more than one solution. We know a trivial solution, namely all angles equal to 90, so we just need to check whether the matrix A has full rank.

It turns out this problem has a much more beautiful and elegant solution, and it is totally combinatorial – it is based on verifying that a certain bipartite graph is connected. You can read more about this solution in "Bracing rectangular frameworks. I" by Bolker and Crapo (1979). A cute idea is to use the following more general linear system (which works for rigidity in any number of dimensions). Consider a rigid bar from point p_a \in \mathbb{R}^n to point p_b \in \mathbb{R}^n. If the structure is not rigid, then there is a movement it can make: let v_a and v_b be the instantaneous velocities of points a and b. If p_a(t), p_b(t) are the trajectories of points a, b, then it must hold that \Vert p_a(t) - p_b(t) \Vert = \Vert p_a(0) - p_b(0) \Vert, so taking derivatives we have:

(p_a(0) - p_b(0)) \cdot (v_a - v_b) = 0

This is a linear system in the velocities. Now our job is to check whether there are nonzero velocities, which again amounts to checking whether the matrix of the linear system has full rank. An interesting thing is that if we write down this matrix for the grid above, it turns out to be the matrix of a combinatorial problem, so we can check whether it has full rank by solving the combinatorial problem. Look at the paper for more details.
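Just to make the combinatorial test concrete, here is a small sketch of it as I understand it from Bolker and Crapo: build a bipartite graph with a node per row of cells and a node per column, add an edge for every braced cell, and declare the grid rigid iff this graph is connected. The function name and the 0-indexed (row, col) convention for `braces` are my own:

```python
from collections import deque

def grid_is_rigid(n, m, braces):
    # Bipartite graph: nodes 0..n-1 are rows, nodes n..n+m-1 are columns;
    # each braced cell (r, c) contributes an edge between row r and column c.
    adj = [[] for _ in range(n + m)]
    for r, c in braces:
        adj[r].append(n + c)
        adj[n + c].append(r)
    # BFS from node 0: the braced grid is rigid iff this graph is connected.
    seen = [False] * (n + m)
    seen[0] = True
    queue = deque([0])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                queue.append(v)
    return all(seen)
```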

The third puzzle I found on The Puzzle Toad, CMU's amazing puzzle website:

Puzzle #3: There is a game played between Arthur and Merlin. There is a table with n lamps arranged in a circle, initially some on and some off. In each timestep, Arthur writes down the positions of the lamps that are off. Then Merlin (adversarially) rotates the table, and Arthur's servant flips (on –> off, off –> on) the lamps at the positions Arthur wrote down. Notice that the servant won't be flipping the correct lamps, since Merlin rotated the table: if Arthur wrote down lamp 1 and Merlin rotated the table by 3 positions, the servant will actually be flipping lamp 4. The question is: given n and an initial configuration of the table, is there a strategy for Merlin such that Arthur never manages to turn all the lamps on?

See here for a better description and a link to the solution. For n = 2^k, no matter what Merlin does, Arthur always manages to turn on all the lamps eventually, where eventually means in O(n^2) time. The solution is a very pretty (and simple) algebraic argument. I found this problem really nice.
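For small n, one can also check who wins by brute force. The sketch below is my own and follows my reading of the rules above (Arthur always flips the positions that were off, then Merlin picks any rotation); Merlin wins iff he can keep the table away from the all-on configuration forever, which is a greatest-fixpoint computation over the 2^n states:

```python
from itertools import product

def merlin_wins(initial):
    # Brute force for small n, under my reading of the rules above.
    n = len(initial)
    all_on = (1,) * n

    def successors(state):
        off = [p for p in range(n) if state[p] == 0]      # positions Arthur writes down
        for r in range(n):                                # Merlin's possible rotations
            rotated = [state[(p - r) % n] for p in range(n)]
            for p in off:                                 # servant flips the written positions
                rotated[p] ^= 1
            yield tuple(rotated)

    # Greatest fixpoint: states from which Merlin can stay away from all-on forever.
    safe = {s for s in product((0, 1), repeat=n) if s != all_on}
    changed = True
    while changed:
        changed = False
        for s in list(safe):
            if not any(t in safe for t in successors(s)):
                safe.discard(s)
                changed = True
    return tuple(initial) in safe
```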

Categories: puzzles, theory

Bimatrix games

May 25th, 2011

This week in Israel is the Workshop on Innovations in Algorithmic Game Theory, which has been a collection of amazing talks, and I thought of blogging about something new I learned here. First, Paul Goldberg gave an amazing talk about the Lemke-Howson algorithm and homotopy methods. Later, during the poster session, Jugal Garg presented impressive work on an exact algorithm for finding Nash equilibria in rank-1 bimatrix games. Both have in common the use of homotopy, which I found quite a cool idea – it has been present in game theory for a while, but I didn't know about it. They also have to do with improving our understanding of the Lemke-Howson algorithm – a (potentially exponential time) algorithm for finding a Nash equilibrium in two-person bimatrix games that nevertheless seems to work pretty well in practice. As always, I thought that blogging about it would be a good way to understand those concepts well.

Bimatrix games and how to compute equilibria

Bimatrix games are the simplest class of games studied. Basically, a bimatrix game is a 2-player game with n strategies for player 1 and m strategies for player 2, represented by a pair of n \times m matrices (A,B). Let x \in \mathbb{R}_+^n, \Vert x \Vert_1 = 1 represent a probability distribution that player 1 assigns to his strategies and y \in \mathbb{R}_+^m, \Vert y \Vert_1 = 1 be the same for player 2. This way, the players experience utilities:

u_1(x,y) = x^t A y, \quad u_2(x,y) = x^t B y

The best understood class of these games is the one where A+B = 0, called zero-sum games. For this class, computing a Nash equilibrium is very easy and is given by the famous min-max theorem: player 1 finds x maximizing \min_j (x^t A e_j), where e_j is the j-th unit vector. Similarly, player 2 finds y maximizing \min_i (e_i^t B y). Then the pair of strategies obtained is a Nash equilibrium – and verifying that is not hard.
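The min-max computation for player 1 is a single linear program. Here is a minimal sketch (my own code, using scipy, not anything from the talks): maximize v subject to x^t A e_j \geq v for every column j, with x in the probability simplex.

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_equilibrium(A):
    # Player 1's min-max LP in the zero-sum game (A, -A): maximize v
    # subject to x^T A e_j >= v for all j, sum_i x_i = 1, x >= 0.
    n, m = A.shape
    # Variables z = (x_1, ..., x_n, v); linprog minimizes, so minimize -v.
    c = np.zeros(n + 1)
    c[-1] = -1.0
    # v - x^T A e_j <= 0 for every column j.
    A_ub = np.hstack([-A.T, np.ones((m, 1))])
    b_ub = np.zeros(m)
    # Probability simplex: sum_i x_i = 1, x_i >= 0, v free.
    A_eq = np.zeros((1, n + 1))
    A_eq[0, :n] = 1.0
    bounds = [(0, None)] * n + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                  bounds=bounds, method="highs")
    return res.x[:n], res.x[-1]     # optimal mixed strategy and game value
```

Player 2's strategy comes from the symmetric LP (or, equivalently, from the dual).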

When A+B \neq 0, the problem gets a lot more complicated. Proving that an equilibrium exists can be done using various fixed point theorems, such as Brouwer's or Kakutani's. There is a very simple exponential time algorithm for finding it, and the key observation is the following: if (x,y) is a Nash equilibrium, then:

\forall i: x_i = 0 \text{ or } \forall i': e_i^t A y \geq e_{i'}^t A y

\forall j: y_j = 0 \text{ or } \forall j': x^t B e_j \geq x^t B e_{j'}

which means that every strategy in the support is a best response. Proving that is trivial (if some strategy is in the support and is not a best response, then reducing the probability we play it is an improving deviation). Therefore, if we just guess the support of x and the support of y, we only need to find strategies with those supports satisfying the inequalities above, which can be done with a simple LP (see the sketch below). This is clearly not very efficient, since it involves solving 2^{n+m} LPs.
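Here is a sketch of this support-enumeration idea (my own code, for illustration only): for each pair of guessed supports S, T we solve a feasibility LP over (x, y, u, v), where u is player 1's best-response payoff against y and v is player 2's best-response payoff against x.

```python
import numpy as np
from itertools import chain, combinations
from scipy.optimize import linprog

def support_enumeration(A, B):
    # Exponential-time support enumeration: any feasible (x, y) is a Nash equilibrium.
    n, m = A.shape
    subsets = lambda k: chain.from_iterable(combinations(range(k), r)
                                            for r in range(1, k + 1))
    for S in subsets(n):
        for T in subsets(m):
            dim = n + m + 2                     # variables z = (x, y, u, v)
            A_eq, b_eq, A_ub, b_ub = [], [], [], []
            # x and y are probability distributions.
            r = np.zeros(dim); r[:n] = 1; A_eq.append(r); b_eq.append(1)
            r = np.zeros(dim); r[n:n + m] = 1; A_eq.append(r); b_eq.append(1)
            for i in range(n):
                r = np.zeros(dim); r[n:n + m] = A[i]; r[-2] = -1   # e_i^t A y - u
                (A_eq if i in S else A_ub).append(r)               # = u on support, <= u off it
                (b_eq if i in S else b_ub).append(0)
                if i not in S:                                     # force x_i = 0 off support
                    r = np.zeros(dim); r[i] = 1; A_eq.append(r); b_eq.append(0)
            for j in range(m):
                r = np.zeros(dim); r[:n] = B[:, j]; r[-1] = -1     # x^t B e_j - v
                (A_eq if j in T else A_ub).append(r)
                (b_eq if j in T else b_ub).append(0)
                if j not in T:                                     # force y_j = 0 off support
                    r = np.zeros(dim); r[n + j] = 1; A_eq.append(r); b_eq.append(0)
            bounds = [(0, None)] * (n + m) + [(None, None)] * 2
            res = linprog(np.zeros(dim),
                          A_ub=np.array(A_ub) if A_ub else None,
                          b_ub=b_ub if b_ub else None,
                          A_eq=np.array(A_eq), b_eq=b_eq,
                          bounds=bounds, method="highs")
            if res.success:
                return res.x[:n], res.x[n:n + m]
    return None
```

A still exponential, but much more practical, method is: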

Lemke-Howson Algorithm

A good overview of the L-H algorithm can be found in these lecture notes or, in a more detailed version, in the third chapter of the AGT book (a pdf can be found on Tim's website). Here I'll present a quick overview. The main idea is to define the best-response polytopes P and Q:

P = \{(x,v) \in \mathbb{R}^{n+1} \vert x_i \geq 0, \; x^t B e_j \leq v, \; \textbf{1}^t x = 1 \}

Q = \{(y,u) \in \mathbb{R}^{m+1} \vert e_i^t A y \leq u, \; y_j \geq 0, \; \textbf{1}^t y = 1 \}

The intuition is that a point (x,v) \in P represents the fact that the payoff of player 2 when player 1 plays x is at most v; we could rewrite this as v \geq \max_j x^t B e_j.

Each of the polytopes has m+n inequalities and one equality. Given (x,v) \in P, we define L(x) as the set of indices of the tight inequalities of P, and similarly L(y) for the tight inequalities of Q. The theorem in the previous section can be rephrased as:

(x,y) is a Nash equilibrium iff L(x) \cup L(y) = \{1, 2, \hdots, n+m\}
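To make the labels concrete, here is a small sketch (mine, not from the lecture notes) that computes L(x) and L(y) for given mixed strategies and tests the fully-labeled condition; labels 1, \hdots, n correspond to the rows and n+1, \hdots, n+m to the columns:

```python
import numpy as np

def labels(A, B, x, y, tol=1e-9):
    # Label i in L(x) when x_i = 0; label n+j in L(x) when column j is a best
    # response to x.  Label i in L(y) when row i is a best response to y;
    # label n+j in L(y) when y_j = 0.
    A, B = np.asarray(A, float), np.asarray(B, float)
    x, y = np.asarray(x, float), np.asarray(y, float)
    n, m = A.shape
    row_pay = A @ y                      # e_i^t A y for every row i
    col_pay = x @ B                      # x^t B e_j for every column j
    u, v = row_pay.max(), col_pay.max()
    Lx = ({i + 1 for i in range(n) if x[i] <= tol}
          | {n + j + 1 for j in range(m) if col_pay[j] >= v - tol})
    Ly = ({i + 1 for i in range(n) if row_pay[i] >= u - tol}
          | {n + j + 1 for j in range(m) if y[j] <= tol})
    return Lx, Ly

def is_nash(A, B, x, y):
    Lx, Ly = labels(A, B, x, y)
    n, m = np.asarray(A).shape
    return Lx | Ly == set(range(1, n + m + 1))
```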

So we need to look for points of P \times Q that are fully labeled. The way this is done in L-H is quite ingenious – it is a similar idea to the one used by the simplex method: walking through the vertices of the polytope looking for the desired vertex. In order to do that, let's define two more polytopes:

P' = \{x \in \mathbb{R}^n \vert x_i \geq 0, \; x^t B e_j \leq 1\}

Q' = \{y \in \mathbb{R}^m \vert e_i^t A y \leq 1, \; y_j \geq 0\}

Note that there is a clear bijection between P' \setminus \{0\} and P, and between Q' \setminus \{0\} and Q. This is a projective transformation, given by x \mapsto (\frac{x}{\Vert x \Vert_1}, \frac{1}{\Vert x \Vert_1}). Notice that the labels are preserved by this transformation and the vertices and edges of the polytopes are mapped almost 1-1 (except for the vertex 0). Notice a couple of details:

1. A vertex x \in P' corresponds to a point with n labels. A vertex y \in Q' corresponds to a point with m labels.

2. The point (0,0) is a fully-labeled point of P' \times Q'; unfortunately, it is the only fully-labeled point that doesn't correspond to a Nash equilibrium.

3. By following an edge of the polytope (an edge of P' is defined by n-1 labels and an edge of Q' by m-1 labels), we can move between two vertices that are almost fully labeled.

The idea is to consider the set of vertices that are almost fully labeled: fix some label k and consider all the vertices such that L(x) \cup L(y) \supseteq \{1, \hdots, k-1, k+1, \hdots, n+m\}. Those points are either (0,0), a Nash equilibrium, or they have one duplicated label, since \vert L(x) \vert + \vert L(y) \vert = n+m. An edge is composed of n+m-1 labels (with no duplicated labels), so an almost fully-labeled point that is neither a Nash equilibrium nor (0,0) is connected to exactly two other vertices via edges (which correspond to dropping one of the two copies of the duplicated label and following the corresponding edge).

This gives us a picture of the topology of the space of Nash equilibria and tells us a couple of facts: (i) we can find a Nash equilibrium by starting at (0,0) and following the L-H path; at the end of the path, we must find a Nash equilibrium. (ii) If the polytope is non-degenerate, then there is an odd number of Nash equilibria and they are connected by L-H edges in the following way:

where the blue dots are the Nash equilibria, the white dots are the almost fully-labeled points, and the edges are the L-H edges. The number of Nash equilibria is odd by a simple parity argument.

The classic way in which simplex-like methods walk through the vertices of a polytope is by pivoting, and Lemke-Howson can be implemented the same way. For an explanation of the implementation, see chapter 3 of the AGT book.

One can ask whether we could modify the L-H path-following to traverse the path faster. Recently, Goldberg, Papadimitriou and Savani proved that finding the Nash equilibrium that L-H outputs is PSPACE-complete. So, in principle, finding this specific equilibrium seems harder than finding any Nash equilibrium, which is PPAD-complete.

My current plan is to write some other blog posts on bimatrix games. I want to discuss two things: first, homotopy methods and homotopy fixed-point theorems (which are at the heart of the two papers mentioned above), and second, new techniques for solving zero-sum games where the strategy space is exponential.

Submodular Allocation Problem

April 6th, 2011

I am in Israel for the Algorithmic Game Theory Semester at the Center for the Study of Rationality. It is great to both explore Jerusalem and learn about games and algorithms. I think it is a great opportunity to start blogging again. To start, I decided to write about a simple and beautiful algorithm by Lehmann, Lehmann and Nisan for the allocation problem when players have submodular valuations.

Consider a set of m items and n agents. Each agent has a monotone submodular valuation v_i over the items, i.e., v_i : 2^{[m]} \rightarrow \mathbb{R} such that v_i(S) + v_i(T) \geq v_i(S \cap T) + v_i(S \cup T) for all subsets S,T of [m], and v_i(S) \leq v_i(T) for S \subseteq T. The goal is to partition the items into n sets S_1, \hdots, S_n in order to maximize \sum_i v_i(S_i).

This problem is clearly NP-hard (for example, we can reduce from Maximum Coverage or a similar problem), but it has a very simple greedy approximation (a code sketch is given after the proof below). The algorithm goes as follows: start with all sets empty, i.e., S_i = \emptyset for all i; then, for each item j, find the player i with maximum marginal value v_i(j \vert S_i) = v_i(S_i \cup \{j\}) - v_i(S_i) and add j to that player's set. This is a 2-approximation algorithm. The proof is simple:

Let S_1, \hdots, S_n be the sets returned by the algorithm and O_1, \hdots, O_n the optimal solution. Let also S_i^{<j} = S_i \cap \{1, 2, \hdots, j-1\} and O_i^{<j} = O_i \cap \{1, 2, \hdots, j-1\}. We can write:

\sum_i v_i(S_i) = \sum_i \sum_{j \in S_i} v_i(j \vert S_i^{<j})

If we added j to set S_i, it means that v_i(j \vert S_i^{<j}) \geq v_k(j \vert S_k^{<j}) for every agent k, by the greedy rule. Therefore we can write:

\sum_i v_i(S_i) \geq \sum_k \sum_{j \in O_k} v_k(j \vert S_k^{<j}) \geq \sum_k \sum_{j \in O_k} v_k(j \vert S_k^{<j} \cup O_k^{<j})

where the first inequality follows from the Greedy rule and the second follows from submodularity. Now, we can simply write:

\begin{aligned} \sum_i v_i(S_i) & \geq \frac{1}{2} \sum_i \sum_{j \in S_i} v_i(j \vert S_i^{<j}) + \frac{1}{2} \sum_k \sum_{j \in O_k}  v_k(j \vert S_k^{<j} \cup O_k^{<j}) \\ & \geq \frac{1}{2} \sum_i \sum_{j \in S_i} v_i(j \vert S_i^{<j}  \cup O_i^{<j}) + \frac{1}{2} \sum_k \sum_{j \in O_k}  v_k(j \vert  S_k^{<j} \cup O_k^{<j}) \\ & \geq \frac{1}{2} \sum_i \sum_{j \in S_i \cup O_i} v_i(j \vert S_i^{<j} \cup O_i^{<j}) \\ & = \frac{1}{2} \sum_i v_i(S_i \cup O_i) \geq \frac{1}{2} \sum_i v_i(O_i) \end{aligned}
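As promised, here is a small code sketch of the greedy rule (my own; valuations are passed as plain Python set functions, assumed monotone submodular, and the tiny example uses coverage valuations, which are monotone submodular):

```python
def greedy_allocation(m, valuations):
    # Greedy 1/2-approximation for submodular welfare: give each item to the
    # agent with the largest marginal value, given that agent's current bundle.
    bundles = [set() for _ in valuations]
    for j in range(m):
        marginals = [v(S | {j}) - v(S) for v, S in zip(valuations, bundles)]
        winner = max(range(len(valuations)), key=lambda i: marginals[i])
        bundles[winner].add(j)
    return bundles

# Tiny example: cov[j] is the set of universe elements item j covers for that agent.
cov1 = {0: {"a", "b"}, 1: {"b"}, 2: {"c"}}
cov2 = {0: {"a"}, 1: {"b", "c"}, 2: {"c", "d"}}
v1 = lambda S: len(set().union(*[cov1[j] for j in S]))
v2 = lambda S: len(set().union(*[cov2[j] for j in S]))
print(greedy_allocation(3, [v1, v2]))
```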

An improved algorithm was given by Dobzinski and Schapira, achieving a 1 - \frac{1}{e} approximation using demand queries – which are used as a separation oracle for a suitable linear program.

Categories: theory