
Prisoners and boxes

At EC, Sean, one of my friends from UBC, told me an interesting puzzle. I liked both the problem and the solution a lot, and since the solution has a lot of interesting ideas, I felt like writing about it. EC also reminded me that puzzles are fun and that I should spend a bit more of my time solving those nice math problems. The problem goes like this:

There are 100 prisoners and 3 different rooms, say A, B and C. In the beginning they are all in room A. In room B there are 100 boxes, each one containing the name of a different prisoner. One at a time, the prisoners are brought from A to B, and in B each of them can open 50 boxes. Then they are brought to room C. (They cannot change the state of room B, so it is the same as having 100 identical copies of room B and bringing each prisoner to one of them. They cannot leave signals in the room.) If each prisoner finds his own name, they are all freed. The only thing they can do is agree on a strategy while they are all in room A and then follow that strategy. If they don't succeed, the names are randomly rearranged and they can try again the next day. Find a strategy for them so that they succeed in less than one week (in expectation).

A few things I need to draw your attention to: (i) it is purely a math puzzle, there are no word games. If something is obscure, that is my fault and not intentional; (ii) the fact that the names are placed in the boxes at random is very important. The solution doesn't work if the arrangement is adversarial; (iii) if each prisoner picks 50 boxes at random, then each prisoner has probability 1/2 of succeeding, so they take 2^{100} days in expectation, which is a lot. How do we reduce this to less than a week?
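Just to make the baseline concrete, here is a quick back-of-the-envelope check (nothing here is specific to the strategy below):

```python
# Baseline: each prisoner opens 50 of the 100 boxes uniformly at random.
# The box holding his name is among his 50 with probability 1/2, and the
# prisoners choose independently, so all 100 succeed with probability (1/2)^100.
p_all = 0.5 ** 100
print(p_all)       # ~7.9e-31
print(1 / p_all)   # expected number of days, about 2^100 ~ 1.3e30
```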

Structure of random permutations

The solution has to do with the structure of a permutation drawn at random. Consider one of the n! permutations, sampled uniformly at random. We can write each permutation as a product of disjoint cycles. For example, the permutation \sigma = \begin{pmatrix} 1&2&3&4&5&6\\4&5&6&3&2&1 \end{pmatrix} can be written as a product of two disjoint cycles: \sigma = (1,4,3,6)(2,5). In the language of boxes and prisoners: box a_1 contains the name of prisoner a_2, box a_2 contains the name of prisoner a_3, and so on, until box a_k contains the name of prisoner a_1. This is the cycle (a_1, a_2, \hdots, a_k). If all the cycles have length at most n/2, where n is the number of prisoners, then there is a simple strategy: we consider that each prisoner corresponds to one box a priori (say prisoners have numbers 1, \hdots, n and boxes also have numbers 1, \hdots, n). The permutation is defined by the function \sigma from the box number to the number of the prisoner whose name is inside that box. Now, prisoner k opens box k, reads the name of the prisoner inside, then opens the box with that number, reads the name inside that one, and continues following the cycle. Since prisoner k lies on the cycle through box k, he finds his own name within n/2 openings exactly when that cycle has length at most n/2.
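Here is a small simulation of this pointer-following strategy (just a sketch I put together, with 0-based numbering for boxes and prisoners):

```python
import random

def all_prisoners_succeed(n=100, opens=50):
    """One day: boxes[i] holds the number of a prisoner, drawn as a uniformly
    random permutation. Prisoner k starts at box k and keeps following the
    number he finds; he succeeds iff his cycle has length <= opens."""
    boxes = list(range(n))
    random.shuffle(boxes)
    for prisoner in range(n):
        current = prisoner
        for _ in range(opens):
            if boxes[current] == prisoner:
                break
            current = boxes[current]
        else:
            return False  # this prisoner never found his own name
    return True

# Estimate the per-day success probability (should be roughly 0.31 for n = 100)
trials = 10_000
wins = sum(all_prisoners_succeed() for _ in range(trials))
print(wins / trials)
```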

The probability of success is the same as the probability that a permutation drawn at random has no cycle of length greater than n/2. Let \phi_p be the probability that a random permutation has a cycle of length p, where p > n/2 (note that since p > n/2, there can be at most one such cycle). This is a simple exercise in combinatorics: there are \begin{pmatrix} n \\ p \end{pmatrix} ways of picking the p elements that will form the p-cycle, (p-1)! cycles that can be formed with those p elements, and (n-p)! permutations of the remaining n-p elements. So, we have:

\displaystyle \phi_p = \frac{1}{n!}  \begin{pmatrix} n \\ p \end{pmatrix} (n-p)! (p-1)! = \frac{1}{p}

And therefore the probability of having a cycle of length more than n/2 is \sum_{p=1+n/2}^n \frac{1}{p}. For n = 100, we get about 0.69, so the expected time before a permutation with no cycle longer than n/2 appears is \frac{1}{1-0.69} \approx 3.2 days. Much less than one week, actually!
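The arithmetic is easy to check directly; a short sketch:

```python
from fractions import Fraction

n = 100
# Probability that a random permutation of 100 elements has a cycle
# longer than 50: the sum of 1/p for p = 51, ..., 100.
p_fail = sum(Fraction(1, p) for p in range(n // 2 + 1, n + 1))
p_success = 1 - p_fail
print(float(p_fail))         # ~0.688
print(float(1 / p_success))  # expected number of days, ~3.2
```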

Structure and Randomness

The solution to this puzzle exploits the structure in randomness. A lot of work in computer science is based on looking at the structure (or expected structure) of a randomly chosen object. A very elegant kind of proof consists in defining a random process that generates objects and proving that some object exists because it occurs with positive probability. One simple yet very beautiful example is the following theorem: given any logical expression in 3-CNF form, i.e., an expression of the kind:

\displaystyle \bigwedge_i (a_{i1} \vee a_{i2} \vee a_{i3})

where each literal a_{ij} is some variable x_k or its negation \tilde x_k (and the three literals in a clause involve three distinct variables), there is an assignment of the x_i to \{0,1\} that satisfies at least 7/8 of the clauses. Even though the statement of this result has no probability in it, the proof is probabilistic. Consider a uniformly random assignment to the variables. Each clause is then satisfied with probability 1 - (1/2)^3 = 7/8. By linearity of expectation, the expected number of satisfied clauses is 7/8 of the total, and since this is the expectation, there must be at least one assignment satisfying at least 7/8 of the clauses.
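A small numerical illustration of the argument (my own sketch; the formula here is a random 3-CNF with three distinct variables per clause, matching the assumption above):

```python
import random

num_vars, num_clauses = 20, 200
# A clause is three literals; a literal is (variable index, is_negated).
clauses = [[(v, random.random() < 0.5)
            for v in random.sample(range(num_vars), 3)]
           for _ in range(num_clauses)]

def satisfied_fraction(assignment):
    """Fraction of clauses with at least one true literal under the assignment."""
    ok = sum(1 for clause in clauses
             if any(assignment[v] != neg for v, neg in clause))
    return ok / num_clauses

# Averaging over random assignments hovers around 7/8 = 0.875, so some
# assignment must achieve at least that fraction.
samples = [satisfied_fraction([random.random() < 0.5 for _ in range(num_vars)])
           for _ in range(2000)]
print(sum(samples) / len(samples))   # ~0.875
print(max(samples))                  # typically an assignment at or above 7/8
```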

This kind of proof is called the probabilistic method. There are also a lot of other cool things exploring randomness in computer science: in the design of algorithms, cryptography, complexity theory, … Recently I read a nice blog post by Lipton in which he discusses ways he would try to attack the P vs NP problem. One comment I found really interesting and exciting (but that I don't quite understand yet) is that we could, maybe, separate 3-SAT instances into "random" and "structured" instances and try to use different methods exploiting randomness or structure in each of them.


Categories: puzzles