## Random Spanning Trees

BigRedBits is again pleased to have Igor Gorodezky as a guest blogger, writing directly from UCLA. I leave you with his excellent post on Wilson’s algorithm.

——————————————

Igor again, with another mathematical dispatch from UCLA, where I’m spending the semester eating and breathing combinatorics as part of the 2009 program on combinatorics and its applications at IPAM. In the course of some reading related to a problem with which I’ve been occupying myself, I ran across a neat algorithmic result – Wilson’s algorithm for uniformly generating spanning trees of a graph. With Renato’s kind permission, let me once again make myself at home here at Big Red Bits and tell you all about this little gem.

The problem is straightforward, and I’ve essentially already stated it: given an undirected, connected graph $G$, we want an algorithm that outputs uniformly random spanning trees of $G$. In the early ’90s, Aldous and Broder independently discovered an algorithm for accomplishing this task. This algorithm generates a tree $T$ by, roughly speaking, performing a random walk on $G$ and adding the edge $(u,v)$ to $T$ every time that the walk steps from $u$ to $v$ and $v$ is a vertex that has not been seen before.
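To make the Aldous–Broder idea concrete, here is a minimal Python sketch (the adjacency-dict representation and the function name are my own choices, not from the original papers):

```python
import random

def aldous_broder(adj, r):
    """Sample a uniform spanning tree of the undirected, connected graph
    `adj` (a dict mapping each vertex to a list of its neighbors) by
    random walk started at `r`. Returns the tree as a set of undirected
    edges, each edge a frozenset {u, v}."""
    tree = set()
    visited = {r}
    u = r
    while len(visited) < len(adj):
        v = random.choice(adj[u])        # one step of the random walk
        if v not in visited:             # first visit to v: keep entering edge
            visited.add(v)
            tree.add(frozenset((u, v)))
        u = v
    return tree
```

Since the walk must visit every vertex, the expected running time is governed by the cover time of the graph, which is exactly what Wilson’s algorithm improves upon.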

Wilson’s algorithm (D. B. Wilson, “Generating random spanning trees more quickly than the cover time,” STOC ’96) takes a slightly different approach. Let us fix a root vertex $r$. Wilson’s algorithm can be stated as a *loop-erased random walk* on $G$ as follows.

**Algorithm 1 (Loop-erased random walk).** Maintain a tree $T$, initialized to consist of $r$ alone. While there remains a vertex $v$ not in $T$: perform a random walk starting at $v$, erasing loops as they are created, until the walk encounters a vertex in $T$, then add to $T$ the loop-erased simple path from $v$ to $T$.
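As a concrete illustration, here is a minimal Python sketch of Algorithm 1 (representation details are mine). The walk is recorded in a `parent` map; overwriting `parent[u]` whenever the walk revisits $u$ is exactly loop erasure:

```python
import random

def wilson(adj, r):
    """Wilson's algorithm on the undirected, connected graph `adj`
    (dict: vertex -> list of neighbors). Returns a dict `parent` giving,
    for each non-root vertex, its parent on the path towards root `r`,
    i.e. a directed spanning tree oriented towards r."""
    parent = {}
    in_tree = {r}
    for v in adj:                        # start vertices in any order
        if v in in_tree:
            continue
        # Random walk from v; revisiting u overwrites parent[u], which
        # implicitly erases the loop just closed.
        u = v
        while u not in in_tree:
            parent[u] = random.choice(adj[u])
            u = parent[u]
        # Retrace the loop-erased path from v and attach it to the tree.
        u = v
        while u not in in_tree:
            in_tree.add(u)
            u = parent[u]
    return parent
```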

We observe that the algorithm halts with probability 1 (its expected running time is actually polynomial, but let’s not concern ourselves with these issues here), and outputs a random directed spanning tree oriented towards $r$. It is a minor miracle that this tree is in fact sampled *uniformly* from the set of all such trees. Let us note that this offers a solution to the original problem, as sampling $r$ at random and then running the algorithm will produce a uniformly generated spanning tree of $G$.

It remains, then, to prove that the algorithm produces uniform spanning trees rooted at $r$ (by which we mean directed spanning trees oriented towards $r$). To this we dedicate the remainder of this post.

**1. A “different” algorithm**

Wilson’s proof is delightfully sneaky: we begin by stating and analyzing a seemingly different algorithm, the *cycle-popping* algorithm. We will prove that this algorithm has the desired properties, and then argue that it is equivalent to the loop-erased random walk (henceforth LERW).

The cycle-popping algorithm works as follows. Given $G$ and $r$, associate with each non-root vertex an infinite *stack* of neighbors. More formally, to each vertex $u \neq r$ we associate a stack

$$S_u = [s_{u,1}, s_{u,2}, s_{u,3}, \ldots],$$

where each $s_{u,i}$ is uniformly (and independently) sampled from the set of neighbors of $u$. Note that each stack is *not* a random walk, just a list of neighbors. We refer to the left-most element above as the *top* of $S_u$, and by *popping* the stack we mean removing this top vertex from $S_u$.

Define the *stack graph* $G_S$ to be the directed graph on the vertices of $G$ that has an edge from $u$ to $v$ if $v$ is at the top of the stack $S_u$. Clearly, if $G$ has $n$ vertices then $G_S$ is an oriented subgraph of $G$ with $n-1$ edges. The following lemma follows immediately.

**Lemma 1.** Either $G_S$ is a directed spanning tree oriented towards $r$ or it contains a directed cycle.

If there is a directed cycle $C$ in $G_S$ we may pop it by popping $S_u$ for every $u \in C$. This eliminates $C$, but of course might create other directed cycles. Without resolving this tension quite yet, let us go ahead and formally state the cycle-popping algorithm.

**Algorithm 2 (Cycle-popping algorithm).** Create a stack $S_u$ for every vertex $u \neq r$. While $G_S$ contains any directed cycles, pop a cycle from the stacks. If this process ever terminates, output $G_S$.
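To make the procedure concrete, here is a deliberately literal (and inefficient) Python sketch of Algorithm 2; the stacks are generated lazily, one entry at a time, so only the current top of each stack is ever stored (all names are my own):

```python
import random

def cycle_pop(adj, r):
    """Literal cycle-popping on the undirected, connected graph `adj`
    (dict: vertex -> neighbor list). Each non-root vertex u has an
    implicit infinite stack of uniform random neighbors; `top[u]` is the
    current top of u's stack. Pops directed cycles of the stack graph
    until none remain, then returns `top`, which is then a spanning tree
    oriented towards r (top[u] = parent of u)."""
    top = {u: random.choice(adj[u]) for u in adj if u != r}
    while True:
        # Search for a directed cycle in the stack graph u -> top[u].
        cycle = None
        for start in top:
            path, seen, u = [], set(), start
            while u != r and u not in seen:
                seen.add(u)
                path.append(u)
                u = top[u]
            if u != r:                   # the walk closed a cycle at u
                cycle = path[path.index(u):]
                break
        if cycle is None:                # no cycles left: G_S is the tree
            return top
        for u in cycle:                  # pop the cycle: fresh stack tops
            top[u] = random.choice(adj[u])
```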

Note that by the lemma, if the algorithm ever terminates then its output is a spanning tree rooted at $r$. We claim that the algorithm terminates with probability 1, and moreover generates spanning trees rooted at $r$ uniformly.

To this end, some more definitions: let us say that given a stack $S_u = [s_{u,1}, s_{u,2}, \ldots]$, the vertex $s_{u,i}$ is at *level* $i$. The level of a vertex in a stack is static, and is defined when the stack is created. That is, the level of $s_{u,i}$ does not change even if $s_{u,i}$ advances to the top of the stack as a result of the stack getting popped.

We regard the sequence of stack graphs produced by the algorithm as *leveled* stack graphs: each non-root vertex $u$ is assigned the level of the current top of its stack $S_u$. Observe that the level of $u$ in $G_S$ is (one plus) the number of times that $S_u$ has been popped. In the same way, we regard cycles encountered by the algorithm as leveled cycles, and we can regard the tree produced by the algorithm (if indeed one is produced) as a leveled tree.

The analysis of the algorithm relies on the following key lemma (Theorem 4 in Wilson’s paper), which tells us that the order in which the algorithm pops cycles is irrelevant.

**Lemma 2.** For a given set of stacks, either the cycle-popping algorithm never terminates, or there exists a unique leveled spanning tree $T$ rooted at $r$ such that the algorithm outputs $T$ irrespective of the order in which cycles are popped.

*Proof:* Fix a set of stacks $\{S_u\}$. Consider a leveled cycle $C$ that is pop-able, i.e. there exist leveled cycles $C_1, C_2, \ldots, C_k = C$ that can be popped in sequence. We claim that if the algorithm pops any cycle $C'$ not equal to $C$, then there still must exist a series of cycles that ends in $C$ and that can be popped in sequence. In other words, if $C$ is pop-able then it remains pop-able, no matter which cycles are popped, until $C$ itself is actually popped.

Let $C'$ be a cycle popped by the algorithm. If $C' = C_1$ then the claim is clearly true. Also, if $C'$ shares no vertices with $C_1, \ldots, C_k$, then the claim is true again. So assume otherwise, and let $C_i$ be the first cycle in the series to share a vertex with $C'$. Let us show that $C_i = C'$ by contradiction.

If $C_i \neq C'$, then $C_i$ and $C'$ must share a vertex $w$ that has different successors in $C_i$ and $C'$. But by definition of $C_i$, none of the cycles $C_1, \ldots, C_{i-1}$ contain $w$, and this implies that $w$ has the same level in $C_i$ and $C'$ (its stack $S_w$ has been popped the same number of times in either case). Therefore its successor in both cycles is the same, a contradiction. This proves $C_i = C'$.

Moreover, the argument above proves that $C'$ and $C_i$ are equal as *leveled* cycles (i.e. every vertex has the same level in both cycles). Hence

$$C_i, C_1, C_2, \ldots, C_{i-1}, C_{i+1}, \ldots, C_k$$

is a series of cycles that can be popped in sequence, which proves the original claim about $C$.

We conclude that given a set of stacks, either there is an infinite number of pop-able cycles, in which case there will always be an infinite number and the algorithm will never terminate, or there is a finite number of such cycles. In the latter case, every one of these cycles is eventually popped, and the algorithm produces a spanning tree $T$ rooted at $r$. The level of each non-root vertex $u$ in $T$ is given by one plus the number of popped cycles that contained $u$.

Wilson summarizes the cycle-popping algorithm thusly: “[T]he stacks uniquely define a tree together with a partially ordered set of cycles layered on top of it. The algorithm peels off these cycles to find the tree.”

**Theorem 3.** The cycle-popping algorithm terminates with probability 1, and the tree that it outputs is a uniformly sampled spanning tree rooted at $r$.

*Proof:* The first claim is easy: $G$ has a spanning tree, therefore it has a directed spanning tree oriented towards $r$. The stacks generated in the first step of the algorithm will contain such a tree, and hence the algorithm will terminate, with probability 1.

Now, consider a spanning tree $T$ rooted at $r$. We’ll abuse notation and let $T$ also denote the event that $T$ is produced by the algorithm. Similarly, given a collection $\mathcal{C}$ of leveled cycles, we will write $\mathcal{C}$ for the event that $\mathcal{C}$ is the set of leveled cycles popped by the algorithm before it terminates. Finally, let $\mathcal{C} \wedge T$ be the event that the algorithm popped the leveled cycles in $\mathcal{C}$ and terminated, with the resulting *leveled* tree being equal to $T$.

By the independence of the stack entries, we have $\Pr[\mathcal{C} \wedge T] = \Pr[\mathcal{C}] \cdot p$, where $p$ is the probability that the algorithm’s output is a leveled version of $T$, a quantity which a moment’s reflection will reveal is independent of $T$ (each non-root vertex $u$ must have its tree parent at the appropriate level of $S_u$, which happens with probability $\prod_{u \neq r} 1/\deg(u)$). Now,

$$\Pr[T] = \sum_{\mathcal{C}} \Pr[\mathcal{C} \wedge T] = p \sum_{\mathcal{C}} \Pr[\mathcal{C}],$$

which, as desired, is independent of $T$.

**2. Conclusion**

We have shown that the cycle-popping algorithm generates spanning trees rooted at $r$ uniformly. It remains to observe that the LERW algorithm is nothing more than an implementation of the cycle-popping algorithm! Instead of initially generating the (infinitely long) stacks and then looking for cycles to pop, the LERW generates stack elements as necessary via random walk (computer scientists might recognize this as the Principle of Deferred Decisions). If the LERW encounters a loop, then it has found a cycle in the stack graph induced by the stacks that the LERW has been generating. Erasing the loop is equivalent to popping this cycle. We conclude that the LERW algorithm generates spanning trees rooted at $r$ uniformly.
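As a sanity check on all of this, one can sample many trees of a small graph and verify that the empirical distribution looks uniform. The 4-cycle $C_4$ has exactly four spanning trees (delete any one edge), so each should appear about a quarter of the time; this little Python experiment (implementation details mine) does just that:

```python
import random
from collections import Counter

def wilson_tree(adj, r):
    """Wilson's loop-erased random walk, returning the sampled tree as a
    frozenset of undirected edges so that distinct trees can be tallied."""
    parent, in_tree = {}, {r}
    for v in adj:
        u = v
        while u not in in_tree:          # walk with implicit loop erasure
            parent[u] = random.choice(adj[u])
            u = parent[u]
        u = v
        while u not in in_tree:          # attach the loop-erased path
            in_tree.add(u)
            u = parent[u]
    return frozenset(frozenset((u, parent[u])) for u in parent)

# C_4: exactly four spanning trees, one per deleted edge.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
counts = Counter(wilson_tree(adj, 0) for _ in range(20000))
```

With 20,000 samples each of the four trees should be seen close to 5,000 times, with fluctuations on the order of $\sqrt{20000 \cdot \tfrac14 \cdot \tfrac34} \approx 61$.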