
Posts Tagged ‘approximation algorithms’

Bounded Degree Spanning Tree and an Uncrossing Lemma

November 17th, 2009

I’ve been reading about the Bounded Degree Spanning Tree problem and I thought of writing some of what I am learning here. It illustrates a beautiful technique called Iterated Rounding and uses the combinatorial idea of uncrossing. I’ll try to give a high-level idea of the argument and give references on the details. The first result of this kind (although there were previous results with weaker guarantees) was given by Goemans in Minimum Bounded Degree Spanning Trees, but the result based on iterated rounding and a subsequent improvement are due to Singh and Lau in a series of papers. A main reference is Approximating minimum bounded degree spanning trees to within one of optimal.

The bounded degree spanning tree problem is as follows: consider a graph {G = (V,E)} with edge weights and, for some nodes {W \subseteq V}, a degree bound {b_v, v \in W}. We want to find, among the spanning trees in which each {v \in W} has degree {\leq b_v}, the one of minimum cost. This is clearly a hard problem: taking all weights equal to {1} and {b_v = 2} for all nodes gives the Hamiltonian Path problem, which is NP-complete. So we will settle for a different kind of approximation. Let OPT be the cost of the optimal solution: we will show an algorithm that produces a spanning tree of cost {\leq OPT} in which each node {v} has degree {\leq b_v + 2} (this can be improved to {b_v + 1} with a more sophisticated algorithm, also based on Iterated Rounding).

As always, the first step in designing an approximation algorithm is to relax the problem to an LP. We consider the following LP:

\displaystyle \left. \begin{aligned} & \min \sum_{e\in E} c_e x_e \text{ s.t. } \\ & \qquad \left\lbrace \begin{aligned} & \sum_{e \in E} x_e = \vert V \vert - 1 \\ & \sum_{e \in E(S)} x_e \leq \vert S \vert - 1 & \forall S \subseteq V, \vert S \vert \geq 2 \\ & \sum_{e \in \delta(v)} x_e \leq b_v & \forall v \in W\\ & x_e \geq 0 & \forall e \in E \end{aligned} \right. \end{aligned} \right.

The first constraint expresses that a spanning tree has exactly {\vert V \vert - 1} edges, the second prevents the formation of cycles and the third enforces the degree bounds. For {W = \emptyset} we have the standard Minimum Spanning Tree problem, and for that problem the polytope is integral. With the degree bounds, we lose this nice property. We can still solve this LP using the Ellipsoid Method: the separation oracle for the constraints {\sum_{e \in E(S)} x_e \leq \vert S \vert - 1} can be implemented with a flow (min-cut) computation.
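To make the formulation concrete, here is a minimal sketch that solves this LP for toy instances by writing out all subtour constraints explicitly (there are exponentially many, so this only scales to small graphs; a real implementation would use the ellipsoid method with the min-cut separation oracle just mentioned). The function name and the instance encoding are mine, not from any reference implementation:

```python
# Sketch: the bounded-degree spanning tree LP with every subtour
# constraint enumerated explicitly. Only viable for small |V|.
from itertools import combinations
from scipy.optimize import linprog

def solve_bdst_lp(V, edges, cost, b):
    """V: list of nodes; edges: list of frozensets {u, v};
    cost: dict edge -> c_e; b: dict node -> degree bound b_v."""
    m = len(edges)
    A_ub, b_ub = [], []
    # sum_{e in E(S)} x_e <= |S| - 1 for every S with |S| >= 2
    for size in range(2, len(V) + 1):
        for S in combinations(V, size):
            S = frozenset(S)
            A_ub.append([1.0 if e <= S else 0.0 for e in edges])
            b_ub.append(len(S) - 1)
    # sum_{e in delta(v)} x_e <= b_v for every degree-bounded node v
    for v, bound in b.items():
        A_ub.append([1.0 if v in e else 0.0 for e in edges])
        b_ub.append(bound)
    # sum_e x_e = |V| - 1; the variable bounds give x_e >= 0
    res = linprog([cost[e] for e in edges], A_ub=A_ub, b_ub=b_ub,
                  A_eq=[[1.0] * m], b_eq=[len(V) - 1],
                  bounds=[(0, None)] * m)
    return dict(zip(edges, res.x))
```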

Iterated Rounding

Now, let’s go ahead and solve the LP. It would be great if we had an integral solution: we would be done. Unfortunately that is not the case, but we can still hope the solution is almost integral in some sense: for example, some variables are integral, so we can fix those edges in the final solution and recurse on a smaller graph. This is not far from the truth, and that’s the main idea of iterated rounding. We will show that the support of the optimal solution {E(x) = \{e \in E; x_e > 0\}} has some nice structure. Consider the following lemma:

Lemma 1 For any basic solution {x} of the LP, either there is a {v} with just one incident edge in the support {E(x)}, or there is a {v \in W} with at most {3} edges of the support incident to it.

If we can prove this lemma, we can solve the problem in the following way: we begin with an empty tree; then we solve the LP and look at the support {E(x)}. There are two possibilities according to the lemma:

  • If there is a node {v} with just one edge {e = (u,v)} incident to it in the support, we add {e} to the tree, remove {v} from {V}, decrease {b_u} by one (if {u \in W}), set {E = E(x)} (the trick is to remove, in each iteration, the edges of {E} that are not in the support; clearly, removing those edges doesn’t hurt the objective value) and run the algorithm again. Notice that the LP solved in the recursion has value at most the value of the current LP minus {c_e}. So if by induction we get a spanning tree that respects the new degree bounds plus two and has value at most the new LP value, we can just add {e} and obtain a solution of value at most the value of the original LP, respecting the degree bounds plus two.
  • Otherwise, there is a node {v \in W} with at most {3} edges incident to it in the support. In that case, we simply remove the degree bound on that vertex (i.e. remove {v} from {W}), set {E = E(x)} (again, eliminating the edges not in the support) and run the algorithm again. Note that any node still in {W} has {b_v \geq 1}; since only three support edges are incident to {v} when its bound is dropped, at most three edges will be incident to it for the rest of the computation, so its final degree is at most {3 \leq b_v + 2}, i.e. it exceeds its original bound {b_v} by at most {2}. (A sketch of the whole loop appears right after this list.)
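Here is how the loop just described might look in code. This is a rough sketch and not the authors’ implementation: it assumes a black-box solve_lp that returns a basic optimal solution of the LP (for instance, a vertex solution of the toy LP above), and all the names are mine:

```python
# Sketch of the iterated rounding loop. solve_lp(V, E, cost, b) is
# assumed to return a *basic* optimal solution as a dict edge -> value.
EPS = 1e-9  # tolerance for "x_e > 0" under floating point

def bounded_degree_spanning_tree(V, E, cost, b, solve_lp):
    V, E, b, tree = set(V), set(E), dict(b), set()
    while E:
        x = solve_lp(V, E, cost, b)
        E = {e for e in E if x[e] > EPS}      # keep only the support
        deg = {v: sum(v in e for e in E) for v in V}
        leaf = next((v for v in V if deg[v] == 1), None)
        if leaf is not None:                  # first case of the lemma
            e = next(e for e in E if leaf in e)
            tree.add(e)
            (u,) = e - {leaf}
            if u in b:
                b[u] -= 1                     # e uses one slot at u
            V.remove(leaf)
            E.remove(e)
            b.pop(leaf, None)
        else:                                 # second case of the lemma
            v = next(v for v in b if deg[v] <= 3)  # exists by Lemma 1
            del b[v]                          # drop its degree bound
    return tree
```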

The algorithm eventually stops, since in each iteration we have fewer edges or fewer nodes in {W}, and the solution is as desired. The main effort is therefore to prove the lemma. But before that, notice what kind of statement it is: “any basic solution of the LP has a support that is, at least somewhere, not too large”. So it involves bounding the size of the support, and that is our next task. We will then be done, having proved:

Theorem 2 The algorithm described above produces a spanning tree of cost {\leq Z^*} (the LP value, and therefore {\leq OPT}) in which each node {v \in W} has degree {\leq b_v + 2}.

Bounding the size of the support

We would now like to prove a result like the Lemma above: that in a basic solution of the LP there is either a {v} with degree {1} in {E(x)} or a node of {W} with degree {\leq 3}. Suppose the opposite: {(V,E(x))} has all nodes with degree {\geq 2} and all nodes of {W} with degree {\geq 4}. This implies that the support has many edges: from the degrees, we know that:

\displaystyle \vert E(x) \vert \geq \frac{1}{2} \left( 2( \vert V \vert - \vert W \vert ) + 4 \vert W \vert \right) = \vert V \vert + \vert W \vert

We want to prove that the support {E(x)} of a basic solution can’t be too large. The first question is: how do we estimate the size of the support of a basic solution? The constraints look like this:

[Figure: the LP constraint matrix, whose rows are of the three types described below]

A basic solution can be represented by picking {\vert E \vert} rows of the matrix and making them tight. So, if we have a general {Ax \leq b} LP, we pick a submatrix {A'} of {A} which is {n \times n} and the basic solution is just {x = A'^{-1} b'}. The rows of the matrix {A'} can be of three types: {\chi_S}, corresponding to {\sum_{e \in E(S)} x_e \leq \vert S \vert - 1}; {\chi_v}, corresponding to {\sum_{e \in \delta(v)} x_e \leq b_v}; or {\delta_e}, corresponding to {x_e \geq 0}. There are {\vert E \vert} rows in total. Since each row {\delta_e} forces {x_e = 0}, the size of the support {E(x)} is at most the number of rows of the form {\chi_S, \chi_v} in the basic solution. Therefore, the idea for bounding the size of the support is to prove that “all basic solutions can be represented by a small number of rows of the form {\chi_S, \chi_v}”. And this is done using the following:

Lemma 3 Assuming {E = E(x)}, for any basic solution {x} there are {Z \subseteq W} and a family {\mathcal{S}} of sets such that:

  1. The constraints corresponding to {S \in \mathcal{S}} and {v \in Z} are tight for {x}
  2. {\{ \chi_S; S \in \mathcal{S} \} \cup \{ \chi_v; v \in Z \}} is a set of linearly independent vectors
  3. {\vert \mathcal{S} \vert + \vert Z \vert = \vert E(x) \vert}
  4. {\mathcal{S}} is a laminar family

The first 3 items are straightforward properties of basic solutions. The fourth one means that for any two sets {S_1, S_2 \in \mathcal{S}}, one of three things happens: {S_1 \subseteq S_2}, {S_2 \subseteq S_1} or {S_1 \cap S_2 = \emptyset}. Now, based on the previous lemma and on the following result, which can easily be proved by induction, we will prove Lemma 1.

Lemma 4 If {\mathcal{S}} is a laminar family over the set {V} where each set {S \in \mathcal{S}} contains at least {2} elements, then {\vert \mathcal{S} \vert \leq \vert V \vert - 1}.

Now the proof of Lemma 1 is easy; let’s do it, and then come back to prove Lemma 3. Simply note that {\vert E(x) \vert = \vert \mathcal{S} \vert + \vert Z \vert \leq (\vert V \vert - 1) + \vert W \vert}, which contradicts {\vert E(x) \vert \geq \vert V \vert + \vert W \vert}.

Uncrossing argument

And now we arrive at the technical heart of the proof: proving Lemma 3. It says that any basic solution can be represented in a “structured” way. We start with an arbitrary basic feasible solution; it already satisfies (1)-(3), so we need to change its representation to also satisfy condition (4), i.e., to get rid of crossing elements {S,T \in \mathcal{S}} of the form:

[Figure: two crossing sets S and T]

We do that by means of the:

Lemma 5 (Uncrossing Lemma) If {S} and {T} are intersecting and tight (tight in the sense that their respective constraint is tight), then {S \cup T} and {S \cap T} are also tight and:

\displaystyle \chi_{S \cap T} + \chi_{S \cup T} = \chi_S + \chi_T

which corresponds to the following picture:

[Figure: uncrossing S and T into S ∩ T and S ∪ T]

Proof: First, we note that {x(E(S))} is a supermodular function, i.e.:

\displaystyle x(E(S)) + x(E(T)) \leq x(E(S \cap T)) + x(E(S \cup T))

We can see that by case analysis: every edge appearing on the left side appears on the right side with at least the same multiplicity. Notice also that the inequality is strict iff some edge of positive weight goes from {S\setminus T} to {T \setminus S}. Now, we have:

\displaystyle \begin{aligned} (\vert S \vert - 1) + (\vert T \vert - 1) & = (\vert S \cap T \vert - 1) + (\vert S \cup T \vert - 1) \geq \\ & \geq x(E(S \cap T)) + x(E(S \cup T)) \geq \\ & \geq x(E(S)) + x(E(T)) = (\vert S \vert - 1) + (\vert T \vert - 1) \end{aligned}

where the first relation is trivial, the second is by feasibility, the third is by supermodularity and the last one is by tightness. So all of them hold with equality, and therefore {S \cap T} and {S \cup T} are tight. We also proved that:

\displaystyle x(E(S \cap T)) + x(E(S \cup T)) = x(E(S)) + x(E(T))

so there can be no edge from {S\setminus T} to {T \setminus S} in {E(x)} and therefore, thinking just of edges in {E(x)} we have:

\displaystyle \chi_{S \cap T} + \chi_{S \cup T} = \chi_S + \chi_T

\Box
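As an aside, both facts used in this proof are easy to sanity-check numerically. The snippet below is my own illustration, not part of the argument: it verifies supermodularity on a random weighted complete graph, and then checks the uncrossing identity after zeroing out the edges between {S \setminus T} and {T \setminus S}:

```python
# Numeric sanity check of the two facts used in the Uncrossing Lemma.
from itertools import combinations
import random

def xE(S, x):
    """x(E(S)): total weight of the edges with both endpoints inside S."""
    return sum(w for e, w in x.items() if e <= S)

def chi(S, support):
    """Characteristic vector of E(S) over the support edges."""
    return [int(e <= S) for e in sorted(support, key=sorted)]

random.seed(0)
x = {frozenset(e): random.random() for e in combinations(range(6), 2)}
S, T = frozenset({0, 1, 2, 3}), frozenset({2, 3, 4, 5})

# supermodularity: x(E(S)) + x(E(T)) <= x(E(S&T)) + x(E(S|T))
assert xE(S, x) + xE(T, x) <= xE(S & T, x) + xE(S | T, x) + 1e-12

# with no support edge between S\T and T\S, the identity holds exactly
support = {e for e in x if not (e & (S - T) and e & (T - S))}
lhs = [a + b for a, b in zip(chi(S & T, support), chi(S | T, support))]
rhs = [a + b for a, b in zip(chi(S, support), chi(T, support))]
assert lhs == rhs
```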

Uncrossing arguments are found everywhere in combinatorics. Now we show how the Uncrossing Lemma can be used to prove Lemma 3:

Proof: Let {x} be any basic solution. It can be represented by a pair {(Z, \mathcal{C})} where {Z \subseteq W} and {\mathcal{C}} is a family of tight sets. We will show that the same basic solution can be represented by {(Z, \mathcal{L})}, where {\mathcal{L}} is a laminar family of the same size as {\mathcal{C}}.

Let {\mathcal{S}} be the family of all sets that are tight for {x} and {\mathcal{L}} a maximal laminar family of tight sets in {\mathcal{S}} such that {\{\chi_S; S \in \mathcal{L} \} \cup \{\chi_v; v \in Z \}} is independent. I claim that {\vert \mathcal{L} \vert = dim(span(\mathcal{S}))}.

In fact, suppose {\vert \mathcal{L} \vert < dim(span(\mathcal{S}))}; then there is a set of {\mathcal{S}} we could add to {\mathcal{L}} without violating independence – the problem is that it would cross some set of {\mathcal{L}}. Pick such an {S \in \mathcal{S}} crossing as few sets of {\mathcal{L}} as possible, and let {T \in \mathcal{L}} be a set it crosses. Since both are tight, we can use the Uncrossing Lemma and get:

\displaystyle \chi_{S \cap T} + \chi_{S \cup T} = \chi_S + \chi_T

Since {\chi_S \notin span(\mathcal{L})}, we can’t have both {\chi_{S \cap T}} and {\chi_{S \cup T}} in {span(\mathcal{L})}. Let’s consider two cases:

  1. {\chi_{S \cap T} \notin span(\mathcal{L})}: then {S \cap T} is a tight set whose vector lies in {span(\mathcal{S}) \setminus span(\mathcal{L})}, and it crosses fewer sets of {\mathcal{L}} than {S}, since every set of {\mathcal{L}} that crosses {S \cap T} must also cross {S} (no set of {\mathcal{L}} can cross {T}). [Figure: illustration of this case]
  2. {\chi_{S \cup T} \notin span(\mathcal{L})}: then {S \cup T} is a tight set whose vector lies in {span(\mathcal{S}) \setminus span(\mathcal{L})}, and it crosses fewer sets of {\mathcal{L}} than {S}, since every set of {\mathcal{L}} that crosses {S \cup T} must also cross {S}.

In either case we have a contradiction, so we proved that {\vert \mathcal{L} \vert = dim(span(\mathcal{S}))}: the whole space of tight constraints can be generated by a laminar family. \Box

And this finishes the proof. Let’s review what we’ve done: we started with an LP and wanted to prove that the support of any basic solution is not too large, because that implies there is a node with degree one in the support, or a node of {W} with small ({\leq 3}) degree. To bound the size of the support, we showed that any basic solution has a representation in terms of a laminar family, and then used the fact that laminar families can’t be very large. For that, we used the celebrated Uncrossing Lemma.

Note: Most of this is based on my notes from David Williamson’s Approximation Algorithms class. I spent some time thinking about this algorithm and therefore decided to post it here.

Minimum average cost cycle and TSP

September 2nd, 2009

After some time, I did Code Jam – well, this is actually my first Code Jam, but it has been a while since I last did programming competitions. Back in my undergrad, I remember all the fun I had with my ACM-ICPC team solving problems and discussing algorithms. Actually, ICPC was what first got me interested in Algorithms and Theory of Computing. I was reminded of that not only because of Code Jam, but also because I came across a nice problem whose solution uses a technique I learned in programming competitions.

Let’s formulate the problem in a more abstract way: given a graph {G = (V,E)} and two functions, a cost function {c:E \rightarrow {\mathbb R}_+} and a benefit function {b:E \rightarrow {\mathbb R}_+}, we define the cost-benefit ratio of a set {S} of edges as {\frac{b(S)}{c(S)}}. Now, consider those two questions:

Question I: Find the spanning tree of maximum (minimum) cost-benefit.

Question II: Find the cycle of maximum (minimum) cost-benefit.

The solution to both uses binary search, reducing them to the following query: given {\beta > 0}, is there a cycle (spanning tree) of cost-benefit ratio smaller (larger) than {\beta}? We either state there is no such cycle (tree) or exhibit one. How can we answer the query? It is simple: consider the graph {G} with edge weights given by {b_e - \beta c_e}. Then there is a cycle (spanning tree) of cost-benefit ratio {< \beta} if and only if the graph with the transformed weights has a cycle (spanning tree) of negative total weight. Finding a cycle of negative weight can be done, for example, with the Bellman-Ford algorithm. Finding a spanning tree of negative total weight amounts to computing a minimum spanning tree with any of the classic algorithms, such as Kruskal’s, Prim’s or Borůvka’s.

Taking {c(e) = 1} for all {e \in E}, we can find, using binary search, the cycle of smallest average length, i.e., smallest {b(C) / \vert C \vert}, where {\vert C \vert} is the number of edges in the cycle.
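In code, the whole technique fits in a few lines. Below is a minimal sketch for this special case ({c(e) = 1}, directed edges), using Bellman-Ford for the negative cycle query; the function names and the floating-point tolerance are my own choices:

```python
# Sketch: minimum average cost cycle via binary search on beta.
# A cycle of average cost < beta exists iff the graph with weights
# w - beta contains a negative cycle, which Bellman-Ford detects.

def has_negative_cycle(n, edges, beta):
    """edges: list of (u, v, w) with 0 <= u, v < n."""
    dist = [0.0] * n          # implicit super-source at distance 0
    for _ in range(n):
        updated = False
        for u, v, w in edges:
            if dist[u] + w - beta < dist[v] - 1e-12:
                dist[v] = dist[u] + w - beta
                updated = True
        if not updated:
            return False      # converged: no negative cycle
    return True               # still relaxing after n passes

def min_mean_cycle(n, edges, iters=60):
    """Approximates min over cycles C of b(C)/|C| by bisection."""
    lo = min(w for _, _, w in edges)
    hi = max(w for _, _, w in edges)
    for _ in range(iters):
        mid = (lo + hi) / 2
        if has_negative_cycle(n, edges, mid):
            hi = mid          # some cycle has average cost < mid
        else:
            lo = mid
    return hi
```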

Asymmetric Travelling Salesman Problem

We can use the trick just described to design an {O(\log n)}-approximation for the asymmetric TSP. Suppose we have {n} nodes in {V} and a function {c: V \times V \rightarrow {\mathbb R}_+}, not necessarily symmetric, satisfying the triangle inequality {c(i,j) \leq c(i,k) + c(k,j)}. A TSP tour is an ordering {\pi: \{1, \hdots, n\} \rightarrow V} and has total cost:

\displaystyle c(\pi) = \sum_{j=1}^n c(\pi_j, \pi_{j+1})

where {\pi_{n+1} = \pi_1}. Let OPT be the cost of the optimal tour. Computing it is NP-hard, but consider the following approximation algorithm: find the cycle of smallest average cost and remove all the nodes in that cycle except one; in the remaining graph, again find the cycle of smallest average cost and remove all its nodes except one; continue until just one node is left. Taking all those cycles together, we have a strongly connected Eulerian graph (in-degree equals out-degree at every node). I claim that the total weight of the edges {E'} of this Eulerian graph is:

\displaystyle c(E') \leq 2 \mathcal{H}_n \cdot OPT

where {\mathcal{H}_n = \sum_{j=1}^n \frac{1}{j} = O(\log n)} is the harmonic number. Given this graph, we can find an Eulerian tour and transform it into a TSP tour by shortcutting when necessary (the triangle inequality guarantees that shortcutting doesn’t increase the cost of the tour). So we just need to prove the claim.
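Before the proof, here is how the loop could be implemented. This sketch of mine assumes a routine min_mean_cycle_nodes that returns the node sequence of a minimum average cost cycle (the Bellman-Ford approach above can be extended to recover the cycle itself):

```python
# Sketch of the O(log n)-approximation for asymmetric TSP.
# c: dict (u, v) -> cost, satisfying the triangle inequality;
# min_mean_cycle_nodes(nodes, c): node list of a min average cost cycle.

def atsp_log_approx(nodes, c, min_mean_cycle_nodes):
    nodes = set(nodes)
    eulerian = []                        # union of the chosen cycles
    while len(nodes) > 1:
        cyc = min_mean_cycle_nodes(nodes, c)
        eulerian += list(zip(cyc, cyc[1:] + cyc[:1]))
        nodes -= set(cyc[1:])            # keep one representative node
    # eulerian has in-degree = out-degree everywhere and is connected;
    # an Euler tour of it, shortcut to visit each node once, gives a
    # TSP tour of no larger cost, by the triangle inequality.
    return eulerian
```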

In fact, it is not hard to see that after removing some nodes, the cost of the optimal tour on the remaining nodes is still {\leq OPT}, where {OPT} is the cost of the optimal tour on all nodes. To see this, just take the original tour and shortcut it: if the original tour passed through a sequence of nodes {i_1, \hdots, i_p} but the nodes {i_2, \hdots, i_{p-1}} were removed, then by the triangle inequality:

\displaystyle c(i_1, i_p) \leq \sum_{j=1}^{p-1} c(i_j, i_{j+1})

so we can just substitute the edges {(i_1, i_2), \hdots, (i_{p-1}, i_p)} by {(i_1, i_p)}. Now, suppose we do {k} iterations and at the beginning of the {j^{th}} iteration there are {n_j} nodes left. The optimal tour on those nodes is a cycle with {n_j} edges and cost {\leq OPT}, so the average cost of the cycle we pick in the algorithm is {\leq \frac{OPT}{n_j}}. Therefore, if {C_1, \hdots, C_k} are the cycles chosen (and {C_j} has {n_j - n_{j+1} + 1} edges, since all of its nodes but one are removed), we have:

\displaystyle c(E') = \sum_{j=1}^k c(C_j) \leq \sum_{j=1}^k \frac{OPT}{n_j} (n_j - n_{j+1} + 1)

since:

\displaystyle \frac{n_j - n_{j+1} + 1}{n_j} \leq \frac{1}{n_j} + \frac{1}{n_j - 1} + \hdots + \frac{1}{n_{j+1}}

Plugging those two expressions together, we get:

\displaystyle c(E') \leq \sum_{j=1}^k \left( \frac{1}{n_j} + \frac{1}{n_j - 1} + \hdots + \frac{1}{n_{j+1}} \right) OPT \leq 2 \mathcal{H}_n \cdot OPT

where the factor {2} appears because {\frac{1}{n_{j+1}}} occurs in both the {j^{th}} and the {(j+1)^{th}} terms, so each {\frac{1}{i}} is counted at most twice. This proves the claim.

Consistent Labeling

August 3rd, 2009

This week, Igor and I are presenting the paper “Metric clustering via consistent labeling” by Krauthgamer and Roughgarden in our Algorithms Reading Group. To prepare for the presentation, I thought writing a blog post about it was a nice idea. Consistent Labeling is a framework that allows us to represent a variety of metric problems, such as computing a separating decomposition of a metric space, a padded decomposition (both of which are main ingredients in embedding metric spaces into dominating trees), sparse cover, metric triangulation and so on. Here, I’ll define the Consistent Labeling Problem, formulate it as an Integer Program and show how we can get an approximation algorithm for it by rounding its linear relaxation.

First, consider a base set {A}. We want to assign labels from {L} to the elements of {A}, respecting some constraints. First, each element {a \in A} has associated with it a subset {L_a \subseteq L} of labels it can be assigned, and each element can receive at most {k} labels. Second, there is a collection {\mathcal{C}} of subsets of {A}, and for each {S \in \mathcal{C}} we would like some label {i \in L} to be assigned to all elements of {S}. Each set {S \in \mathcal{C}} has a weight {w_S}, which we can think of as the penalty for violating this constraint.

Our goal is to find a probability distribution over labelings that maximizes the total expected weight of consistently labeled sets, {\sum_{S \in \mathcal{C}} w_S Pr[S \text{ is consistently labeled}]} (equivalently, one that keeps the total penalty of violated sets small). Let’s formulate this as an integer program. For that, the first thing we need are decision variables: let {x_{ia}} be a {\{0,1\}}-variable indicating whether label {i \in L} is assigned to element {a \in A}, let {y_{iS}} indicate that label {i} is assigned to all elements of {S}, and let {z_S} indicate that set {S} is consistently labeled. The formulation can therefore be expressed as:

\displaystyle  \left. \begin{aligned} & \max \sum_{S \in \mathcal{C}} w_S z_S \text{ s.t. } \\ & \qquad \left\lbrace \begin{aligned} & \sum_{i \in L} x_{ia} \leq k & \forall a \in A \\ & y_{iS} \leq x_{ia} & \forall S \in \mathcal{C}, a \in S, i \in L\\ & z_S \leq \sum_{i \in L} y_{iS} & \forall S \in \mathcal{C} \\ & z_S \leq 1 & \forall S \in \mathcal{C} \\ & x_{ia} = 0 & \forall i \notin L_a \\ \end{aligned} \right. \end{aligned} \right. \ \ \ \ \ (1)

It is not hard to see that if {x,y,z} are {\{0,1\}}-variables then the formulation corresponds to the original problem. Now, let’s relax it to a Linear Program and interpret the fractional values as probabilities. The rounding procedure we use is a generalization of the one in “Approximation algorithms for classification problems” by Kleinberg and Tardos: until every object has {k} labels, repeat the following: pick {i \sim \text{Uniform}(L)} and {t \sim \text{Uniform}([0,1])} and assign label {i} to all objects with {x_{ia} > t}. In the end, keep the first {k} labels assigned to each object.
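A sketch of this rounding in code, assuming the LP has already been solved and x[i][a] holds the fractional value for label i on element a (the names are mine; for the loop to terminate, each element needs enough labels with positive fractional value):

```python
# Sketch of the randomized rounding for consistent labeling.
import random

def round_labeling(A, L, x, k):
    """x[i][a]: fractional LP value of assigning label i to element a."""
    labels = {a: [] for a in A}          # labels in order of assignment
    while any(len(ls) < k for ls in labels.values()):
        i = random.choice(L)             # i ~ Uniform(L)
        t = random.random()              # t ~ Uniform([0, 1])
        for a in A:
            if x[i][a] > t and i not in labels[a]:
                labels[a].append(i)
    return {a: ls[:k] for a, ls in labels.items()}  # first k labels
```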

What we just described is a procedure that, given a solution to the relaxed LP, produces a randomized labeling of the objects. Now we need to prove that this randomized labeling is good in expectation, that is, that {\sum_{S \in \mathcal{C}} w_S Pr[S \text{ is consistently labeled}]} is large compared to the optimal single deterministic assignment. The authors prove that it is within a factor of {2 f_{max}}, where {f_{max} = \max_{S \in \mathcal{C}} \vert S \vert}.

Theorem 1 For each {S \in \mathcal{C}}, the probability that {S} is consistently labeled is lower bounded by {1 - \left( 1 - \frac{z_S}{k \vert S \vert} \right)^k}.

Proof: Since we are trying to lower bound the probability that {S} is consistently labeled, we may consider just the probability that all the elements of {S} receive a common label in a single iteration. Let’s estimate this probability for an iteration {j}:

\displaystyle q = \sum_{i \in L} \frac{1}{\vert L \vert} Pr[S \text{ is all labeled with } i \text{ in iteration } j]

If {i \in L} is chosen in iteration {j}, all elements are labeled with {i} if {t < x_{ia}} for all {a \in S}, so the probability is {\min_{a \in S} x_{ia} \geq y_{iS}}. So, we have:

\displaystyle q \geq \sum_{i \in L} \frac{1}{\vert L \vert} y_{iS} \geq \frac{z_S}{\vert L \vert}

Now, let {p} be the probability that set {S} is hit by the labeling in phase {j}. If label {i \in L} is chosen, the set {S} is hit by the labeling if {t < \max_{a \in S} x_{ia}}, therefore:

\displaystyle p = \sum_{i \in L} \frac{1}{\vert L \vert} \max_{a \in S} x_{ia} \leq \sum_{i \in L} \frac{1}{\vert L \vert} \sum_{a \in S} x_{ia}

inverting the order of the summation, we get:

\displaystyle p \leq \frac{1}{\vert L \vert} \sum_{a \in S} \sum_{i \in L} x_{ia} \leq \frac{1}{\vert L \vert} \sum_{a \in S} k = \frac{k \vert S \vert}{\vert L \vert}

The probability that {S} is consistently labeled is at least the probability that it is consistently labeled in a single iteration before the set is hit {k} times. In one iteration, three things may happen: either the set is not hit, or it is hit but not consistently labeled, or it is consistently labeled. The figure below tracks how many times the set is hit; the down arrows represent the event that the set {S} is consistently labeled:

[Figure: a chain on the number of times S has been hit; down arrows mark the event that S is consistently labeled]

A standard trick in these cases is to disregard the self-loops and normalize the probabilities. This way, the probability that {S} is consistently labeled is:

\displaystyle \frac{q}{p} \sum_{j=0}^{k-1} \left( \frac{p-q}{p} \right)^j = \frac{q}{p} \cdot \frac{ 1 - \left( \frac{p-q}{p} \right)^k }{1 - \frac{p-q}{p} } = 1 - \left( 1 - \frac{q}{p} \right)^k

Now we just use {p \leq \frac{k \vert S \vert}{\vert L \vert}} and {q \geq \frac{z_S}{\vert L \vert}} (the expression above is increasing in {q/p}) to obtain the desired result. \Box

The approximation factor of {2 f_{max}} follows straight from the previous theorem by considering the following inequalities:

\displaystyle \left( 1 - \frac{a}{k}\right)^k \leq e^{-a} \leq 1 - \frac{a}{2}

applied with {a = z_S / \vert S \vert}, which give {Pr[S \text{ is consistently labeled}] \geq 1 - e^{-z_S/\vert S \vert} \geq \frac{z_S}{2 \vert S \vert} \geq \frac{z_S}{2 f_{max}}}, and noting that the LP optimum {\sum_{S \in \mathcal{C}} w_S z_S} is an upper bound on the value of the best deterministic labeling:

Theorem 2 The rounding procedure gives a randomized algorithm that, in expectation, achieves a {2 f_{max}} approximation to the optimal consistent labeling.