
Consistent Labeling

This week, Igor and I are presenting the paper “Metric clustering via consistent labeling” by Krauthgamer and Roughgarden in our Algorithms Reading Group. To prepare for the presentation, I thought that writing a blog post about it would be a nice idea. Consistent Labeling is a framework that allows us to represent a variety of metric problems, such as computing a separating decomposition of a metric space, a padded decomposition (both of which are the main ingredients of embedding metric spaces into dominating trees), sparse cover, metric triangulation and so on. Here, I’ll define the Consistent Labeling Problem, formulate it as an Integer Program and show how we can get an approximation algorithm for it by rounding its linear relaxation.

First, consider a base set {A}. We want to assign labels from a set {L} to the elements of {A}, respecting some constraints. First, each element {a \in A} has associated with it a subset {L_a \subseteq L} of labels it can be assigned, and each element can receive at most {k} labels. Second, there is a collection {\mathcal{C}} of subsets of {A}: for each {S \in \mathcal{C}}, there should be one label {i \in L} that is assigned to all elements of {S}. For each set {S \in \mathcal{C}}, there is a penalty {w_S} for violating this constraint.
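To make these definitions concrete, here is a toy instance written out as plain Python data (the instance is my own illustration, not one from the paper):

```python
# A toy consistent-labeling instance (illustrative only).
A = ["a", "b", "c"]                 # base set
L = [0, 1, 2]                       # label set
L_a = {"a": {0, 1},                 # allowed labels per element
       "b": {0, 2},
       "c": {0, 1, 2}}
k = 1                               # each element gets at most k labels
C = [({"a", "b"}, 1.0),             # (set S, penalty w_S): each S wants one
     ({"b", "c"}, 2.0)]             # common label on all of its elements
```

Here, assigning label 0 to every element satisfies both sets, since 0 is allowed everywhere.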

Our goal is to find a probability distribution over labelings that maximizes the total expected weight of consistently labeled sets, {\sum_{S \in \mathcal{C}} w_S Pr[S \text{ is consistently labeled}]} (equivalently, that minimizes the expected penalty of violated sets). Let’s formulate this as an integer program. For that, the first thing we need are decision variables: let {x_{ia}} be a {\{0,1\}}-variable indicating whether label {i \in L} is assigned to element {a \in A}. Let the variable {y_{iS}} indicate that label {i} is assigned to all elements of {S}, and let {z_S} indicate that set {S} is consistently labeled. An integer programming formulation can therefore be expressed as:

\displaystyle \begin{aligned} & \max \sum_{S \in \mathcal{C}} w_S z_S \text{ s.t. } \\ & \qquad \left\lbrace \begin{aligned} & \sum_{i \in L} x_{ia} \leq k & \forall a \in A \\ & y_{iS} \leq x_{ia} & \forall S \in \mathcal{C}, a \in S, i \in L\\ & z_S \leq \sum_{i \in L} y_{iS} & \forall S \in \mathcal{C} \\ & z_S \leq 1 & \forall S \in \mathcal{C} \\ & x_{ia} = 0 & \forall a \in A, i \notin L_a \\ \end{aligned} \right. \end{aligned} \ \ \ \ \ (1)
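As a sanity check, here is a minimal sketch of the relaxation of (1) on the toy instance above, using the PuLP modeling library (the [0, 1] variable bounds give the relaxation; the instance data and variable names are my own, not from the paper):

```python
import pulp  # pip install pulp

# Toy instance from the example above.
A = ["a", "b", "c"]
L = [0, 1, 2]
L_a = {"a": {0, 1}, "b": {0, 2}, "c": {0, 1, 2}}
C = [({"a", "b"}, 1.0), ({"b", "c"}, 2.0)]
k = 1

lp = pulp.LpProblem("consistent_labeling", pulp.LpMaximize)
x = {(i, a): pulp.LpVariable(f"x_{i}_{a}", 0, 1) for i in L for a in A}
y = {(i, s): pulp.LpVariable(f"y_{i}_{s}", 0, 1) for i in L for s in range(len(C))}
z = {s: pulp.LpVariable(f"z_{s}", 0, 1) for s in range(len(C))}  # z_S <= 1 via the bound

lp += pulp.lpSum(C[s][1] * z[s] for s in z)            # objective: max sum_S w_S z_S
for a in A:
    lp += pulp.lpSum(x[i, a] for i in L) <= k          # at most k labels per element
for s, (S, _) in enumerate(C):
    for i in L:
        for a in S:
            lp += y[i, s] <= x[i, a]                   # y_iS <= x_ia for every a in S
    lp += z[s] <= pulp.lpSum(y[i, s] for i in L)       # z_S <= sum_i y_iS
for a in A:
    for i in L:
        if i not in L_a[a]:
            lp += x[i, a] == 0                         # forbidden labels get no mass
lp.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.value(lp.objective))                        # 3.0 on this toy instance
```

On this toy instance the relaxation already attains the integral optimum (label everything 0).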

It is not hard to see that if {x,y,z} are {\{0,1\}}-variables then the formulation corresponds to the original problem. Now, let’s relax it to a Linear Program and interpret the fractional values as probabilities. The rounding procedure we use is a generalization of the one in “Approximation algorithms for classification problems” of Kleinberg and Tardos: until every object has {k} labels, repeat the following procedure: pick {i \sim \text{Uniform}(L)} and {t \sim \text{Uniform}([0,1])} and assign label {i} to all objects with {x_{ia} > t}. In the end, keep the first {k} labels assigned to each object.
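In code, the rounding might look like the following sketch (it assumes the fractional solution comes as a dict x[i, a], as in the LP sketch above; skipping a label an element already has is a small implementation choice of mine):

```python
import random

def round_labels(x, A, L, k):
    """Round a fractional solution x[i, a] in [0, 1] to an actual labeling.

    Repeatedly draw a uniform label i and a uniform threshold t, and give
    label i to every element a with x[i, a] > t, until every element has
    k labels.  Assumes each element has positive fractional mass on some
    label, otherwise the loop would never terminate.
    """
    labels = {a: [] for a in A}
    while any(len(ls) < k for ls in labels.values()):
        i = random.choice(L)
        t = random.random()
        for a in A:
            # Only the first k labels of each element are kept.
            if len(labels[a]) < k and i not in labels[a] and x[i, a] > t:
                labels[a].append(i)
    return labels
```

With the LP sketch above, one would call it as round_labels({key: v.value() for key, v in x.items()}, A, L, k).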

What we just described is a procedure that, given a solution to the relaxed LP, produces a randomized labeling of the objects. Now, we need to prove that this randomized labeling is good in expectation, that is, that the expected weight of consistently labeled sets, {\sum_{S \in \mathcal{C}} w_S Pr[S \text{ is consistently labeled}]}, compares well with that of the optimal single deterministic assignment. The authors prove that they are within a factor of {2 f_{max}}, where {f_{max} = \max_{S \in \mathcal{C}} \vert S \vert}.

Theorem 1 For each {S \in \mathcal{C}}, the probability that {S} is consistently labeled is lower bounded by {1 - \left( 1 - \frac{z_S}{k \vert S \vert} \right)^k}.

Proof: Since we are trying to lower bound the probability that {S} is consistently labeled, it suffices to consider the event that all elements of {S} receive the same label in a single iteration. Let’s estimate the probability {q} that this happens in a fixed iteration {j}:

\displaystyle q = \sum_{i \in L} \frac{1}{\vert L \vert} Pr[S \text{ is entirely labeled with } i \text{ in iteration } j]

If {i \in L} is chosen in iteration {j}, all elements of {S} are labeled with {i} exactly when {t < x_{ia}} for all {a \in S}, which happens with probability {\min_{a \in S} x_{ia}}, and this is at least {y_{iS}} by the LP constraints. So, we have:

\displaystyle q \geq \sum_{i \in L} \frac{1}{\vert L \vert} y_{iS} \geq \frac{z_S}{\vert L \vert}

Now, let {p} be the probability that set {S} is hit by the labeling in iteration {j}, that is, that at least one element of {S} receives the label chosen in that iteration. If label {i \in L} is chosen, the set {S} is hit exactly when {t < \max_{a \in S} x_{ia}}, therefore:

\displaystyle p = \sum_{i \in L} \frac{1}{\vert L \vert} \max_{a \in S} x_{ia} \leq \sum_{i \in L} \frac{1}{\vert L \vert} \sum_{a \in S} x_{ia}

Inverting the order of summation and using the LP constraint {\sum_{i \in L} x_{ia} \leq k}, we get:

\displaystyle p \leq \frac{1}{\vert L \vert} \sum_{a \in S} \sum_{i \in L} x_{ia} \leq \frac{1}{\vert L \vert} \sum_{a \in S} k = \frac{k \vert S \vert}{\vert L \vert}
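To illustrate the two per-iteration probabilities, here is a quick Monte Carlo check on toy fractional values of my own choosing (the exact expressions are {\sum_{i \in L} \min_{a \in S} x_{ia} / \vert L \vert} and {\sum_{i \in L} \max_{a \in S} x_{ia} / \vert L \vert}):

```python
import random

# Toy fractional values x[i, a] for a single set S = {"a", "b"}, |L| = 2.
x = {(0, "a"): 0.6, (0, "b"): 0.4, (1, "a"): 0.3, (1, "b"): 0.7}
L, S = [0, 1], ["a", "b"]

q_exact = sum(min(x[i, a] for a in S) for i in L) / len(L)  # all of S gets label i
p_exact = sum(max(x[i, a] for a in S) for i in L) / len(L)  # S is hit at all

trials, q_hits, p_hits = 200_000, 0, 0
for _ in range(trials):
    i, t = random.choice(L), random.random()
    q_hits += all(x[i, a] > t for a in S)
    p_hits += any(x[i, a] > t for a in S)
print(q_exact, q_hits / trials)  # each pair should match closely
print(p_exact, p_hits / trials)
```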

The probability that {S} is consistently labeled is at least the probability that all of {S} receives a common label in a single iteration before the set is hit {k} times (after {S} is hit {k} times, some element of {S} may already have all of its {k} labels). In one iteration three things may happen: either the set is not hit, or it is hit but not consistently labeled, or it is consistently labeled. The figure below depicts a Markov chain counting how many times the set has been hit. The down arrows represent the event that the set {S} is consistently labeled:

[Figure: Markov chain on the number of times the set {S} has been hit.]

A standard trick in these cases is to disregard the self-loops and renormalize the remaining probabilities. This way, the probability that {S} is consistently labeled is at least:

\displaystyle \frac{q}{p} \sum_{j=0}^{k-1} \left( \frac{p-q}{p} \right)^j = \frac{q}{p} \cdot \frac{ 1 - \left( \frac{p-q}{p} \right)^k }{1 - \frac{p-q}{p} } = 1 - \left( 1 - \frac{q}{p} \right)^k
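A quick numeric sanity check of this closed form, with arbitrary values of my own choosing for {p}, {q} and {k}:

```python
# Check the geometric-series identity above for sample values.
p, q, k = 0.3, 0.1, 4
lhs = sum((q / p) * ((p - q) / p) ** j for j in range(k))
rhs = 1 - (1 - q / p) ** k
assert abs(lhs - rhs) < 1e-12  # 65/81 on both sides
```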

Now, since {1 - (1 - q/p)^k} is increasing in {q/p}, we just use {p \leq \frac{k \vert S \vert}{\vert L \vert}} and {q \geq \frac{z_S}{\vert L \vert}}, which give {\frac{q}{p} \geq \frac{z_S}{k \vert S \vert}}, to obtain the desired result. \Box

The approximation factor of {2 f_{max}} follows directly from the previous theorem by considering the following inequalities, valid for {0 \leq a \leq 1}:

\displaystyle \left( 1 - \frac{a}{k}\right)^k \leq e^{-a} \leq 1 - \frac{a}{2}
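Both inequalities are easy to verify numerically over the range we need here, {0 \leq a \leq 1} (the first holds for any {k \geq 1}); a brute-force sketch:

```python
import math

# Check (1 - a/k)^k <= exp(-a) <= 1 - a/2 for a in [0, 1].
for a in [i / 1000 for i in range(1001)]:
    assert math.exp(-a) <= 1 - a / 2 + 1e-12
    for k in range(1, 50):
        assert (1 - a / k) ** k <= math.exp(-a) + 1e-12
```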

Applying them with {a = z_S / \vert S \vert} gives {Pr[S \text{ is consistently labeled}] \geq \frac{z_S}{2 \vert S \vert} \geq \frac{z_S}{2 f_{max}}}; noting that the optimal value of the LP relaxation is an upper bound on the value of the optimal labeling, we get:

Theorem 2 The rounding procedure gives a randomized algorithm that, in expectation, achieves a {2 f_{max}} approximation to the optimal consistent labeling.
