Branch-and-cut

Introduction

In this article we describe the basic concepts of a branch-and-cut algorithm as it can be used for determining optimum solutions of max-cut problems and exact ground states of hard Ising spin glass instances, cf. exact ground states of Ising spin glasses. Branch-and-cut methods are often successful for determining exact solutions of hard combinatorial optimization problems, such as the travelling salesman problem or the linear ordering problem. For each instance (sample), the method maintains an upper bound (ub) and a lower bound (lb) on the optimum solution value. Iteratively, upper and lower bounds are improved until they coincide at the optimum solution value or are tight enough for proving optimality, see Figure Bounds. Branch-and-cut is a specialized branch-and-bound method in which the bounds are determined through methods from linear programming and polyhedral theory.

Figure Bounds: iterative improvement of the upper bound (ub) and the lower bound (lb) until they coincide at the optimum solution value.

In the following, we consider maximization problems. However, everything can easily be translated to work for minimization problems, too. In a heuristic or approximate method, only the lower bound is available and therefore no quality guarantee for a solution is known. The availability of both an upper and a lower bound, as in branch-and-cut, marks the difference between an approximate and an exact solution method. Any feasible solution (not necessarily an optimum one) yields a lower bound on the optimum value. In the following, we explain the determination of upper bounds (ub) in more detail. They are determined via Relaxations.

Determination of bounds through relaxations

Here, we explain the algorithmic ideas specifically for the max-cut problem, which arises in the determination of ground states of Ising spin glasses. Given a graph G = (V, E) with edge weights c_e for e ∈ E, we seek a vertex set W ⊆ V maximizing the total weight of the cut δ(W), i.e., of the set of edges with exactly one endpoint in W. In formulas, we want to solve the problem

\max\limits_{W\ \subseteq\ V}\ \sum\limits_{e\ \in\ \delta(W)} c_e
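
To make the formula concrete, the following Python sketch enumerates all vertex sets W ⊆ V of a tiny weighted graph and evaluates the cut value. The graph, its weights, and the function name are illustrative assumptions, and exhaustive enumeration is, of course, only feasible for very small instances.

  from itertools import combinations

  def max_cut_brute_force(nodes, weights):
      """Enumerate all W ⊆ V and return a set W maximizing the weight of delta(W).

      Exponential in |V|; only meant to illustrate the formula on tiny instances.
      """
      best_W, best_val = set(), 0.0                 # W = {} gives the empty cut of value 0
      for k in range(len(nodes) + 1):
          for W in combinations(nodes, k):
              W = set(W)
              # weight of delta(W): edges with exactly one endpoint in W
              val = sum(c for (i, j), c in weights.items() if (i in W) != (j in W))
              if val > best_val:
                  best_W, best_val = W, val
      return best_W, best_val

  # illustrative instance: four vertices, weighted edges (numbers are made up)
  weights = {(1, 2): 1.0, (2, 3): 2.0, (1, 3): -1.0, (3, 4): 1.5}
  print(max_cut_brute_force([1, 2, 3, 4], weights))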

The cut polytope PC(G) is defined as the convex hull of the incidence vectors of all cuts of G; it is not hard to see that its vertices are exactly these incidence vectors. Instead of optimizing the objective only over all cuts, we can therefore optimize it over the whole cut polytope and still end up with an optimum cut, because a linear objective attains its maximum over a polytope at a vertex. The latter can be achieved, e.g., by solving the linear optimization problem with the simplex algorithm.
Unfortunately, we don't know how to input 'x ∈ PC(G)' efficiently into a computer program. However, by the theorems of Minkowski and Weyl, every polytope P can be written as the solution set of a finite inequality system Ax ≤ b. Therefore, also for the cut polytope we know that there exists a description

P_C(G) = \{x\ |\ Ax\ \le\ b\}.

However, for practical problems the number of inequalities needed to describe the cut polytope, i.e., the number of rows of A, is far too large to be generated explicitly. In fact, already for the complete graph on 7 nodes, the number of necessary inequalities (facets) is 116,764.
One might ask why the cut polytope is of interest at all, given that the number of its facets grows so fast. The reason is that within the branch-and-cut approach we can make use of partial linear descriptions (Relaxation) of the polytope.


Branch-and-cut algorithm

We informally summarize its basic concepts. We need a polytope P that contains PC(G) and is given by a partial description of the latter. We call such a polytope a relaxation.
Furthermore, the inequality system A_Px ≤ b_P describing P must be known and must be generatable 'fast', i.e., in polynomial time. By optimizing over the superset P instead of over PC(G) we obtain an upper bound (ub) on the value of the maximum cut. Optimizing over P amounts to solving a linear program (lp) of the form


\begin{array}{rl}
\max & c^Tx \\
\mbox{s.t.} & A_Px\ \le\ b_P \\
 & x\ \ge\ 0
\end{array}

Fast algorithms for solving linear programs exist, e.g., the simplex method.
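
To illustrate how such a linear program can be solved with an off-the-shelf LP solver, the following minimal sketch uses scipy.optimize.linprog on a small, made-up relaxation for the triangle graph with edge variables x_12, x_13, x_23; the rows of A_P are four triangle inequalities that are valid for the cut polytope, and the edge weights are illustrative assumptions. Since linprog minimizes, we maximize c^Tx by minimizing -c^Tx.

  import numpy as np
  from scipy.optimize import linprog

  # Rows of A_P: the triangle inequalities x_12 + x_13 + x_23 <= 2 and
  # x_ab - x_ac - x_bc <= 0 (all three choices of the 'positive' edge).
  A_P = np.array([
      [ 1,  1,  1],
      [ 1, -1, -1],
      [-1,  1, -1],
      [-1, -1,  1],
  ])
  b_P = np.array([2, 0, 0, 0])
  c = np.array([1.0, 2.0, 3.0])                    # made-up edge weights c_12, c_13, c_23

  # linprog minimizes, so we negate c; the bounds keep 0 <= x <= 1.
  res = linprog(-c, A_ub=A_P, b_ub=b_P, bounds=[(0, 1)] * 3, method="highs")
  x_star, upper_bound = res.x, -res.fun            # LP value = upper bound (ub) on the max cut
  print(x_star, upper_bound)
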
Within the branch-and-cut approach, we start with some relaxation P of PC(G). Iteratively, we generate tighter descriptions P of the cut polytope. A lower bound (lb) on the optimum value of the maximum cut can be obtained using any heuristic that generates a cut in the graph. In case upper and lower bounds coincide or the upper bound solution vector represents a cut, we can stop and return an optimum solution. However, it is possible that we can neither generate an optimum solution nor a better description P ⊇ PC(G). In this case we branch. In a branching step we select a variable xij for (i,j) ∈ E that is neither zero nor one in the upper bound solution vector and generate two subproblems, in one of which xij is set to 0 and in the other to 1. Through subsequent branching steps, a tree of subproblems (the branch-and-bound tree) emerges. We call a subproblem a node of the tree. The outline of the branch-and-cut framework is summarized in Algorithm 1; a schematic code sketch follows the outline.

Algorithm 1: branch-and-cut framework
  1. start with some P ⊇ PC(G)
  2. solve (ub) = c^Tx* = max{c^Tx | x ∈ P}
  3. (lb): value of a cut found heuristically
  4. if (ub) = (lb) or x* is a cut then
    • stop
  5. else
    • find a better description P of PC(G)
    • go back to 2
  6. if no better description can be found then
    • branch: select a variable xij with x*ij ∉ {0,1}
      generate two subproblems, in one of which xij is set to 0
      and in the other to 1
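
The following Python sketch mirrors Algorithm 1. It is schematic only: solve_lp, heuristic_cut, separate, is_integral and branch_edge are placeholder callbacks (assumptions made for this illustration) standing for the LP solver, the cut heuristic, the separation routines discussed below, the integrality test and the branching rule; the data structures are likewise not prescribed by this article.

  from dataclasses import dataclass, field

  @dataclass
  class Node:
      """A subproblem (node) of the branch-and-bound tree: edge variables fixed to 0 or 1."""
      fixed: dict = field(default_factory=dict)     # e.g. {(i, j): 0} or {(i, j): 1}

  def branch_and_cut(solve_lp, heuristic_cut, separate, is_integral, branch_edge):
      """Schematic branch-and-cut loop following Algorithm 1 (all callbacks are placeholders).

      solve_lp(node, cuts) -> (x_star, ub)   optimum of the current relaxation P
      heuristic_cut()      -> (x, lb)        any cut of the graph and its value
      separate(x_star)     -> list of violated valid inequalities ([] if none found)
      is_integral(x_star)  -> True if x_star is the incidence vector of a cut
      branch_edge(x_star)  -> a fractional edge (i, j) to branch on
      """
      best_x, best_lb = heuristic_cut()             # step 3: global lower bound (lb)
      open_nodes = [Node()]                         # root subproblem: nothing fixed yet
      while open_nodes:                             # depth-first traversal of the tree
          node = open_nodes.pop()
          cuts = []                                 # cutting planes added at this node
          while True:
              x_star, ub = solve_lp(node, cuts)     # step 2: optimize over P, giving (ub)
              if ub <= best_lb:                     # bound: node cannot beat the best cut
                  break
              if is_integral(x_star):               # step 4: x* is a cut, hence optimal here
                  best_x, best_lb = x_star, ub
                  break
              violated = separate(x_star)           # step 5: tighten the description of P
              if violated:
                  cuts.extend(violated)
                  continue                          # go back to step 2
              i, j = branch_edge(x_star)            # step 6: branch on a fractional variable
              for value in (0, 1):
                  open_nodes.append(Node({**node.fixed, (i, j): value}))
              break
      return best_x, best_lb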

Tightening relaxations through separation

In the following we explain how to determine partial descriptions of the cut polytope in step 5 of the branch-and-cut Algorithm 1. These partial descriptions are called linear relaxations, cf. Relaxation.
The basic idea in determining progressively tighter relaxations of PC(G) is as follows. We start with some 'easy' relaxation P (e.g., the unit hypercube [0,1]^{|E|}). Let x* be the optimum solution of the resulting relaxation (MCR), with value c^Tx*. We test whether x* satisfies inequalities that are known to be valid for PC(G). In case we can generate a valid inequality that is violated by x*, we add it to the description of P. By adding this inequality to P we 'cut off' x* from it; such inequalities are therefore also called cutting planes. The resulting polytope is a 'tighter' relaxation of PC(G) than P was and hence yields an improved upper bound. We also say that we separate x* from the cut polytope, cf. Valid inequalities for max-cut. The corresponding problem is called the separation problem. As we usually do not know how to solve the separation problem for valid inequalities of arbitrary kind, we study it for each class of inequalities separately. We formally define the separation problem in the next Definition.

Definition. (Separation problem)
Given a class of valid inequalities for PC(G) and a vector \scriptstyle x^*\ \in\ \mathbb{R}^m, either prove that x* satisfies all inequalities of this class, or return an inequality of the class that is violated by x*.
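
As an illustration with made-up numbers, consider a triangle on the vertices 1, 2, 3 and the fractional point x* = (x*_12, x*_13, x*_23) = (0.9, 0.8, 0.7). The triangle inequality x_12 + x_13 + x_23 ≤ 2 is valid for PC(G), since every cut contains either zero or two edges of a triangle, but it is violated by x*:

x^*_{12} + x^*_{13} + x^*_{23}\ =\ 0.9 + 0.8 + 0.7\ =\ 2.4\ >\ 2.

A separation algorithm for this class would therefore return this inequality.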

An algorithm that solves the separation problem is called an exact separation algorithm. Unfortunately, such an algorithm is often not known for a class of valid inequalities, or it can be shown that solving the separation problem for this class is NP-hard. In this case we have to resort to a heuristic separation algorithm. Heuristic separation algorithms may find violated inequalities, but they are not guaranteed to find one even if a violated inequality of the class exists. We explain some relevant classes of inequalities successfully used within branch-and-cut here: Valid inequalities for max-cut.
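
As a concrete example, the triangle inequalities x_12 + x_13 + x_23 ≤ 2 and x_ab - x_ac - x_bc ≤ 0 from the illustration above can be separated exactly by enumerating all vertex triples. The following sketch is again only an illustration; the dictionary-based data layout and the tolerance are assumptions, not part of this article, and all three edges of every triple are assumed to exist (complete graph).

  from itertools import combinations

  def separate_triangle_inequalities(x, nodes, tol=1e-6):
      """Exact separation of the triangle inequalities of the cut polytope by enumeration.

      x maps an edge (i, j) with i < j to its current LP value x*_ij; all edges of every
      vertex triple are assumed to be present. Every violated inequality is returned as a
      pair (coefficients, right-hand side).
      """
      def val(i, j):
          return x[(i, j)] if (i, j) in x else x[(j, i)]

      violated = []
      for i, j, k in combinations(sorted(nodes), 3):
          e = {(i, j): val(i, j), (i, k): val(i, k), (j, k): val(j, k)}
          # x_ij + x_ik + x_jk <= 2: a cut contains zero or two edges of a triangle
          if sum(e.values()) > 2 + tol:
              violated.append(({edge: 1 for edge in e}, 2))
          # x_ab - x_ac - x_bc <= 0 for each choice of the 'positive' edge
          for pos in e:
              lhs = e[pos] - sum(v for edge, v in e.items() if edge != pos)
              if lhs > tol:
                  violated.append(({edge: (1 if edge == pos else -1) for edge in e}, 0))
      return violated

  # the fractional point from the illustration above violates x_12 + x_13 + x_23 <= 2
  x_star = {(1, 2): 0.9, (1, 3): 0.8, (2, 3): 0.7}
  print(separate_triangle_inequalities(x_star, nodes=[1, 2, 3]))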
