Relaxation

From Cophy


Introduction

In general, a relaxation of a problem aims to make the problem (computationally) easier by weakening the constraints on the set of solutions. Consequently, the set of feasible solutions of a relaxation can only grow. In optimization this is often used to compute a bound on the optimum value when the optimum itself is hard to compute (cf. branch-and-bound, branch-and-cut).

Definitions

Let a polytope P (for instance, a polytope associated with a graph G) be contained in some polytope Q. Assume we need to optimize a linear function over P, i.e., to solve the optimization problem

\scriptstyle
\begin{matrix}
\max & c^Tx \\
x & \in\ P
\end{matrix}

If we optimize over Q instead of P, i.e.,

\scriptstyle
\begin{matrix}
\max & c^Tx \\
x & \in\ Q
\end{matrix}

then we call the latter problem a linear relaxation of the former. We sometimes also call Q itself a relaxation of P.

Let two problems

\scriptstyle (P)\ \min_x\{f(x)\ :\ x\ \in\ S\},\,\!
\scriptstyle (Q)\ \min_x\{g(x)\ :\ x\ \in\ R\}\,\!

be given. The problem (Q) is called a relaxation of problem (P) if the following conditions are satisfied:

\scriptstyle (a)\ S\ \subseteq\ R\,\!
\scriptstyle (b)\ g(x)\ \le\ f(x)\,\! for all elements x ∈ S.

Condition (a) says that the feasible set of the relaxation contains the feasible set S of the original problem, i.e., every solution feasible for the original optimization problem remains feasible for the relaxation. Condition (b) ensures that the optimum value of the relaxation is a lower bound on the optimum value of the original (minimization) problem.

In applications one often finds that the functions f and g are equal. Then (Q) is a relaxation of (P) iff S is a subset of R.
Therefore the notion is also used for sets: a set R is called a relaxation of a set S iff the inclusion S ⊆ R holds.
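For finite sets, conditions (a) and (b) can be checked directly. The following sketch is illustrative only; the function name `is_relaxation` and the small example sets are ours, not from the article:

```python
# Check the relaxation conditions (a) S ⊆ R and (b) g(x) <= f(x) on S
# for finite sets of points. Illustrative helper, not from the article.

def is_relaxation(S, R, f, g):
    """Return True iff (R, g) is a relaxation of (S, f) in the sense above."""
    return set(S) <= set(R) and all(g(x) <= f(x) for x in S)

S = {(0, 0), (1, 1), (2, 1), (3, 0)}   # a small feasible set for (P)
R = S | {(0, 1), (1, 2)}               # an enlarged feasible set for (Q)
f = lambda x: -6 * x[0] - 5 * x[1]
g = f                                  # here g equals f, as is common

print(is_relaxation(S, R, f, g))       # condition (a) and (b) both hold
```

With g = f, condition (b) holds with equality, so the check reduces to the set inclusion S ⊆ R.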

Often a relaxation of a given problem (P) is obtained by dropping some of the constraints that describe the set S, e.g., integrality constraints or other restrictions that are difficult to handle.

Example

Consider the following discrete problem (P):

[Figure Relaxation CoordSys.jpg: the feasible set S (integer points) and its relaxation R (red area) in the coordinate plane, with a level line of f and the direction of optimization]
\scriptstyle
\begin{matrix}
-6x_1 - 5x_2 & \rightarrow\ \min\\
-x_1 + x_2 & \le\ 1\\
3x_1 + 2x_2 & \le\ 9\\
x_1, x_2 & \ge\ 0\\
x_1, x_2 & \in\ \mathbb{Z}
\end{matrix}

For this problem we have \scriptstyle f(x_1,x_2) = -6x_1 - 5x_2 and

\scriptstyle S=\{(x_1,x_2)\ \in\ \mathbb{Z}^2\ :\ -x_1 + x_2\ \le\ 1,\ 3x_1 + 2x_2\ \le\ 9,\ x_1, x_2\ \ge\ 0\}.

To obtain a relaxation we drop the integrality constraints in the set S. Then we have a relaxation (Q) with g = f and

\scriptstyle R = \{(x_1, x_2)\ \in\ \mathbb{R}^2\ :\ -x_1 + x_2\ \le\ 1,\ 3x_1 + 2x_2\ \le\ 9,\ x_1, x_2\ \ge\ 0\}.

The feasible set S of problem (P) as well as its continuous relaxation R are shown in the figure above; S is given by the marked points and R by the red area. The additional line in the figure shows all points (x1,x2) with the function value f(x1,x2) = − 23. The arrow gives the direction of optimization.
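The optima of (P) and (Q) can be verified numerically. The sketch below (our own code, with illustrative variable names) finds the discrete optimum by enumerating the integer points of the bounded set S, and the relaxation's optimum by evaluating f at the vertices of R, since a bounded LP attains its optimum at a vertex:

```python
# Compare the discrete optimum f* over S with the LP optimum g* over R.
from itertools import combinations

f = lambda x1, x2: -6 * x1 - 5 * x2

def feasible(x1, x2, tol=1e-9):
    """Membership test for R (and, on integer points, for S)."""
    return (-x1 + x2 <= 1 + tol and 3 * x1 + 2 * x2 <= 9 + tol
            and x1 >= -tol and x2 >= -tol)

# f*: enumerate integer points (S is bounded, so a small box suffices).
f_star = min(f(x1, x2)
             for x1 in range(0, 4) for x2 in range(0, 5)
             if feasible(x1, x2))

# g*: intersect each pair of constraint boundaries a*x1 + b*x2 = c,
# keep the feasible intersection points -- these are the vertices of R.
lines = [(-1, 1, 1), (3, 2, 9), (1, 0, 0), (0, 1, 0)]
vertices = []
for (a1, b1, c1), (a2, b2, c2) in combinations(lines, 2):
    det = a1 * b2 - a2 * b1
    if det:
        x1 = (c1 * b2 - c2 * b1) / det
        x2 = (a1 * c2 - a2 * c1) / det
        if feasible(x1, x2):
            vertices.append((x1, x2))
g_star = min(f(x1, x2) for x1, x2 in vertices)

print(f_star, round(g_star, 4))
```

The enumeration confirms f* = −18 at (3,0) and g* = −20.4 at the vertex (1.4, 2.4).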


Theorem

The optimum function value of the relaxation (Q) is a lower bound for the optimum function value of problem (P). Indeed, since S ⊆ R and g(x) ≤ f(x) for all x ∈ S, it holds that \scriptstyle \min_{x\ \in\ R} g(x)\ \le\ \min_{x\ \in\ S} g(x)\ \le\ \min_{x\ \in\ S} f(x).
Furthermore, if the relaxation (Q) has an optimum solution \scriptstyle \tilde{x} with \scriptstyle \tilde{x}\ \in\ S and \scriptstyle f(\tilde{x}) = g(\tilde{x}), then \scriptstyle \tilde{x} is an optimum solution for problem (P), too.

We check the statements of the theorem on the example.
It holds that

\scriptstyle f^\star = \min\limits_{x\ \in\ S} f(x) = f(3,0) = -18\ \text{and}\ g^\star = \min\limits_{x\ \in\ R} g(x) = g(1.4,2.4) = -20.4.

Thus, the first statement \scriptstyle f^\star\ \ge\ g^\star is true. We cannot apply the second statement, since xR = (1.4,2.4) is the unique optimum solution of the relaxation (Q) and xR ∉ S because 1.4 and 2.4 are not integers.
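As a side remark not made explicitly in the article: simply rounding the relaxation's optimum solution need not recover the optimum of (P). Rounding (1.4, 2.4) down to (1, 2) stays feasible here, yet is not optimal. A quick check, with f as in the example:

```python
# Rounding the LP optimum (1.4, 2.4) down gives the feasible point (1, 2),
# but its value is worse than the true discrete optimum at (3, 0).
f = lambda x1, x2: -6 * x1 - 5 * x2

rounded = f(1, 2)   # value at the rounded point
optimum = f(3, 0)   # true optimum of (P)
print(rounded, optimum)
```

This is why integer programs are usually solved by combining relaxations with enumeration (e.g., branch-and-bound) rather than by rounding alone.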
