Dantzig-Wolfe Decomposition
Problem Formulation
The Dantzig-Wolfe decomposition is designed to deal with large-scale LPs with special structure. Consider the following LP:
\[ \begin{align} (LP) \quad \min_{\mathbf{x}} \quad& \mathbf{c}^T \mathbf{x} \\ \text{s.t.} \quad& \mathbf{D} \mathbf{x} = \mathbf{b}_0 \\ & \mathbf{F} \mathbf{x} = \mathbf{b} \\ & \mathbf{x} \geq 0, \end{align} \]
where \(\mathbf{D} \in \mathbb{R}^{m_1 \times n}\) and \(\mathbf{F} \in \mathbb{R}^{m_2 \times n}\). In general, the first set of equality constraints, \(\mathbf{D} \mathbf{x} = \mathbf{b}_0\), is called the coupling constraints; the second set of equality constraints is independent.
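To make the setup concrete, here is a minimal sketch with small made-up data (the matrices and vectors below are illustrative, not from the notes): the coupling rows \(\mathbf{D}\) sit on top of the independent rows \(\mathbf{F}\), and for small instances the full LP can be solved directly, which is the baseline that Dantzig-Wolfe avoids at scale.
```python
# A minimal sketch of the block structure with made-up data; for small
# instances scipy's linprog (HiGHS) can solve the full LP directly.
import numpy as np
from scipy.optimize import linprog

c  = np.array([1.0, 2.0, 1.0, 3.0])
D  = np.array([[1.0, 1.0, 1.0, 1.0]])   # coupling constraints D x = b0
b0 = np.array([4.0])
F  = np.array([[1.0, -1.0, 0.0, 0.0],   # independent constraints F x = b
               [0.0, 0.0, 1.0, -1.0]])
b  = np.array([0.0, 0.0])

res = linprog(c, A_eq=np.vstack([D, F]), b_eq=np.concatenate([b0, b]),
              bounds=(0, None), method="highs")
print(res.x, res.fun)   # direct solve, for comparison with the decomposition
```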
Constraint Separation
Suppose that the first set \(\mathbf{D} \mathbf{x} = \mathbf{b}_0\) is harder to deal with. Define a polyhedron \(P\):
\[ P = \left\{ \mathbf{x} \in \mathbb{R}^n \ : \ \mathbf{F} \mathbf{x} = \mathbf{b}, \ \mathbf{x} \geq 0 \right\}. \]
Then the original LP can be written as:
\[ \begin{align} (LP_2) \quad \min_{\mathbf{x}} \quad& \mathbf{c}^T \mathbf{x} \\ \text{s.t.} \quad& \mathbf{D} \mathbf{x} = \mathbf{b}_0 \\ & \mathbf{x} \in P, \end{align} \]
where we have singled out the hard constraint \(\mathbf{D} \mathbf{x} = \mathbf{b}_0\). \(P\) is assumed to be a bounded polyhedron, i.e., a polytope, and therefore has a finite number of extreme points.
Extreme Points Representation
Let \(\mathbf{x}^1, \mathbf{x}^2, \ldots, \mathbf{x}^N\) denote all the extreme points of \(P\), where \(\mathbf{x}^i \in \mathbb{R}^n\) and \(N\) is finite but typically very large. Any point in a polytope (i.e., bounded polyhedron) can be represented as a convex combination of its extreme points:
\[ \mathbf{x} = \sum_{i=1}^{N} \lambda_i \mathbf{x}^i, \quad \sum_{i=1}^{N} \lambda_i = 1, \quad \lambda_i \geq 0, \ \forall i = 1, \ldots, N, \]
where the extreme points \(\mathbf{x}^i\) are fixed for the polyhedron \(P\). Then, we can write \((LP_2)\) in terms of the weights \(\lambda_i\):
\[ \begin{align} (MP) \quad \min_{\lambda_1, \ldots, \lambda_N} \quad& \sum_{i=1}^{N} \lambda_i \left( \mathbf{c}^T \mathbf{x}^i \right) \\ \text{s.t.} \quad& \sum_{i=1}^{N} \lambda_i \left( \mathbf{D} \mathbf{x}^i \right) = \mathbf{b}_0 \\ & \sum_{i=1}^{N} \lambda_i = 1 \\ & \lambda_i \geq 0, \ \forall i = 1, \ldots, N. \end{align} \]
The master problem, \((MP)\), is equivalent to the original problem \((LP)\): if \(\hat{\lambda}_1, \ldots, \hat{\lambda}_N\) is optimal for \((MP)\), then \(\hat{\mathbf{x}} = \sum_{i=1}^{N} \hat{\lambda}_i \mathbf{x}^i\) is optimal for \((LP)\).
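As a quick numerical illustration of this representation (a sketch with assumed data: the unit square and an arbitrary point inside it), the weights \(\lambda_i\) for a given point can be found by solving a small feasibility LP:
```python
# Express a point of the unit square as a convex combination of its
# extreme points: find lambda >= 0 with sum(lambda) = 1 and sum_i lambda_i x^i = x.
import numpy as np
from scipy.optimize import linprog

V = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])  # extreme points x^i
x = np.array([0.3, 0.7])                                        # a point in the square
A_eq = np.vstack([V.T, np.ones(len(V))])   # rows: sum_i lambda_i x^i = x, sum_i lambda_i = 1
b_eq = np.append(x, 1.0)
res = linprog(np.zeros(len(V)), A_eq=A_eq, b_eq=b_eq, bounds=(0, None), method="highs")
print(res.x, V.T @ res.x)   # the weights and the reconstructed point
```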
Column Generation
\((MP)\) has one variable \(\lambda_i\) per extreme point, so the variables \(\lambda_1, \lambda_2, \ldots, \lambda_N\) can be extremely numerous, while there are only \((m_1 + 1)\) equality constraints. In other words, \((MP)\) does not have many constraints but has many variables, i.e., columns. We can rewrite \((MP)\) in matrix form as:
\[ \begin{align} \min_{\boldsymbol{\lambda}} \quad& \left[ \begin{array}{cccc} \mathbf{c}^T \mathbf{x}^1 & \mathbf{c}^T \mathbf{x}^2 & \ldots & \mathbf{c}^T \mathbf{x}^N \end{array} \right] \boldsymbol{\lambda} \\ \text{s.t.} \quad& \left[ \begin{array}{cccc} \mathbf{D} \mathbf{x}^1 & \mathbf{D} \mathbf{x}^2 & \ldots & \mathbf{D} \mathbf{x}^N \\ 1 & 1 & \ldots & 1 \end{array} \right] \boldsymbol{\lambda} = \left[ \begin{array}{c} \mathbf{b}_0 \\ 1 \end{array} \right] \\ & \boldsymbol{\lambda} \geq 0. \end{align} \]
We apply the column generation algorithm to solve \((MP)\):
- Choose a subset \(I\) of columns and variables. The restricted master problem is:
\[ \begin{align} (RMP) \quad \min_{\lambda_i, \ \forall i \in I} \quad& \sum_{i \in I} \lambda_i \left( \mathbf{c}^T \mathbf{x}^i \right) \\ \text{s.t.} \quad& \sum_{i \in I} \lambda_i \left( \mathbf{D} \mathbf{x}^i \right) = \mathbf{b}_0 \quad \leftarrow \hat{\mathbf{y}} \\ & \sum_{i \in I} \lambda_i = 1 \quad \leftarrow \hat{r} \\ &\lambda_i \geq 0, \ \forall i \in I. \end{align} \]Obtain an optimal solution \(\hat{\boldsymbol{\lambda}}\) of \((RMP)\), and an optimal dual solution \(\hat{\mathbf{y}}\) and \(\hat{r}\).
- Check the optimality of \(\hat{\boldsymbol{\lambda}}\) using the dual information. Find the minimum reduced cost:
\[ Z = \min_{i = 1, \ldots, N} \left\{ \left( \mathbf{c}^T \mathbf{x}^i \right) - \hat{\mathbf{y}}^T \left( \mathbf{D} \mathbf{x}^i \right) - \hat{r} \right\}. \]If \(Z \geq 0\), then \(\hat{\boldsymbol{\lambda}}\) is optimal for \((MP)\) and terminate. Otherwise, add the column with the minimum reduced cost to \((RMP)\), and continue.
Enumerating all the extreme points in the pricing step above is computationally expensive. Moreover, we do not know all the extreme points in advance; we need to be able to generate them on the fly. The key idea of Dantzig-Wolfe is that minimizing the reduced cost over all the extreme points of \(P\) is equivalent to minimizing it over the polytope \(P\) directly, since an LP over a polytope attains its optimum at an extreme point.
Note that the dual variable \(\hat{r}\) is fixed, so it can be pulled outside of the minimization problem. The column generation algorithm can then be rewritten as follows (a runnable sketch appears after the two steps):
- Choose a subset \(I\) of columns and variables. Solve the \((RMP)\):
\[ \begin{align} (RMP) \quad \min_{\lambda_i, \ \forall i \in I} \quad& \sum_{i \in I} \lambda_i \left( \mathbf{c}^T \mathbf{x}^i \right) \\ \text{s.t.} \quad& \sum_{i \in I} \lambda_i \left( \mathbf{D} \mathbf{x}^i \right) = \mathbf{b}_0 \quad \leftarrow \hat{\mathbf{y}} \\ &\sum_{i \in I} \lambda_i = 1 \quad \leftarrow \hat{r} \\ &\lambda_i \geq 0, \forall i \in I. \end{align} \]Obtain the optimal solution \(\hat{\boldsymbol{\lambda}}\) of \((RMP)\) and the optimal dual solution \(\hat{\mathbf{y}}\) and \(\hat{r}\).
- To check the optimality of \(\hat{\boldsymbol{\lambda}}\), solve the following subproblem \((SP)\):
\[ \begin{align} (SP) \quad w = \min \quad& \left( \mathbf{c}^T - \hat{\mathbf{y}}^T \mathbf{D} \right) \mathbf{x} \\ \text{s.t.} \quad& \mathbf{F} \mathbf{x} = \mathbf{b}, \ \mathbf{x} \geq 0. \end{align} \]If \(w \geq \hat{r}\), then \(\hat{\boldsymbol{\lambda}}\) is optimal for \((MP)\); terminate. Otherwise, the optimal BFS of the above problem is an extreme point of the polyhedron \(P\). Denote it by \(\mathbf{x}^i\). Add the column \(\left[ \begin{array}{c} \mathbf{D} \mathbf{x}^i \\ 1 \end{array} \right]\) to the constraint matrix in \((RMP)\), and add \(\mathbf{c}^T \mathbf{x}^i\) to the cost coefficients.
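The two-step loop is easy to prototype. Below is a minimal sketch (the function name and interface are my own, not from the notes): scipy.optimize.linprog with HiGHS solves the \((RMP)\) and exposes the duals \(\hat{\mathbf{y}}, \hat{r}\) via its marginals, the pricing subproblem over \(P\) is supplied as a callback so any representation of \(P\) works, and the initial columns are assumed to make the first \((RMP)\) feasible (otherwise a phase-I step would be needed).
```python
# A minimal sketch of Dantzig-Wolfe column generation (assumed interface;
# res.eqlin.marginals requires a reasonably recent scipy).
import numpy as np
from scipy.optimize import linprog

def dantzig_wolfe(c, D, b0, pricing, init_points, tol=1e-9, max_iter=100):
    """Column generation for min c@x s.t. D@x = b0, x in P (P bounded).

    pricing(c_mod) must return (w, x): the minimum of c_mod @ x over P and
    a minimizing extreme point. init_points must make the first RMP feasible.
    """
    points = [np.asarray(x, dtype=float) for x in init_points]
    for _ in range(max_iter):
        costs = np.array([c @ x for x in points])                 # c^T x^i
        A_eq = np.vstack([np.column_stack([D @ x for x in points]),
                          np.ones(len(points))])                  # columns [D x^i; 1]
        b_eq = np.concatenate([b0, [1.0]])
        rmp = linprog(costs, A_eq=A_eq, b_eq=b_eq,
                      bounds=(0, None), method="highs")
        y, r = rmp.eqlin.marginals[:-1], rmp.eqlin.marginals[-1]  # duals (y_hat, r_hat)
        w, x_new = pricing(c - D.T @ y)                           # subproblem over P
        if w >= r - tol:               # no column with negative reduced cost
            return sum(l * x for l, x in zip(rmp.x, points)), rmp.fun
        points.append(np.asarray(x_new, dtype=float))             # add the new column
    raise RuntimeError("column generation did not converge")
```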
Note that the dual variables \(\hat{\mathbf{y}}\) and \(\hat{r}\) can be obtained by computing \(\mathbf{c}^T_B \mathbf{B}^{-1}\) as in the simplex method. In particular, \(\hat{\mathbf{y}}\) is the first \(m_1\) components of \(\mathbf{c}^T_B \mathbf{B}^{-1}\), and \(\hat{r}\) is the last component. Here, \(\mathbf{c}_B\) is the vector of cost coefficients of the basic \(\lambda_i\)'s, and \(\mathbf{B}\) is the basis matrix drawn from the columns of \(\left[ \begin{array}{cccc}\mathbf{D} \mathbf{x}^1 & \mathbf{D} \mathbf{x}^2 & \ldots & \mathbf{D} \mathbf{x}^N \\ 1 & 1 & \ldots & 1 \end{array} \right]\).
Numerical Example
Consider the following LP:
\[ \begin{align} \min_{\mathbf{x}} \quad& -4 x_1 - x_2 - 6 x_3 \\ \text{s.t.} \quad& 3 x_1 + 2 x_2 + 4 x_3 = 17 \\ & 1 \leq x_i \leq 2, \quad i = 1, 2, 3. \end{align} \]
Divide the constraints into two groups: The first group consists of the constraint \(\mathbf{D} \mathbf{x} = \mathbf{b}_0\) where \(\mathbf{D} = \left[\begin{array}{ccc} 3 & 2 & 4 \end{array} \right]\) and \(\mathbf{b}_0 = 17\). The second group is the constraint \(\mathbf{x} \in P\), where the polyhedron \(P = \left\{ \mathbf{x} \in \mathbb{R}^3 \ : \ 1 \leq x_i \leq 2, \ \forall i = 1, 2, 3 \right\}\). \(P\) is a 3-D cube, which has eight extreme points \(\mathbf{x}^1, \ldots, \mathbf{x}^8\); it is bounded and has no extreme rays.
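For this instance the extreme points can simply be enumerated (a quick sketch; the objective vector is the one from the LP above):
```python
# Enumerate the 8 vertices of the box P = [1, 2]^3 with their column data.
import numpy as np
from itertools import product

c = np.array([-4.0, -1.0, -6.0])
D = np.array([3.0, 2.0, 4.0])
for v in product([1.0, 2.0], repeat=3):
    x = np.array(v)
    print(x, c @ x, D @ x)   # extreme point x^i, cost c^T x^i, column entry D x^i
```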
The master problem \((MP)\) in terms of the \(\boldsymbol{\lambda}\) variables can be written as:
\[ \begin{align} (MP) \quad \min_{\lambda_1, \ldots, \lambda_8} \quad& \sum_{i=1}^{8} \lambda_i \left( \mathbf{c}^T \mathbf{x}^i \right) \\ \text{s.t.} \quad& \sum_{i=1}^{8} \lambda_i \left( \mathbf{D} \mathbf{x}^i \right) = 17 \\ & \sum_{i=1}^{8} \lambda_i = 1 \\ & \lambda_i \geq 0, \ \forall i = 1, \ldots, 8. \end{align} \]
Note that \((MP)\) has two equality constraints. Pick two columns, or equivalently choose two extreme points of \(P\), to start the column generation algorithm. Pick \(\mathbf{x}^1 = (2, 2, 2)\) and \(\mathbf{x}^2 = (1, 1, 2)\), and the corresponding convex weights \(\lambda_1, \lambda_2\). Note that the specific indices do not matter: we can always order the extreme points such that the first extreme point is \((2, 2, 2)\) and the second is \((1, 1, 2)\). We have:
\[ \mathbf{c}^T \mathbf{x}^1 = -22, \quad \mathbf{D} \mathbf{x}^1 = 18, \quad \mathbf{c}^T \mathbf{x}^2 = -17, \quad \mathbf{D} \mathbf{x}^2 = 13. \]
The \((RMP)\) can be written explicitly as:
\[ \begin{align} (RMP) \quad \min_{\lambda_1, \lambda_2} \quad& -22 \lambda_1 - 17 \lambda_2 \\ \text{s.t.} \quad& 18 \lambda_1 + 13 \lambda_2 = 17 \\ & \lambda_1 + \lambda_2 = 1 \\ & \lambda_1, \lambda_2 \geq 0. \end{align} \]
The basis matrix and its inverse are:
\[ \mathbf{B} = \left[ \begin{array}{cc} 18 & 13 \\ 1 & 1 \end{array} \right], \quad \mathbf{B}^{-1} = \frac{1}{5} \left[ \begin{array}{cc} 1 & -13 \\ -1 & 18 \end{array} \right]. \]
The optimal solution is:
\[ \hat{\lambda}_1 = \frac{4}{5}, \quad \hat{\lambda}_2 = \frac{1}{5}. \]
Form the dual variables:
\[ \left[ \begin{array}{cc} \hat{\mathbf{y}} & \hat{r} \end{array} \right] = \mathbf{c}^T_B \mathbf{B}^{-1} = \left[ \begin{array}{cc} -22 & -17 \end{array} \right] \cdot \frac{1}{5} \left[ \begin{array}{cc} 1 & -13 \\ -1 & 18 \end{array} \right] = \left[ \begin{array}{cc} -1 & -4 \end{array} \right]. \]
Note that the optimal dual variable \(\hat{\mathbf{y}}\) corresponds to the hard constraints, \(18 \lambda_1 + 13 \lambda_2 = 17\), and the optimal dual variable \(\hat{r}\) corresponds to \(\lambda_1 + \lambda_2 = 1\).
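A two-line check of the dual computation (numbers from the iteration above):
```python
# Verify [y_hat, r_hat] = c_B^T B^{-1} for the first (RMP).
import numpy as np

B   = np.array([[18.0, 13.0], [1.0, 1.0]])   # basic columns [D x^1; 1], [D x^2; 1]
c_B = np.array([-22.0, -17.0])               # c^T x^1, c^T x^2
y_hat, r_hat = c_B @ np.linalg.inv(B)
print(y_hat, r_hat)                          # -> -1.0 -4.0
```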
To compute the minimum reduced cost, we form the pricing problem:
\[ \hat{Z} = \min_{\mathbf{x} \in P} \ \left( \mathbf{c}^T - \hat{\mathbf{y}}^T \mathbf{D} \right) \mathbf{x} - \hat{r}, \]
which is:
\[ \begin{align} \hat{Z} = \min_{\mathbf{x}} \quad& -x_1 + x_2 - 2 x_3 + 4 \\ \text{s.t.} \quad& 1 \leq x_i \leq 2, \quad i = 1, 2, 3. \end{align} \]
The optimal solution is \(\mathbf{x}^3 = (2, 1, 2)\), which is another extreme point of the cube \(P\). The optimal cost is \(\hat{Z} = -1\); therefore, the reduced cost of \(\lambda_3\) is \(-1\). This variable enters the restricted master problem. The associated column is \(\left[\begin{array}{c} \mathbf{D} \mathbf{x}^3 \\ 1 \end{array}\right] = \left[\begin{array}{c} 16 \\ 1 \end{array} \right]\), and the associated cost coefficient is \(\mathbf{c}^T \mathbf{x}^3 = -21\).
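Since the pricing problem here is a box-constrained LP, it can be checked coordinate-wise (a small sketch using the numbers above):
```python
# Check the pricing step: over the box [1, 2]^3, each coordinate of x is
# set independently to whichever bound minimizes its modified cost.
import numpy as np

c, D = np.array([-4.0, -1.0, -6.0]), np.array([3.0, 2.0, 4.0])
y_hat, r_hat = -1.0, -4.0
c_mod = c - y_hat * D                    # modified cost (-1, 1, -2)
x_new = np.where(c_mod < 0, 2.0, 1.0)    # coordinate-wise minimizer -> (2, 1, 2)
Z_hat = c_mod @ x_new - r_hat            # (-2 + 1 - 4) + 4 = -1
print(x_new, Z_hat)
```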
The new \((RMP)\) is:
\[ \begin{align} (RMP) \quad \min_{\lambda_1, \lambda_2, \lambda_3} \quad& -22 \lambda_1 - 17 \lambda_2 - 21 \lambda_3 \\ \text{s.t.} \quad& 18 \lambda_1 + 13 \lambda_2 + 16 \lambda_3 = 17 \\ & \lambda_1 + \lambda_2 + \lambda_3 = 1 \\ & \lambda_1, \lambda_2, \lambda_3 \geq 0. \end{align} \]
The optimal solution is \(\hat{\lambda}_1 = 0.5\), \(\hat{\lambda}_2 = 0\), \(\hat{\lambda}_3 = 0.5\). The optimal basis matrix and its inverse are:
\[ \mathbf{B} = \left[ \begin{array}{cc} 18 & 16 \\ 1 & 1 \end{array} \right], \quad \mathbf{B}^{-1} = \frac{1}{2} \left[ \begin{array}{cc} 1 & -16 \\ -1 & 18 \end{array} \right]. \]
The cost coefficients and the dual variables are:
\[ \mathbf{c}^T_B = \left[ \begin{array}{cc} -22 & -21 \end{array} \right], \quad \left[ \begin{array}{cc} \hat{\mathbf{y}} & \hat{r} \end{array} \right] = \mathbf{c}^T_B \mathbf{B}^{-1} = \left[ \begin{array}{cc} -0.5 & -13 \end{array} \right]. \]
The pricing problem is:
\[ \begin{align} \hat{Z} = \min_{\mathbf{x}} \quad& -2.5 x_1 + 0 \cdot x_2 - 4 x_3 + 13 \\ \text{s.t.} \quad& 1 \leq x_i \leq 2, \quad i = 1, 2, 3. \end{align} \]
The optimal value is \(\hat{Z} = 0\), implying the current solution is optimal for the master problem. Hence, the final optimal solution is:
\[ \hat{\mathbf{x}} = 0.5 \, \mathbf{x}^1 + 0.5 \, \mathbf{x}^3 = 0.5 \, (2, 2, 2) + 0.5 \, (2, 1, 2) = (2, 1.5, 2), \quad \text{with cost } \mathbf{c}^T \hat{\mathbf{x}} = -21.5. \]
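Finally, running the dantzig_wolfe sketch from the column generation section on this instance reproduces the whole run: starting from \(\mathbf{x}^1\) and \(\mathbf{x}^2\), one pricing step brings in \(\mathbf{x}^3 = (2, 1, 2)\), and the loop terminates with the solution above.
```python
# Reproduce the worked example with the dantzig_wolfe sketch above.
# P is the box [1, 2]^3, so pricing is a bounds-only LP.
import numpy as np
from scipy.optimize import linprog

c  = np.array([-4.0, -1.0, -6.0])
D  = np.array([[3.0, 2.0, 4.0]])
b0 = np.array([17.0])

def pricing(c_mod):
    res = linprog(c_mod, bounds=(1, 2), method="highs")
    return res.fun, res.x

x_init = [np.array([2.0, 2.0, 2.0]), np.array([1.0, 1.0, 2.0])]   # x^1, x^2
x_opt, cost = dantzig_wolfe(c, D, b0, pricing, x_init)
print(x_opt, cost)   # expected: [2.0, 1.5, 2.0] and -21.5
```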