# Simplex method

*Also known as the method of sequential plan improvement.*

A method for solving the general linear programming problem:

$$\sum _ {j = 1 } ^ { n } c _ {j} x _ {j} \rightarrow \max ; \ \ \sum _ {j = 1 } ^ { n } A _ {j} x _ {j} = A _ {0} ;$$

$$x _ {j} \geq 0,\ j = 1, \dots, n,$$

where $A _ {j} = ( a _ {1j}, \dots, a _ {mj} ) ^ {T}$ for $j = 0, \dots, n$.

The simplex method is the most widely used linear programming method. It moves over adjacent vertices of the polyhedral set defined by the constraints of the linear programming problem and terminates after a finite number of iterations. The basis of a vertex $x = ( x _ {1}, \dots, x _ {n} )$ of the polyhedral feasible set of the problem is a system of $m$ linearly independent vectors $A _ {s _ {1} }, \dots, A _ {s _ {m} }$ ($s _ \alpha \geq 1$, $\alpha = 1, \dots, m$) such that $x _ {j} = 0$ if $j \notin \{ s _ {1}, \dots, s _ {m} \}$. The input for each iteration of the method consists of the basis $A _ {x} = ( A _ {s _ {1} }, \dots, A _ {s _ {m} } )$ of $x$, the parameters $x _ {ij}$ defined by

$$A _ {j} = \ \sum _ {i = 1 } ^ { m } x _ {ij} A _ {s _ {i} } ,\ \ j = 0, \dots, n$$

(in particular, $x _ {i0} = x _ {s _ {i} }$ are the basis components of $x$), and the parameters

$$\Delta _ {j} = \ \sum _ {i = 1 } ^ { m } c _ {s _ {i} } x _ {ij} - c _ {j} ,\ \ j = 1, \dots, n.$$

If $\Delta _ {j} \geq 0$ for all $j$, then $x$ is the desired solution of the problem. Otherwise one chooses a negative parameter $\Delta _ {k}$. If none of the $x _ {ik}$, $i = 1, \dots, m$, is positive, then the problem is unsolvable, since the objective function is unbounded on the polyhedral feasible set. If some $x _ {ik}$ is positive, then $x$ is replaced by $x ^ \prime = x + \theta x ^ {k}$, where

$$x ^ {k} = ( x _ {1} ^ {k} ,\dots, x _ {n} ^ {k} ),\ \ x _ {s _ {i} } ^ {k} = - x _ {ik} ,\ \ i = 1, \dots, m,\ \ x _ {k} ^ {k} = 1,$$

the remaining components $x ^ {k}$ being zero, and

$$\theta = \ \min _ {i: x _ {ik} > 0 } \ \frac{x _ {i0} }{x _ {ik} } = \ \frac{x _ {r0} }{x _ {rk} } .$$

Then the vertex $x ^ \prime$ has a basis $A _ {x ^ \prime }$, which differs from $A _ {x}$ in that $A _ {s _ {r} }$ is replaced by $A _ {k}$. The parameters $x _ {ij} ^ \prime$ and $\Delta _ {j} ^ \prime$ for $A _ {x ^ \prime }$ are expressed in terms of $x _ {ij}$ and $\Delta _ {j}$ by simple recurrence formulas.
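The iteration described above can be sketched in code. The following is a minimal dense implementation, a sketch rather than a production solver: it recomputes $A _ {x} ^ {- 1}$ from scratch at every iteration instead of using the recurrence formulas, uses no anti-cycling rule, and assumes a starting feasible basis is already known.

```python
import numpy as np

def simplex(A, b, c, basis, tol=1e-9):
    """Maximize c @ x subject to A @ x == b, x >= 0, starting from a
    feasible basis given as a list of m column indices of A.

    Returns (x, 'optimal') or (None, 'unbounded').
    """
    m, n = A.shape
    basis = list(basis)
    while True:
        B_inv = np.linalg.inv(A[:, basis])      # A_x^{-1} (recomputed; see text)
        x0 = B_inv @ b                          # x_{i0}: basic components of x
        # Delta_j = sum_i c_{s_i} x_{ij} - c_j  (the parameters Delta_j above)
        deltas = c[basis] @ (B_inv @ A) - c
        if np.all(deltas >= -tol):              # optimality criterion
            x = np.zeros(n)
            x[basis] = x0
            return x, 'optimal'
        k = int(np.argmin(deltas))              # entering index with Delta_k < 0
        xk = B_inv @ A[:, k]                    # pivot column x_{ik}
        if np.all(xk <= tol):                   # edge is a ray: unbounded
            return None, 'unbounded'
        # ratio test: theta = min over x_{ik} > 0 of x_{i0} / x_{ik}
        ratios = np.where(xk > tol, x0 / np.where(xk > tol, xk, 1.0), np.inf)
        r = int(np.argmin(ratios))              # leaving row
        basis[r] = k                            # A_{s_r} is replaced by A_k
```

For instance, for the textbook problem $3 x _ {1} + 5 x _ {2} \rightarrow \max$ subject to $x _ {1} \leq 4$, $2 x _ {2} \leq 12$, $3 x _ {1} + 2 x _ {2} \leq 18$, put into equality form with slack variables and started from the slack basis, this returns the vertex $( 2, 6 )$ with value $36$.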

The case $\Delta _ {j} \geq 0$ for all $j$ means that along every edge of the polyhedral set beginning at $x$, the objective function does not increase. The cases in which some $\Delta _ {k} < 0$ correspond to the existence of an edge along which the objective function increases: when no $x _ {ik}$ is positive this edge is a ray, and when some $x _ {ik}$ is positive it is an interval with $x ^ \prime$ being the other end. Iterations are continued until one obtains the optimal vertex or has to conclude that the problem is unsolvable.

Implementations of the simplex method for problems of sufficiently large size are usually based on a different algorithmic form, in which the input for every iteration is the inverse matrix $A _ {x} ^ {- 1}$ of $A _ {x}$ (the inverse matrix, or revised simplex, method). It is designed for linear programming problems whose matrix $A = ( A _ {1}, \dots, A _ {n} )$ is sparse (the number of non-zero entries $a _ {ij}$ is small compared to $mn$). Sparse matrices can be kept in the computer's memory in compact form, storing only the non-zero elements and their positions. Special ways of storing the inverse of the basis have been developed, allowing one to reduce the memory used for the description of $A _ {x} ^ {- 1}$. They are based on representing $A _ {x} ^ {- 1}$ as a product of elementary matrix multipliers (multiplicators) that differ from the identity matrix in only one column. The number of non-zero elements of the multiplicators depends on the order in which the vectors are introduced into the basis. Therefore, after a certain number of multiplicators has accumulated, a so-called reinversion is performed, which gives a shorter multiplicative representation of $A _ {x} ^ {- 1}$.
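This multiplicative (product-form) representation can be illustrated as follows. In this sketch (the helper names are hypothetical) the initial basis is taken to be the identity (a slack basis); each pivot appends one elementary factor, determined by a single column, and $A _ {x} ^ {- 1} v$ is computed by applying the accumulated factors in order, without ever forming the inverse explicitly.

```python
import numpy as np

def eta_from_pivot(r, xk):
    """Column of the multiplicator recorded when A_{s_r} is replaced by
    A_k; xk = A_x^{-1} A_k is the pivot column, xk[r] the pivot element."""
    xk = np.asarray(xk, dtype=float)
    eta = -xk / xk[r]
    eta[r] = 1.0 / xk[r]
    return eta

def apply_eta(r, eta, v):
    """Multiply v by the elementary matrix equal to the identity except
    that its column r is eta.  Only the non-zeros of eta are needed,
    which is what makes the multiplicative representation compact."""
    w = np.array(v, dtype=float)
    vr = w[r]
    w += eta * vr          # rows i != r receive eta_i * v_r
    w[r] = eta[r] * vr     # row r is rescaled by the pivot
    return w

def basis_inverse_times(eta_file, v):
    """Compute A_x^{-1} v as E_t ... E_1 v (basis started from the identity)."""
    for r, eta in eta_file:
        v = apply_eta(r, eta, v)
    return v
```

In these terms a reinversion simply discards the accumulated factors and refactorizes the current basis from scratch, producing a shorter and sparser file of multiplicators.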

An important part of any simplex algorithm is the strategy used to select the vectors $A _ {k}$ introduced into the basis. On the one hand, it must promote the reduction of the memory used for the description of $A _ {x} ^ {- 1}$; on the other, it must prevent hitting on an ill-conditioned basis.

There are implementations of the simplex method for the solution of linear programming problems having sparse constraint matrices, with $m$ and $n$ of the order of thousands and tens of thousands, respectively.

Several variants of the simplex method have been developed that take into account the peculiarities of various special classes of linear programming problems (block problems, transportation problems and others).

In spite of the fact that the simplex method is theoretically not sufficiently efficient (its worst-case efficiency estimate on the class of linear programming problems is exponential, although the algorithmic complexity of the class as a whole is only polynomial), until now (1990) no serious competitors have been suggested.
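The exponential worst-case estimate is witnessed by the well-known Klee–Minty example (1972), a deformed $n$-cube on which the simplex method with the classical rule of entering the most negative $\Delta _ {k}$ passes through all $2 ^ {n}$ vertices. One common form of the example is

$$\sum _ {j = 1 } ^ { n } 2 ^ {n - j } x _ {j} \rightarrow \max ; \ \ \sum _ {j = 1 } ^ { i - 1 } 2 ^ {i - j + 1 } x _ {j} + x _ {i} \leq 5 ^ {i} ,\ \ i = 1, \dots, n; \ \ x _ {j} \geq 0,$$

whose optimal value $5 ^ {n}$ is attained at the vertex $( 0, \dots, 0, 5 ^ {n} )$, so that the number of iterations grows exponentially with $n$ while the problem itself has size polynomial in $n$.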
