# Integer programming

A branch of mathematical programming in which one investigates optimization problems (maximization or minimization) for functions of several variables that are related by equations and/or inequalities and that are required to take integer values. (Other terms are discrete programming and discrete optimization.) Integer programming problems arise in technology, economics, and military applications.

The condition that the variables be integer valued formally reflects: a) the physical indivisibility of the objects (for example, in the distribution of enterprises or in the choice of combat actions); b) the finiteness of the set of feasible variants over which the optimization proceeds (for example, the set of permutations in ordering problems); and c) the presence of logical conditions which, depending on whether or not they hold, change the form of the objective function and the constraints of the problem.

The most widely studied and most used integer programming problem is the so-called integer linear programming problem: Maximize

$$\sum_{j=1}^nc_jx_j$$

subject to

$$\sum_{j=1}^na_{ij}x_j=b_i,\quad i=1,\dots,m,$$

where $x_j\geq0$, $j=1,\dots,n$, the $x_j$ are integers for $j=1,\dots,p$, $p\leq n$, the $a_{ij}$, $b_i$ and $c_j$ are given integers, and the $x_j$ are variables.
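For intuition, a very small instance of this problem can be solved by brute force: enumerate all non-negative integer vectors inside a box and keep the best feasible one. The sketch below is illustrative only (the helper name `solve_ilp_by_enumeration`, the bound `bound`, and the sample data are assumptions, not part of the formulation), and it takes the special case $p=n$, i.e. all variables integer:

```python
from itertools import product

def solve_ilp_by_enumeration(c, A, b, bound):
    """Brute-force an all-integer linear program:
    maximize c.x subject to A x = b, x >= 0 integer, x_j <= bound.
    Illustrative only: the enumeration grows exponentially in len(c)."""
    best_x, best_val = None, None
    n = len(c)
    for x in product(range(bound + 1), repeat=n):
        # feasibility: every equality constraint must hold exactly
        if all(sum(A[i][j] * x[j] for j in range(n)) == b[i]
               for i in range(len(b))):
            val = sum(cj * xj for cj, xj in zip(c, x))
            if best_val is None or val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# toy instance: maximize 3*x1 + 2*x2 subject to x1 + x2 = 4, x1, x2 >= 0 integer
x, val = solve_ilp_by_enumeration(c=[3, 2], A=[[1, 1]], b=[4], bound=4)
# optimum: x = (4, 0) with objective value 12
```

This exhaustive search is exactly the "naive" approach discussed below: its cost is $(\text{bound}+1)^n$ evaluations, which is why practical methods instead prune the feasible set.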

The solution methods for integer programming problems (relaxation (cf. Relaxation method), cutting planes, dynamic programming, "branch-and-bound", and others) are based on reducing the set of feasible solutions that must be examined. The "naive" approach, a complete enumeration of all feasible solutions (when there are finitely many), requires an amount of computational work that grows exponentially with the number of variables and is therefore impractical. The complexity of the theoretical and numerical problems that arise in solving integer programming problems can be illustrated by the fact that Fermat's so-called last theorem can be stated in the following equivalent form: Minimize

$$(x_1^t+x_2^t-x_3^t)^2$$

subject to

$$x_1\geq1,\quad x_2\geq1,\quad x_3\geq1,\quad t\geq3,$$

where $t$, $x_1$, $x_2$, and $x_3$ are integers. If some method of integer programming yielded a positive value for the minimum of the objective function, this would be a constructive proof of Fermat's theorem; if it yielded 0, it would be a refutation.
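A finite truncation of this formulation can at least be searched numerically. The sketch below (the function name `fermat_objective_min` and the search bounds `max_x`, `max_t` are assumptions for illustration) restricts the variables to a small box and evaluates the objective there; finding a positive minimum on a box proves nothing about the unbounded problem, which is precisely the difficulty the article points to:

```python
from itertools import product

def fermat_objective_min(max_x, max_t):
    """Minimize (x1^t + x2^t - x3^t)^2 over integers
    x1, x2, x3 in [1, max_x] and t in [3, max_t].
    A finite truncation of the full integer program, for illustration."""
    best = None
    for t in range(3, max_t + 1):
        for x1, x2, x3 in product(range(1, max_x + 1), repeat=3):
            v = (x1 ** t + x2 ** t - x3 ** t) ** 2
            if best is None or v < best:
                best = v
    return best

# e.g. x1 = x2 = x3 = 1 gives (1 + 1 - 1)^2 = 1, and the value 0 is
# never attained (Fermat's last theorem), so the minimum on the box is positive
m = fermat_objective_min(max_x=12, max_t=4)
```

The search cost is $(\text{max\_t}-2)\cdot \text{max\_x}^3$ objective evaluations, again exponential in the number of bits needed to write down the bounds.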

The central theoretical question in integer programming is: Can complete enumeration be avoided in solving integer programming problems? One mathematical formulation of this question is: Do the classes $\mathcal P$ and $\mathcal{NP}$ coincide? The class $\mathcal P$ (respectively, $\mathcal{NP}$) consists of all decision problems that can be solved on a deterministic (respectively, non-deterministic) Turing machine in polynomial time, that is, in a number of computational operations bounded by a polynomial in the "input size" of the problem. The class $\mathcal{NP}$ includes the decision versions of all integer programming problems that have an exponential number of feasible solutions (relative to the "input size" of the problem). The problem "$\mathcal P=\mathcal{NP}$?" remains open at present (1988).

#### References

- [1] E.G. Gol'shtein, D.B. Yudin, "New directions in linear programming", Moscow (1966) (In Russian)
- [2] A.A. Korbut, Yu.Yu. Finkel'shtein, "Discrete programming", Moscow (1969) (In Russian)
- [3] M.R. Garey, D.S. Johnson, "Computers and intractability: a guide to the theory of NP-completeness", Freeman (1979)
- [4] G.L. Nemhauser, L.A. Wolsey, "Integer and combinatorial optimization", Wiley (1988)

For more information on $\mathcal P$ and $\mathcal{NP}$ see Complexity theory.
Unless $\mathcal P=\mathcal{NP}$, the general integer programming problem is not solvable in polynomial time, whereas the general linear programming problem is solvable in polynomial time.