From Encyclopedia of Mathematics

One major goal of complexity theory is to obtain precise bounds on the number of operations performed by algorithms for solving a particular problem. This purpose splits into two directions: the design of specific algorithms leads to upper bounds on the computational complexity of a problem; the analysis of all permitted algorithms together with the structure of a problem can provide lower bounds (cf. also Algorithm; Algorithm, computational complexity of an).

If both bounds match, the ideal situation of an optimal algorithm is reached. However, in many situations such optimal bounds are not (yet) available. Consequently, one is tempted to give at least a classification of problems in terms of statements like: one problem is at least as hard to solve as another one. This leads to the notions of reducibility and completeness.

Consider a computational model for uniform algorithms over a set $\Sigma$. Two prominent examples are

1) The Turing machine: here, $\Sigma$ is a finite alphabet, typically $\Sigma = \{ 0,1 \}$. Operations used are bit operations on strings over $\Sigma$ (see [a2]).

2) The real Turing or Blum–Shub–Smale machine (BSS model): here, $\Sigma$ is the set of real numbers. Operations used are the arithmetic operations $+ , - , \times , /$, together with a test instruction "$x \geq 0$?" for $x \in \mathbf{R}$ (see [a1]).

A decision problem is a subset $S$ of the set $\Sigma ^ { * } = \bigcup _ { n \geq 1 } \Sigma ^ { n }$. An algorithm solves $S$ if it computes the characteristic function of $S$ in $\Sigma ^ { * }$. In order to speak about complexity, a size measure for a problem instance $x \in \Sigma ^ { * }$ as well as a cost measure for algorithms must be introduced. In the above settings, the size of $x \in \Sigma ^ { n }$ is $\operatorname { size } ( x ) = n$, whereas the cost or running time of a corresponding algorithm is the number of executed operations. (In particular, for computations over integers the Turing model uses a logarithmic size measure together with bit-costs, whereas in the BSS model one exploits algebraic measures.)

Next, complexity classes (of decision problems) can be defined by bounding the running time of an algorithm in the size of a problem instance. Of particular importance is the class $\mathcal{P}$ of problems solvable in polynomial time. A problem is in $\mathcal{P}$ if it can be decided by an algorithm the running time of which is bounded by a polynomial function in the input size.
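As a toy illustration (not from the article), consider the set $S$ of palindromic bit-strings; a sketch of a decider whose running time is plainly polynomial in $\operatorname{size}(x)$:

```python
def decide_palindrome(x):
    """Decide a toy problem in P: S = the set of palindromic bit-strings.

    The loop performs at most n/2 comparisons for size(x) = n, so the
    running time is bounded by a (linear) polynomial in the input size.
    """
    return all(x[i] == x[len(x) - 1 - i] for i in range(len(x) // 2))

print(decide_palindrome("0110"))  # True
print(decide_palindrome("0111"))  # False
```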

For example, the linear programming problem is in $\mathcal{P}$ over the integers in the Turing model (see [a4]). It is not yet (1998) known to belong to $\mathcal{P}$ over the real numbers.

Problems in $\mathcal{P}$ are considered to be efficiently solvable. However, for a large class of problems it can only be shown that there exist fast verification procedures instead of fast decision procedures. The latter is formalized by defining the class $\cal N P$.

A decision problem $S$ is in the class $\cal N P$ if there exists an algorithm $M$ such that the following holds: As input, $M$ takes an element $x \in \Sigma ^ { * }$ (the current problem instance) together with an additional "guess" $z \in \Sigma ^ { * }$ (representing a possible proof of "$x \in S$"). If $x \in S$, there must exist a suitable guess $\overline{z}$ such that $M$ on input $( x , \overline{z} )$ accepts in polynomial time with respect to $\operatorname{size}( x )$. If $x \notin S$, no $z$ exists for which $M ( x , z )$ accepts.

Such algorithms are called non-deterministic since there is no information available on how to obtain a correct $z$ efficiently.
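Any verifier does yield a decision procedure by exhaustively trying all guesses, but a generally exponential-time one; the following is a hypothetical sketch, with a subset-sum-style verifier chosen purely for illustration:

```python
from itertools import product

def decide_by_search(verifier, x, guess_alphabet, guess_len):
    """Turn a verifier M(x, z) into a decider for S by trying every guess z
    of the given length.  Correct, but the loop runs through
    len(guess_alphabet)**guess_len candidates -- exponential in guess_len --
    which is why membership of S in NP does not by itself place S in P."""
    return any(verifier(x, "".join(z))
               for z in product(guess_alphabet, repeat=guess_len))

# Illustration: x = (numbers, target); the guess z selects a subset by bits.
verifier = lambda x, z: sum(v for v, b in zip(x[0], z) if b == "1") == x[1]
print(decide_by_search(verifier, ([3, 5, 7], 12), "01", 3))  # True: 5 + 7 = 12
print(decide_by_search(verifier, ([3, 5, 7], 4), "01", 3))   # False
```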

Note that $\mathcal{P} \subseteq \mathcal{NP}$; the question of whether equality holds is the major open (1998) problem in complexity theory.


Turing model.

Given a graph $G$ over $n$ nodes, the $H C$ problem asks for the existence of a Hamiltonian circuit (cf. also Graph circuit). A fast verification procedure is given by guessing a permutation of $\{ 1 , \ldots , n \}$ and checking whether it represents such a circuit of $G$. Thus, $H C \in \mathcal{NP}$ over $\Sigma = \{ 0,1 \}$.
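The verification step for $HC$ can be made concrete. A minimal Python sketch (the adjacency-set representation and function name are illustrative assumptions, not from the article):

```python
def is_hamiltonian_circuit(adj, perm):
    """The verifier M(x, z) for HC: x is a graph given by adjacency sets
    `adj` on nodes 0..n-1, z is a guessed permutation `perm`.  Checking
    takes O(n) edge lookups -- polynomial in the size of the graph."""
    n = len(adj)
    if sorted(perm) != list(range(n)):   # z must be a permutation of the nodes
        return False
    # every consecutive pair (cyclically) must be an edge of G
    return all(perm[(i + 1) % n] in adj[perm[i]] for i in range(n))

# 4-cycle 0-1-2-3-0: the identity ordering is a Hamiltonian circuit
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(is_hamiltonian_circuit(adj, [0, 1, 2, 3]))  # True
print(is_hamiltonian_circuit(adj, [0, 2, 1, 3]))  # False: 0-2 is not an edge
```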


BSS model.

Given a polynomial $f \in \mathbf{R} [ x _ { 1 } , \dots , x _ { n } ]$ of degree $4$, the problem $F ^ { 4 }$ asks for the existence of a real zero of $f$. A fast verification procedure is given by guessing a possible zero $x$, plugging it into $f$ and checking whether $f ( x ) = 0$. Thus $F ^ { 4 } \in \mathcal{N} \mathcal{P}$ over $\Sigma = \mathbf{R}$.
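The verifier for $F^4$ is just polynomial evaluation. In the BSS model, exact real arithmetic is a machine primitive; the sketch below stands in for it with exact rationals (representation and names are illustrative assumptions):

```python
from fractions import Fraction

def verify_zero(monomials, x):
    """Verifier for F^4: evaluate f at the guessed point x and test f(x) = 0.
    `monomials` maps exponent tuples to coefficients.  Exact Fraction
    arithmetic avoids floating-point rounding; in the BSS model the test
    "f(x) = 0?" is exact by definition, so this is only a stand-in."""
    total = Fraction(0)
    for exponents, coeff in monomials.items():
        term = Fraction(coeff)
        for xi, e in zip(x, exponents):
            term *= Fraction(xi) ** e
        total += term
    return total == 0

# f(x1, x2) = x1^4 - 2*x1^2*x2 + x2^2 = (x1^2 - x2)^2, degree 4
f = {(4, 0): 1, (2, 1): -2, (0, 2): 1}
print(verify_zero(f, [2, 4]))  # True: (2^2 - 4)^2 = 0
```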


For the above examples (and for many more), no optimal complexity bounds are known. Nevertheless, among all other problems within the corresponding $\cal N P$ class they have a particular status. A decision problem $A$ is called reducible in polynomial time to another problem $B$ if there exists a polynomial-time algorithm $M$ mapping $\Sigma ^ { * }$ to $\Sigma ^ { * }$ such that $M ( x ) \in B$ if and only if $x \in A$. Problem $B$ is called $\cal N P$-hard if all problems in $\cal N P$ are reducible to $B$ in polynomial time. It is called $\cal N P$-complete if, in addition, $B \in \mathcal{N} \mathcal{P}$ holds.
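A standard textbook example of such a reduction (not discussed in the article itself) maps $HC$ to the travelling-salesman decision problem: edges of $G$ receive weight $1$, non-edges weight $2$, and $G$ has a Hamiltonian circuit iff some tour has total cost at most $n$. A sketch, with names and representation as illustrative assumptions:

```python
from itertools import permutations

def hc_to_tsp(adj):
    """The reduction M: an HC instance (graph on nodes 0..n-1, adjacency
    sets `adj`) maps to a TSP decision instance (complete weight matrix
    plus cost bound n).  Building the matrix takes O(n^2) steps, so the
    reduction runs in polynomial time, and M(x) is a yes-instance of TSP
    iff x is a yes-instance of HC."""
    n = len(adj)
    weight = [[1 if j in adj[i] else 2 for j in range(n)] for i in range(n)]
    return weight, n

def has_cheap_tour(weight, bound):
    """Brute-force TSP decision (exponential; used only to check the
    reduction on tiny instances)."""
    n = len(weight)
    return any(sum(weight[p[i]][p[(i + 1) % n]] for i in range(n)) <= bound
               for p in permutations(range(n)))

cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}  # has a Hamiltonian circuit
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}         # has none
print(has_cheap_tour(*hc_to_tsp(cycle)))  # True
print(has_cheap_tour(*hc_to_tsp(star)))   # False
```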



$H C$ is $\cal N P$-complete over $\Sigma = \{ 0,1 \}$ (cf. [a2]).


$F ^ { 4 }$ is $\cal N P$-complete over $\Sigma = \mathbf{R}$ (cf. [a1]).


If any $\cal N P$-hard or $\cal N P$-complete problem were solvable in polynomial time, then $\mathcal{P} = \mathcal{N}\mathcal{P}$. This is why $\cal N P$-complete problems are of particular interest in complexity theory.

Knowing the completeness or hardness of a problem does not remove the necessity of solving it. There is a huge literature on this issue in connection with many complete problems. As a starting point with respect to the above-mentioned problems in graph theory and polynomial system solving, see [a3] and [a1], respectively.

Finally, it should be noted that hardness and completeness notions can also be defined with respect to other resources, such as space or parallel computational devices (cf. [a2]).


[a1] L. Blum, F. Cucker, M. Shub, S. Smale, "Complexity and real computation" , Springer (1997)
[a2] M.R. Garey, D.S. Johnson, "Computers and intractability" , Freeman (1979)
[a3] M. Grötschel, L. Lovász, A. Schrijver, "Geometric algorithms and combinatorial optimization" , Springer (1988)
[a4] L.G. Khachiyan, "A polynomial algorithm in linear programming" Soviet Math. Dokl. , 20 (1979) pp. 191–194 (translated from Dokl. Akad. Nauk SSSR , 244 (1979) pp. 1093–1096)
This article was adapted from an original article by Hubertus Th. Jongen and Klaus Meer (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.