Complexity theory

From Encyclopedia of Mathematics
The classification of mathematical problems into decidable and undecidable ones is a most fundamental one. It is also of definite practical significance in discouraging attempts to design too-general systems, such as systems for deciding the equivalence of programs or the halting of a program.
 
However, this classification is too coarse in many respects. A finer classification of decidable problems in terms of their complexity is discussed below. Two problems might both be decidable and yet one might be enormously more difficult to compute, which in practice might make this problem resemble an undecidable one. For example, if instances of reasonable size of one problem take only a few seconds to compute, whereas instances of comparable size of the other problem take millions of years (even if best computers are available), it is clear that the latter problem should be considered intractable compared with the former one. Hence, having established the decidability of a particular problem, one should definitely study its complexity — that is, how difficult it is to settle specific instances of the problem. Complexity considerations are of crucial importance, for instance, in [[Cryptography|cryptography]]. If one is not able to decrypt a message within a certain time limit, one might as well forget the whole thing because, as time passes by, the situation might change entirely. Related material is discussed also in the article [[Algorithm, computational complexity of an|Algorithm, computational complexity of an]].
 
Thus, typical questions in the theory of computational complexity would be: Is the [[Algorithm|algorithm]] $A_{1}$ better than the algorithm $A_{2}$ (for the same problem) in the sense that it uses fewer resources, such as time or memory space? Does the problem $P$ have a best algorithm? Is the problem $P$ more difficult than the problem $P_{1}$ in the sense that every algorithm for solving $P$ is more complex than some fixed reasonable algorithm for solving $P_{1}$?
  
 
Natural complexity measures are obtained by considering Turing machines (cf. [[Turing machine|Turing machine]]): How many steps (in terms of the length of the input) does a computation require? This is a natural formalization of the notion of time resource. Similarly, the number of squares visited during a computation constitutes a natural space measure. (Cf. [[Formal languages and automata|Formal languages and automata]] and [[Computable function|Computable function]].)
 
 
Consider an [[Enumeration|enumeration]]

$$ \tag{a1 }
TM_{1}, \dots, TM_{i}, \dots
$$
  
 
of all Turing machines. Enumeration (a1) determines an enumeration

$$ \tag{a2 }
\overline{f}_{1}, \dots, \overline{f}_{i}, \dots
$$
  
 
of all partial recursive functions of one variable (cf. [[Partial recursive function|Partial recursive function]]). An enumeration

$$ \tag{a3 }
f_{1}, \dots, f_{i}, \dots
$$
  
 
of all partial recursive functions of one variable is termed acceptable if and only if it is possible to go effectively from (a2) to (a3), and vice versa. In other words, given an index for a function in (a2), one can find an index for the same function in (a3), and vice versa.
 
A complexity measure is a pair $CM = (F, \Phi)$, where $F$ is an acceptable enumeration (a3) of partial recursive functions and $\Phi$ is an infinite sequence

$$ \tag{a4 }
\phi_{1}, \dots, \phi_{i}, \dots
$$
  
 
of partial recursive functions such that the Blum axioms B1 and B2 are satisfied.
 
B1. For each $i \geq 1$, the domains of $f_{i}$ and $\phi_{i}$ coincide.

B2. The function $g(i, n, m)$ defined by

$$
g(i, n, m) = \left\{
\begin{array}{cl}
1 & \textrm{ if } \phi_{i}(n) = m, \\
0 & \textrm{ otherwise } \\
\end{array}
\right.
$$
  
 
is recursive.
 
The function $\phi_{i}$ is referred to as the complexity function, or cost function, of $f_{i}$.
  
For instance, if one chooses $\phi_{i}(n)$ to be the number of steps in the computation of a Turing machine for $f_{i}$ on the input $n$, one clearly obtains a complexity measure. Similarly, a complexity measure results if one lets $\phi_{i}(n)$ be the number of squares visited during the computation of the same Turing machine on the input $n$, provided one considers a variant of Turing machines in which no machine loops using only a finite amount of tape.
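As a concrete illustration of the step-counting measure and of axiom B2, consider the following Python toy. All names here (`run_with_budget`, `DoublingProgram`, `g`) are invented for this sketch and are not part of the formal construction: a "program" exposes a step function, its cost $\phi(n)$ is the number of steps it takes to halt, and $g$ is computable because the condition $\phi_{i}(n) = m$ can be checked by simulating for at most $m$ steps.

```python
def run_with_budget(program, n, budget):
    """Run `program` on input n for at most `budget` steps.
    Returns the number of steps taken to halt, or None if the budget is
    exhausted first (so phi(n) > budget)."""
    state = program.start(n)
    for step in range(1, budget + 1):
        state = program.step(state)
        if program.halted(state):
            return step
    return None

class DoublingProgram:
    """Toy 'machine': increments an accumulator n times, so it computes
    f(n) = 2n and halts after exactly n steps on input n."""
    def start(self, n):
        return {"todo": n, "acc": n}
    def step(self, state):
        return {"todo": state["todo"] - 1, "acc": state["acc"] + 1}
    def halted(self, state):
        return state["todo"] <= 0

def g(program, n, m):
    """Blum axiom B2 for the step-count measure: g = 1 iff phi(n) = m.
    Decidable, because we only ever simulate for m steps."""
    return 1 if run_with_budget(program, n, m) == m else 0

p = DoublingProgram()
print(g(p, 5, 5))  # 1: the toy program halts in exactly 5 steps on input 5
print(g(p, 5, 4))  # 0: after 4 steps it has not yet halted
```

By contrast, for the choice $\phi_{i} = f_{i}$ no such clocked simulation is available, which is why B2 fails there.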
  
On the other hand, the choice $\phi_{i} = f_{i}$ does not yield a complexity measure: Axiom B2 is not satisfied. The choice $\phi_{i}(n) = 1$ for all $i$ and $n$ does not yield a complexity measure because axiom B1 is not satisfied. These examples also show that the two axioms are independent.
  
Clearly, every partial recursive function $f$ occurs infinitely many times in the sequence (a3). If $f = f_{i}$, then the cost function $\phi_{i}$ is associated to an algorithm (for instance, a Turing machine) for computing $f$, rather than to the function $f$ itself. If $f = f_{j}$ as well, then the cost function $\phi_{j}$ might have essentially smaller values than $\phi_{i}$, showing that the algorithm corresponding to $f_{j}$ is essentially better. When and how is such a speedup possible? This question of speedup is one of the most interesting ones in complexity theory.
  
No matter what complexity measure and amount of speedup one considers, there are recursive functions $f$ such that an arbitrary algorithm for $f$ can be sped up by that amount. Suppose $f = f_{i}$ and consider the algorithm determined by $f_{i}$ (for instance, the Turing machine $TM_{i}$) to be a particularly efficient one. This means that one regards the amount of resource defined by $\phi_{i}$ to be particularly small, in view of the complexity of the function $f$. Suppose, further, that $h(x)$ is a very rapidly increasing function, for instance,

$$
h(x) = 2^{2^{2^{2^{x}}}}.
$$
  
Then there is another algorithm for $f$ that uses less resource, by as much as $h$ indicates. In other words, one also has $f = f_{j}$ and

$$
\phi_{i}(x) \geq 2^{2^{2^{2^{\phi_{j}(x)}}}}.
$$
  
However, this does not necessarily hold for all $x$, but only almost everywhere. Of course, the same procedure can be repeated for $f_{j}$. This gives rise to another speedup, but now $f_{j}$ is the algorithm using "much" resource. The speedup can be repeated again for the resulting function, and so on ad infinitum.
  
Thus, the speedup theorem implies that some functions $f$ have no best algorithm: For every person's favourite algorithm $f_{i}$, a speedup is possible. However, this is only true for some functions $f$ that might be considered "unnatural".
  
Now machine-oriented complexity theory is considered. Consider a Turing machine $TM$ that halts on all inputs $w$. The time-complexity function associated with $TM$ is defined by

$$
t_{TM}(n) = \max \{ m : TM \textrm{ halts after } m \textrm{ steps with an input } w \textrm{ such that } | w | = n \}.
$$
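This worst-case definition can be mirrored in a small Python sketch. It is illustrative only: `steps_to_scan_ones` is an invented stand-in for a Turing machine's step count, not an actual machine, and `t` takes the maximum of that count over all inputs of length $n$.

```python
from itertools import product

def steps_to_scan_ones(word):
    """Toy stand-in for a Turing machine's step count on `word`: one
    scan of the word, plus one extra pass for every '1' it contains."""
    return len(word) + word.count("1") * len(word)

def t(machine_steps, n, alphabet="01"):
    """t_TM(n): the worst-case step count over all inputs of length n."""
    return max(machine_steps("".join(w)) for w in product(alphabet, repeat=n))

print(t(steps_to_scan_ones, 3))  # worst case is the word "111": 3 + 3*3 = 12
```

Note that this brute-force maximization over all $2^n$ inputs is itself exponential in $n$; it only illustrates the definition, not a practical way to compute $t_{TM}$.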
  
Thus $t_{TM}(n)$ maps the set of non-negative integers into itself. One says that $TM$ is polynomially bounded if and only if there is a polynomial $P(n)$ such that $t_{TM}(n) \leq P(n)$ holds for all $n$. Denote by ${\mathcal P}$ the family of languages acceptable by polynomially-bounded Turing machines. (So far only deterministic Turing machines have been considered; non-deterministic ones are introduced below.)
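Polynomial boundedness is a statement about all $n$, so no finite test can establish it; still, the inequality $t_{TM}(n) \leq P(n)$ can be illustrated empirically for small $n$. The following toy sketch (all names invented for the illustration) contrasts a quadratically growing step count with an exponential one against a candidate polynomial bound.

```python
def polynomially_bounded_up_to(t, P, N):
    """Check t(n) <= P(n) for n = 0, ..., N only.  A True result is
    merely evidence, never a proof, of polynomial boundedness."""
    return all(t(n) <= P(n) for n in range(N + 1))

t_quad = lambda n: 3 * n * n + 1   # quadratically growing step count
t_exp  = lambda n: 2 ** n          # exponential step count
P      = lambda n: 4 * n ** 2 + 1  # candidate polynomial bound

print(polynomially_bounded_up_to(t_quad, P, 100))  # True
print(polynomially_bounded_up_to(t_exp, P, 100))   # False: 2^n overtakes P(n) at n = 9
```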
  
Although ${\mathcal P}$ is defined as a family of languages, it can be visualized as the collection of problems for which there exists an algorithm operating in polynomial time. One can always identify a [[Decision problem|decision problem]] with the membership problem for the language of "positive instances".
  
The family ${\mathcal P}$ is very natural from a mathematical point of view. This is seen from the fact that it is highly invariant with respect to the underlying model of computation. For instance, Turing machines $TM_{1}$ with several tapes are faster than ordinary Turing machines, in that their time-complexity function assumes smaller values. However, if the time-complexity function of such a $TM_{1}$ is bounded from above by a polynomial $P_{1}(n)$, one can (effectively) construct an ordinary Turing machine $TM$ with a polynomial bound $P(n)$ accepting the same language as $TM_{1}$. (In general, $P(n)$ assumes greater values than $P_{1}(n)$ but is still a polynomial.) Similarly, every language that is polynomially bounded with respect to any reasonable model of computation belongs to the family ${\mathcal P}$, as defined above with respect to the ordinary Turing machine.
  
The family ${\mathcal P}$ is also of crucial importance because languages outside ${\mathcal P}$ can be visualized as impossible to compute in practice. In fact, one says that a recursive language is intractable if it does not belong to ${\mathcal P}$.
  
Clearly, languages outside ${\mathcal P}$ are intractable from the practical point of view. The same can be said about those languages in ${\mathcal P}$ for which the polynomial bound is huge. However, it would not be natural to draw the borderline between tractability and intractability somewhere inside ${\mathcal P}$. Such a definition would also be time-varying: Drastic developments in computers could change it. On the other hand, the family ${\mathcal P}$ provides a very natural characterization of tractability.
  
Now non-deterministic Turing machines are considered: When scanning a specific symbol in a specific state, the machine may have several possibilities for its behaviour. Otherwise, a non-deterministic machine is defined like a deterministic one. A word $w$ is accepted if and only if it gives rise to an accepting computation, independently of the fact that it might also give rise to computations leading to failure. Thus, in connection with non-deterministic machines in general, all roads to failure are disregarded if there is one possible road to success.
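The rule that all roads to failure are disregarded can be phrased as a search over configurations. The sketch below is illustrative only: the subset-sum "machine" and all names are invented for this example, and breadth-first search is used so that the returned value is the length of a shortest accepting computation.

```python
from collections import deque

def nondet_accept_time(start, successors, accepting, max_steps=10_000):
    """Breadth-first search over configurations: returns the number of
    steps of the shortest accepting computation, or None if no branch
    accepts within max_steps."""
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        config, steps = frontier.popleft()
        if accepting(config):
            return steps
        if steps < max_steps:
            for nxt in successors(config):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + 1))
    return None

# Toy nondeterministic "machine": guess a subset of `items` summing to
# `target`; each step either skips or includes the next item.
items, target = [3, 5, 7], 12
def successors(config):
    i, s = config
    if i < len(items):
        yield (i + 1, s)             # branch 1: skip item i
        yield (i + 1, s + items[i])  # branch 2: include item i
def accepting(config):
    return config[1] == target

print(nondet_accept_time((0, 0), successors, accepting))  # 3
```

A deterministic simulation like this explores the whole computation tree, which is why it may take exponentially longer than the non-deterministic time being measured.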
  
The time required by a non-deterministic Turing machine $  TM $ to accept a word $  w \in L ( TM) $ is defined to be the number of steps in the shortest computation of $  TM $ accepting $  w $. The time-complexity function of $  TM $ is now defined by
  
$$
t _ {TM} ( n)  = \
\max \{ 1 , m : TM  \textrm{ accepts  in  time }  m  \textrm{ some }  w  \textrm{ where }  | w | = n \} .
$$
  
Thus, only accepting computations enter the definition of $  t _ {TM} ( n) $. If no words of length $  n $ are accepted, $  t _ {TM} ( n) = 1 $.
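The acceptance rule and the time-complexity function above can be sketched in code. The following is a minimal illustration, not the formal Turing-machine model: a non-deterministic computation is represented by a `step` function listing the possible successor configurations, and a breadth-first search finds the length of a shortest accepting computation; all names are illustrative.

```python
from collections import deque

def shortest_accepting_time(step, initial, is_accepting, max_steps=1000):
    """Breadth-first search over the tree of computations of a
    non-deterministic machine.  `step` maps a configuration to the
    list of its possible successor configurations.  Returns the number
    of steps in a shortest accepting computation, or None if no branch
    accepts within `max_steps` (all failing branches are disregarded)."""
    frontier = deque([(initial, 0)])
    seen = {initial}
    while frontier:
        conf, steps = frontier.popleft()
        if is_accepting(conf):
            return steps  # BFS explores by depth, so this is minimal
        if steps < max_steps:
            for successor in step(conf):
                if successor not in seen:
                    seen.add(successor)
                    frontier.append((successor, steps + 1))
    return None  # every branch leads to failure

# A toy machine: from configuration n, move to n - 1 or n - 3; accept at 0.
step = lambda n: [m for m in (n - 1, n - 3) if m >= 0]
```

For the toy machine, `shortest_accepting_time(step, 7, lambda n: n == 0)` is 3 (steps of size 3, 3 and 1), even though many other branches of the computation tree fail.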
  
 
Further formal details of the definition of a non-deterministic Turing machine are omitted.
  
Having defined the time-complexity function, the notion of a polynomially-bounded non-deterministic Turing machine is defined exactly as before. Denote by $  {\mathcal N} {\mathcal P} $ the family of languages acceptable by non-deterministic polynomially-bounded Turing machines.
  
Problems in $  {\mathcal P} $ are tractable, whereas problems in $  {\mathcal N} {\mathcal P} $ have the property that it is tractable to check whether or not a good guess for the solution of the problem is correct. A non-deterministic Turing machine may be visualized as a device that checks whether or not a guess is correct: It makes a guess (or several guesses) at some stage during its computation, and the final outcome is acceptance only in case the guess was (or the guesses were) correct. Thus, a time bound for a non-deterministic Turing machine is, in fact, a time bound for checking whether or not a guess for the solution is correct.
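The guess-and-check view can be illustrated with the knapsack-style subset-sum problem. Checking a guessed solution takes polynomial time, while the obvious deterministic simulation tries exponentially many guesses; this sketch uses illustrative names and is not tied to the Turing-machine formalism.

```python
from itertools import combinations

def verify(weights, target, certificate):
    """Polynomial-time check: does the guessed set of indices
    (the certificate) select weights summing to the target?"""
    return sum(weights[i] for i in certificate) == target

def accepts(weights, target):
    """Deterministic simulation of the non-deterministic machine:
    try every possible guess -- exponentially many in len(weights)."""
    indices = range(len(weights))
    return any(verify(weights, target, guess)
               for r in range(len(weights) + 1)
               for guess in combinations(indices, r))
```

The time bound of the non-deterministic machine corresponds to the cost of a single `verify` call; the exponential blow-up appears only in the deterministic simulation `accepts`.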
  
Clearly, $  {\mathcal P} $ is contained in $  {\mathcal N} {\mathcal P} $. However, it is not known whether or not the containment is proper. The problem of whether or not $  {\mathcal P} $ equals $  {\mathcal N} {\mathcal P} $ ( $  {\mathcal P} = {\mathcal N} {\mathcal P} $?) can justly be called the most celebrated open problem in the theory of computation. The significance of this question is due to the fact that many practically important problems are known to be in $  {\mathcal N} {\mathcal P} $, whereas it is not known whether or not they are in $  {\mathcal P} $. In fact, all known deterministic algorithms for these problems are exponential as far as time is concerned. Thus, a proof of $  {\mathcal P} = {\mathcal N} {\mathcal P} $ would make all of these problems tractable.
  
Non-deterministic Turing machines and guessing are not, as such, intended to model computation. Non-determinism is merely an auxiliary notion and, as will be seen, a very useful one. Indeed, if one wants to settle the question of whether or not $  {\mathcal P} = {\mathcal N} {\mathcal P} $, the following definitions and results show that it suffices to consider one particular language (it might be a favourite one!) and determine whether or not it is in $  {\mathcal P} $. There is a great variety of such so-called $  {\mathcal N} {\mathcal P} $-complete languages (see below), resulting from practically all areas of mathematics.
  
A language $  L _ {1} \subseteq \Sigma _ {1}  ^ {*} $ is polynomially reducible to a language $  L _ {2} \subseteq \Sigma _ {2}  ^ {*} $ — in symbols, $  L _ {1} \leq  _ {p} L _ {2} $ — if and only if there is a deterministic polynomially-bounded Turing machine $  TM $ translating words $  w _ {1} $ over $  \Sigma _ {1} $ into words $  w _ {2} $ over $  \Sigma _ {2} $ in such a way that $  w _ {1} $ being in $  L _ {1} $ is equivalent to its image $  w _ {2} $ being in $  L _ {2} $.
  
Observe that the Turing machine $  TM $ introduced in the previous definition must halt on all inputs — this is a consequence of $  TM $ being deterministic and polynomially bounded.
  
Clearly, whenever $  L _ {1} \leq  _ {p} L _ {2} $ and $  L _ {2} $ is in $  {\mathcal P} $, then $  L _ {1} $ is also in $  {\mathcal P} $.
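This closure property is simply composition: feed the output of the polynomial-time translation into a polynomial-time decider for $  L _ {2} $. A minimal sketch with hypothetical languages ($ L _ {1} $: binary words with an even number of 1's; $ L _ {2} $: the one-word language consisting of "e"):

```python
def reduce_to_L2(word):
    """Hypothetical polynomial-time reduction: w1 is in L1 (even
    number of 1's) if and only if its image is in L2 ({'e'})."""
    return 'e' if word.count('1') % 2 == 0 else 'o'

def in_L2(word):
    """A decider for L2 running in polynomial (here constant) time."""
    return word == 'e'

def in_L1(word):
    """Decider for L1 obtained by composing the reduction with the
    decider for L2; the composition is again polynomially bounded."""
    return in_L2(reduce_to_L2(word))
```

The composition runs in polynomial time because a polynomial of a polynomial is again a polynomial.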
  
A language $  L $ is called $  {\mathcal N} {\mathcal P} $-hard if $  L  ^  \prime  \leq  _ {p} L $ holds for every language $  L  ^  \prime  $ in $  {\mathcal N} {\mathcal P} $. $  L $ is termed $  {\mathcal N} {\mathcal P} $-complete if it is $  {\mathcal N} {\mathcal P} $-hard and, in addition, belongs to $  {\mathcal N} {\mathcal P} $.
  
$  {\mathcal N} {\mathcal P} $-complete languages can be visualized as representing the hardest problems in $  {\mathcal N} {\mathcal P} $. Moreover, to settle the question of whether or not $  {\mathcal P} = {\mathcal N} {\mathcal P} $, it suffices to decide whether or not an arbitrary $  {\mathcal N} {\mathcal P} $-complete language $  L $ is in $  {\mathcal P} $. Indeed, consider such a language $  L $. If $  L $ is not in $  {\mathcal P} $, then clearly $  {\mathcal P} \neq {\mathcal N} {\mathcal P} $. If $  L $ is in $  {\mathcal P} $, then the definition of $  {\mathcal N} {\mathcal P} $-completeness shows that every language in $  {\mathcal N} {\mathcal P} $ is also in $  {\mathcal P} $. But this means that $  {\mathcal P} = {\mathcal N} {\mathcal P} $.
  
The study of $  {\mathcal N} {\mathcal P} $-completeness was initiated in [[#References|[a2]]] and [[#References|[a4]]], whereas [[#References|[a3]]] is a pioneering paper on complexity theory as a whole. One is referred to [[#References|[a6]]] for a basic list of $  {\mathcal N} {\mathcal P} $-complete problems. Typical ones are: the traveling salesperson problem, the satisfiability problem for propositional calculus, the Hamiltonian circuit problem for graphs, the knapsack problem, the scheduling problem, and the membership problem for TOL languages (cf. [[L-systems| $  L $-systems]]).
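For instance, the satisfiability problem just mentioned can be attacked by the obvious exponential-time search below; checking a single guessed truth assignment takes polynomial time, which is exactly what places the problem in $  {\mathcal N} {\mathcal P} $. The encoding (clauses as lists of 1-based literals, negative for negated) is an illustrative choice.

```python
from itertools import product

def satisfiable(clauses, n_vars):
    """Exponential-time search over all 2**n_vars truth assignments
    of a formula in conjunctive normal form."""
    def satisfied(assignment):
        # Polynomial-time check of one guessed assignment:
        # every clause must contain at least one true literal.
        return all(any((lit > 0) == assignment[abs(lit) - 1]
                       for lit in clause)
                   for clause in clauses)
    return any(satisfied(a)
               for a in product([False, True], repeat=n_vars))
```

No deterministic algorithm essentially better than this exhaustive search is known for satisfiability, in line with the remarks on $  {\mathcal P} $ versus $  {\mathcal N} {\mathcal P} $ above.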
  
Even if a problem is decidable, it might still be intractable in the sense that any algorithm for solving it must use unmanageable amounts of computational resources. Intractability is defined formally by the condition of being outside $  {\mathcal P} $.
  
It has also been observed that, for very many important problems, all algorithms presently known operate in exponential time, whereas it has not been shown that these problems actually are intractable. All $  {\mathcal N} {\mathcal P} $-complete problems belong to this class. Indeed, establishing lower bounds is, in general, a difficult task in complexity theory. The complexity of a particular algorithm can usually be estimated, but it is harder to say something general about all algorithms for a particular problem — for instance, to give a lower bound for their complexity. Results such as the speedup theorem show that this might even be impossible in some cases. According to the theory of abstract complexity measures, one cannot expect to prove that any specific problem is intrinsically difficult in all measures. However, there are also a number of problems known to be intractable.
  
Space bounds are defined for Turing machines analogously to time bounds. The notations $  {\mathcal P} $-SPACE and $  {\mathcal N} {\mathcal P} $-SPACE stand for the families of languages acceptable by deterministic and non-deterministic Turing machines in polynomial space, respectively. The following relations hold:
  
$$
{\mathcal P}  \subseteq  {\mathcal N} {\mathcal P}  \subseteq \
{\mathcal P}- \textrm{ SPACE }  = \
{\mathcal N} {\mathcal P}- \textrm{ SPACE } .
$$
  
 
It is a celebrated open problem whether or not the inclusions are strict.

Latest revision as of 17:46, 4 June 2020


The classification of mathematical problems into decidable and undecidable ones is a most fundamental one. It is also of definite practical significance in discouraging attempts to design too-general systems, such as systems for deciding the equivalence of programs or the halting of a program.

However, this classification is too coarse in many respects. A finer classification of decidable problems in terms of their complexity is discussed below. Two problems might both be decidable and yet one might be enormously more difficult to compute, which in practice might make this problem resemble an undecidable one. For example, if instances of reasonable size of one problem take only a few seconds to compute, whereas instances of comparable size of the other problem take millions of years (even if best computers are available), it is clear that the latter problem should be considered intractable compared with the former one. Hence, having established the decidability of a particular problem, one should definitely study its complexity — that is, how difficult it is to settle specific instances of the problem. Complexity considerations are of crucial importance, for instance, in cryptography. If one is not able to decrypt a message within a certain time limit, one might as well forget the whole thing because, as time passes by, the situation might change entirely. Related material is discussed also in the article Algorithm, computational complexity of an.

Thus, typical questions in the theory of computational complexity would be: Is the algorithm $ A _ {1} $ better than the algorithm $ A _ {2} $ (for the same problem) in the sense that it uses fewer resources, such as time or memory space? Does the problem $ P $ have a best algorithm? Is the problem $ P $ more difficult than the problem $ P _ {1} $ in the sense that every algorithm for solving $ P $ is more complex than some fixed reasonable algorithm for solving $ P _ {1} $?

Natural complexity measures are obtained by considering Turing machines (cf. Turing machine): How many steps (in terms of the length of the input) does a computation require? This is a natural formalization of the notion of time resource. Similarly, the number of squares visited during a computation constitutes a natural space measure. (Cf. Formal languages and automata and Computable function.)

Such machine-oriented complexity considerations will be considered in the second half of this article. The discussion will be restricted to the basic Turing-machine model. Variations such as machines with many tapes and many heads or random-access machines are important in more detailed and specific complexity considerations. However, from the point of view of the most fundamental issues, the particular Turing-machine model chosen is irrelevant.

In the first half of this article, axiomatic complexity theory is being studied. No specific complexity measure, such as time or memory space, will be defined. Instead, the "abstract resource" used by an algorithm will be the term employed. The axioms applied are very natural. They also look very weak in the sense that they do not say much. However, quite a remarkable theory based on the axioms can be established. The theory was initiated in [a1].

The third major aspect of complexity theory is not discussed at all: low-level complexity, or the complexity of some specific but practically important algorithms and problems. One is referred to [a5], [a7] for this topic, as well as for more detailed information on the broad and highly developed area of complexity theory in general.

Consider an enumeration

$$ \tag{a1 } TM _ {1} , \dots, TM _ {i} , \dots $$

of all Turing machines. Enumeration (a1) determines an enumeration

$$ \tag{a2 } \overline{f}\; _ {1} , \dots, \overline{f}\; _ {i} , \dots $$

of all partial recursive functions of one variable (cf. Partial recursive function). An enumeration

$$ \tag{a3 } f _ {1} , \dots, f _ {i} , \dots $$

of all partial recursive functions of one variable is termed acceptable if and only if it is possible to go effectively from (a2) to (a3), and vice versa. In other words, given an index for a function in (a2), one can find an index for the same function in (a3), and vice versa.

A complexity measure is a pair $ CM = ( F, \Phi ) $, where $ F $ is an acceptable enumeration (a3) of partial recursive functions and $ \Phi $ is an infinite sequence

$$ \tag{a4 } \phi _ {1} , \dots, \phi _ {i} , \dots $$

of partial recursive functions such that the Blum axioms B1 and B2 are satisfied.

B1. For each $ i \geq 1 $, the domains of $ f _ {i} $ and $ \phi _ {i} $ coincide.

B2. The function $ g ( i, n, m) $ defined by

$$ g ( i, n, m) = \ \left \{ \begin{array}{cl} 1 & \textrm{ if } \phi _ {i} ( n) = m, \\ 0 & \textrm{ otherwise } , \\ \end{array} \right .$$

is recursive.

The function $ \phi _ {i} $ is referred to as the complexity function, or cost function, of $ f _ {i} $.

For instance, if one chooses $ \phi _ {i} ( n) $ to be the number of steps in the computation of a Turing machine for $ f _ {i} $ on the input $ n $, one clearly obtains a complexity measure. Similarly, a complexity measure results if one lets $ \phi _ {i} ( n) $ be the number of squares visited during the computation of the same Turing machine on the input $ n $, provided such a variant of Turing machines is considered where no machine loops using only a finite amount of tape.

On the other hand, the choice $ \phi _ {i} = f _ {i} $ does not yield a complexity measure: Axiom B2 is not satisfied. The choice $ \phi _ {i} ( n) = 1 $ for all $ i $ and $ n $ does not yield a complexity measure because axiom B1 is not satisfied. These examples also show that the two axioms are independent.
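A concrete toy instance of a measure satisfying both axioms: take programs to be Python generators, let the result be the generator's return value, and let the cost $ \phi _ {i} ( n) $ be the number of yields before halting. Then B1 holds because the cost is defined exactly when the program halts, and B2 holds because one can run a program for a bounded number of steps and check whether it halted in exactly $ m $ of them. All names here are illustrative.

```python
def run(program, n, fuel):
    """Run a generator-program on input n for at most `fuel` steps.
    Returns (halted, value, steps): one step per yield."""
    gen = program(n)
    steps = 0
    while steps < fuel:
        try:
            next(gen)
        except StopIteration as stop:
            return True, stop.value, steps
        steps += 1
    return False, None, steps

def g(program, n, m):
    """Axiom B2: decide whether the cost of `program` on n equals m,
    by running the program for at most m + 1 steps."""
    halted, _, steps = run(program, n, m + 1)
    return 1 if halted and steps == m else 0

def double(n):
    """A sample program computing 2*n; its cost on input n is n."""
    for _ in range(n):
        yield
    return 2 * n
```

Note that `g` is total even for programs that never halt, since the fuel bound cuts every run off after $ m + 1 $ steps; this is the content of axiom B2.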

Clearly, every partial recursive function $ f $ occurs infinitely many times in the sequence (a3). If $ f = f _ {i} $, then the cost function $ \phi _ {i} $ is associated to an algorithm (for instance, a Turing machine) for computing $ f $, rather than to the function $ f $ itself. If $ f = f _ {j} $ as well, then the cost function $ \phi _ {j} $ might have essentially smaller values than $ \phi _ {i} $, showing that the algorithm corresponding to $ f _ {j} $ is essentially better. When and how is such a speedup possible? This question of speedup is one of the most interesting ones in complexity theory.

No matter what complexity measure and amount of speedup one considers, there are recursive functions $ f $ such that an arbitrary algorithm for $ f $ can be sped up by that amount. Suppose $ f = f _ {i} $ and consider the algorithm determined by $ f _ {i} $ (for instance, the Turing machine $ TM _ {i} $) to be a particularly efficient one. This means that one regards the amount of resource defined by $ \phi _ {i} $ to be particularly small, in view of the complexity of the function $ f $. Suppose, further, that $ h ( x) $ is a very rapidly increasing function, for instance,

$$ h ( x) = \ 2 ^ {2 ^ {2 ^ {2 ^ {x} } } } . $$

Then there is another algorithm for $ f $ using so much less resource as $ h $ indicates. In other words, one has also $ f = f _ {j} $ and

$$ \phi _ {i} ( x) \geq \ 2 ^ {2 ^ {2 ^ {2 ^ {\phi _ {j} ( x) } } } } . $$

However, this does not necessarily hold for all $ x $, but only almost-everywhere. Of course, the same procedure can be repeated for $ f _ {j} $. This gives rise to another speedup, but now $ f _ {j} $ is the algorithm using "much" resource. The speedup can be repeated again for the resulting function, etc. ad infinitum.

Thus, the speedup theorem implies that some functions $ f $ have no best algorithms: For every person's favourite algorithm $ f _ {i} $, a speedup is possible. However, this is only true for some functions $ f $ that might be considered "unnatural".

Now machine-oriented complexity theory is considered. Consider a Turing machine $ TM $ that halts with all inputs $ w $. The time-complexity function associated with $ TM $ is defined by

$$ t _ {TM} ( n) = \ \max \{ m : TM \textrm{ halts  after }  m \textrm{ steps  with  an  input }  w \textrm{ such  that }  | w | = n \} . $$

Thus $ t _ {TM} ( n) $ maps the set of non-negative integers into itself. One says that $ TM $ is polynomially bounded if and only if there is a polynomial $ P ( n) $ such that $ t _ {TM} ( n) \leq P ( n) $ holds for all $ n $. Denote by $ {\mathcal P} $ the family of languages acceptable by polynomially-bounded Turing machines. (So far only deterministic Turing machines have been considered — non-deterministic ones are introduced later.)
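Under the illustrative convention that a machine reports the number of steps it used on a word, the time-complexity function can be computed for small $ n $ by maximizing over all inputs of that length; this is a sketch of the definition, not an efficient procedure, and `toy_machine` is a hypothetical example.

```python
from itertools import product

def t(machine, n, alphabet='01'):
    """Time-complexity function of a halting machine: the largest
    step count over all inputs of length n.  `machine` maps a word
    to the number of steps of its (halting) computation."""
    return max(machine(''.join(word))
               for word in product(alphabet, repeat=n))

def toy_machine(word):
    """Hypothetical machine: scan the word once, and once more if it
    begins with '1'; hence it is bounded by the polynomial P(n) = 2n."""
    return len(word) if word.startswith('0') else 2 * len(word)
```

Here $ t(n) = 2n $ for $ n \geq 1 $, so the toy machine is polynomially bounded and its language lies in $ {\mathcal P} $.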

Although $ {\mathcal P} $ is defined as a family of languages, it can be visualized as the collection of problems for which there exists an algorithm operating in polynomial time. One can always identify a decision problem with the membership problem for the language of "positive instances".

The family $ {\mathcal P} $ is very natural from a mathematical point of view. This is seen from the fact that it is highly invariant with respect to the underlying model of computation. For instance, Turing machines $ TM _ {1} $ with several tapes are faster than ordinary Turing machines — that is, their time-complexity function assumes smaller values. However, if the time-complexity function of such a $ TM _ {1} $ is bounded from above by a polynomial $ P _ {1} ( n) $, one can (effectively) construct an ordinary Turing machine $ TM $ with a polynomial bound $ P ( n) $ accepting the same language as $ TM _ {1} $. (In general, $ P ( n) $ assumes greater values than $ P _ {1} ( n) $ but is still a polynomial.) Similarly, every language that is polynomially bounded with respect to any reasonable model of computation belongs to the family $ {\mathcal P} $, as defined above with respect to the ordinary Turing machine.

The family $ {\mathcal P} $ is also of crucial importance because languages outside $ {\mathcal P} $ can be visualized as impossible to compute. In fact, one says that a recursive language is intractable if it does not belong to $ {\mathcal P} $.

Clearly, languages outside $ {\mathcal P} $ are intractable from the practical point of view. The same can be said about such languages in $ {\mathcal P} $ for which the polynomial bound is a huge one. However, it would not be natural to draw the borderline between tractability and intractability somewhere inside $ {\mathcal P} $. Such a definition would also be time-varying: Drastic developments in computers could change it. On the other hand, the family $ {\mathcal P} $ provides a very natural characterization of tractability.

Now non-deterministic Turing machines are considered: When scanning a specific symbol in a specific state, the machine may have several possibilities for its behaviour. In all other respects, a non-deterministic machine is defined like a deterministic one. A word $ w $ is accepted if and only if it gives rise to an accepting computation, regardless of whether it also gives rise to computations leading to failure. Thus, in connection with non-deterministic machines in general, all roads to failure are disregarded if there is one possible road to success.

The time required by a non-deterministic Turing machine $ TM $ to accept a word $ w \in L ( TM) $ is defined to be the number of steps in the shortest computation of $ TM $ accepting $ w $. The time-complexity function of $ TM $ is now defined by

$$ t _ {TM} ( n) = \ \max \{ 1 , m : TM \textrm{ accepts in time } m \textrm{ some } w \textrm{ where } | w | = n \} . $$

Thus, only accepting computations enter the definition of $ t _ {TM} ( n) $. If no words of length $ n $ are accepted, $ t _ {TM} ( n) = 1 $.

Further formal details of the definition of a non-deterministic Turing machine are omitted.
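The acceptance time just defined can be made concrete by a small simulation. The following Python sketch is a toy model with illustrative names, not a full Turing-machine formalism: a non-deterministic computation is viewed as a tree of configurations, and a breadth-first search returns exactly the length of a shortest accepting computation, or `None` if every branch fails.

```python
from collections import deque

def shortest_accepting_time(start, successors, accepting):
    """Return the number of steps in a shortest accepting computation,
    or None if no computation accepts.  `successors` maps a configuration
    to the list of configurations reachable in one non-deterministic step."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        config, steps = queue.popleft()
        if accepting(config):
            return steps          # BFS guarantees this is the shortest
        for nxt in successors(config):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, steps + 1))
    return None

# Toy machine: non-deterministically add 1 or 2; accept on reaching 5.
print(shortest_accepting_time(0,
                              lambda c: [c + 1, c + 2] if c < 5 else [],
                              lambda c: c == 5))  # 3  (e.g. 0, 2, 4, 5)
```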

Having defined the time-complexity function, the notion of a polynomially-bounded non-deterministic Turing machine is defined exactly as before. Denote by $ {\mathcal N} {\mathcal P} $ the family of languages acceptable by non-deterministic polynomially-bounded Turing machines.

Problems in $ {\mathcal P} $ are tractable, whereas problems in $ {\mathcal N} {\mathcal P} $ have the property that it is tractable to check whether or not a good guess for the solution of the problem is correct. A non-deterministic Turing machine may be visualized as a device that checks whether or not a guess is correct: It makes a guess (or several guesses) at some stage during its computation, and the final outcome is acceptance only in case the guess was (or the guesses were) correct. Thus, a time bound for a non-deterministic Turing machine is, in fact, a time bound for checking whether or not a guess for the solution is correct.
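This guess-and-check view can be illustrated with the Hamiltonian circuit problem, which belongs to $ {\mathcal N} {\mathcal P} $: a guessed circuit serves as the certificate, and checking it takes polynomial time. The sketch below uses illustrative names; the graph is given by its vertex count and edge list.

```python
def check_hamiltonian_circuit(n, edges, circuit):
    """Verify in polynomial time that `circuit` visits each of the n
    vertices exactly once and returns to its start along graph edges."""
    if sorted(circuit) != list(range(n)):
        return False                      # not a permutation of all vertices
    edge_set = {frozenset(e) for e in edges}
    return all(frozenset((circuit[i], circuit[(i + 1) % n])) in edge_set
               for i in range(n))

# A 4-cycle 0-1-2-3-0: the correct circuit checks out, a wrong guess fails.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(check_hamiltonian_circuit(4, edges, [0, 1, 2, 3]))  # True
print(check_hamiltonian_circuit(4, edges, [0, 2, 1, 3]))  # False
```

Finding such a circuit is hard; verifying a guessed one is easy. This asymmetry is exactly what the time bound for a non-deterministic machine captures.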

Clearly, $ {\mathcal P} $ is contained in $ {\mathcal N} {\mathcal P} $. However, it is not known whether or not the containment is proper. The problem of whether or not $ {\mathcal P} $ equals $ {\mathcal N} {\mathcal P} $ (the $ {\mathcal P} = {\mathcal N} {\mathcal P} $? problem) can justly be called the most celebrated open problem in the theory of computation. The significance of this question is due to the fact that many practically important problems are known to be in $ {\mathcal N} {\mathcal P} $, whereas it is not known whether or not they are in $ {\mathcal P} $. In fact, all known deterministic algorithms for these problems require exponential time. Thus, a proof of $ {\mathcal P} = {\mathcal N} {\mathcal P} $ would make all of these problems tractable.

Non-deterministic Turing machines and guessing are not, as such, intended to model computation. Non-determinism is merely an auxiliary notion and, as will be seen, a very useful one. Indeed, if one wants to settle the question of whether or not $ {\mathcal P} = {\mathcal N} {\mathcal P} $, the following definitions and results show that it suffices to consider one particular language (it might be a favourite one!) and determine whether or not it is in $ {\mathcal P} $. There is a great variety of such so-called $ {\mathcal N} {\mathcal P} $-complete languages (see below), arising from practically all areas of mathematics.

A language $ L _ {1} \subseteq \Sigma _ {1} ^ {*} $ is polynomially reducible to a language $ L _ {2} \subseteq \Sigma _ {2} ^ {*} $, in symbols $ L _ {1} \leq _ {p} L _ {2} $, if and only if there is a deterministic polynomially-bounded Turing machine $ TM $ translating words $ w _ {1} $ over $ \Sigma _ {1} $ into words $ w _ {2} $ over $ \Sigma _ {2} $ in such a way that $ w _ {1} $ being in $ L _ {1} $ is equivalent to its image $ w _ {2} $ being in $ L _ {2} $.

Observe that the Turing machine $ TM $ introduced in the previous definition must halt with all inputs — this is a consequence of $ TM $ being deterministic and polynomially bounded.

Clearly, whenever $ L _ {1} \leq _ {p} L _ {2} $ and $ L _ {2} $ is in $ {\mathcal P} $, then $ L _ {1} $ is also in $ {\mathcal P} $.
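A classical concrete instance of such a reduction (sketched below with illustrative names): a graph has an independent set of size $ k $ if and only if its complement graph has a clique of size $ k $, and complementing the edge set clearly takes only polynomial time. Thus INDEPENDENT-SET $ \leq _ {p} $ CLIQUE.

```python
from itertools import combinations

def reduce_independent_set_to_clique(n, edges):
    """Map an INDEPENDENT-SET instance (n vertices, edge list) to a
    CLIQUE instance by complementing the edge set; the size bound k
    is carried over unchanged.  Runs in polynomial time."""
    edge_set = {frozenset(e) for e in edges}
    return [(u, v) for u, v in combinations(range(n), 2)
            if frozenset((u, v)) not in edge_set]

# Path 0-1-2: {0, 2} is independent, and (0, 2) becomes an edge
# (a 2-clique) of the complement graph.
print(reduce_independent_set_to_clique(3, [(0, 1), (1, 2)]))  # [(0, 2)]
```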

A language $ L $ is called $ {\mathcal N} {\mathcal P} $-hard if $ L ^ \prime \leq _ {p} L $ holds for every language $ L ^ \prime $ in $ {\mathcal N} {\mathcal P} $. $ L $ is termed $ {\mathcal N} {\mathcal P} $-complete if it is $ {\mathcal N} {\mathcal P} $-hard and, in addition, belongs to $ {\mathcal N} {\mathcal P} $.

$ {\mathcal N} {\mathcal P} $-complete languages can be regarded as representing the hardest problems in $ {\mathcal N} {\mathcal P} $. Moreover, to settle the question of whether or not $ {\mathcal P} = {\mathcal N} {\mathcal P} $, it suffices to decide whether or not an arbitrary $ {\mathcal N} {\mathcal P} $-complete language $ L $ is in $ {\mathcal P} $. Indeed, consider such a language $ L $. If $ L $ is not in $ {\mathcal P} $, then clearly $ {\mathcal P} \neq {\mathcal N} {\mathcal P} $. If $ L $ is in $ {\mathcal P} $, then the definition of $ {\mathcal N} {\mathcal P} $-completeness shows that every language in $ {\mathcal N} {\mathcal P} $ is also in $ {\mathcal P} $. But this means that $ {\mathcal P} = {\mathcal N} {\mathcal P} $.

The study of $ {\mathcal N} {\mathcal P} $-completeness was initiated in [a2] and [a4], whereas [a3] is a pioneering paper on complexity theory as a whole. One is referred to [a6] for a basic list of $ {\mathcal N} {\mathcal P} $-complete problems. Typical ones are: the traveling salesperson problem, the satisfiability problem for propositional calculus, the Hamiltonian circuit problem for graphs, the knapsack problem, the scheduling problem, and the membership problem for TOL languages (cf. $ L $-systems).
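For the satisfiability problem from this list, the exhaustive decision procedure below (a Python sketch with illustrative names and a simple clause encoding) makes the exponential behaviour of the known deterministic algorithms explicit: it examines all $ 2 ^ {n} $ truth assignments, whereas checking any single guessed assignment takes only polynomial time.

```python
from itertools import product

# A formula is a list of clauses; a clause is a list of non-zero
# integers, where literal i means variable i and -i its negation.

def satisfiable(n_vars, clauses):
    """Brute-force satisfiability test: try all 2**n_vars assignments."""
    for assignment in product([False, True], repeat=n_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True    # this assignment satisfies every clause
    return False

print(satisfiable(2, [[1, 2], [-1], [-2, 1]]))  # False: no assignment works
print(satisfiable(2, [[1, 2], [-1, -2]]))       # True: e.g. x1 true, x2 false
```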

Even though a problem is decidable, it might still be intractable in the sense that any algorithm for solving it must use unmanageable amounts of computational resources. Intractability is defined formally by the condition of being outside $ {\mathcal P} $.

It has also been observed that, for very many important problems, all algorithms presently known operate in exponential time, whereas it has not been shown that these problems actually are intractable. All $ {\mathcal N} {\mathcal P} $- complete problems belong to this class. Indeed, establishing lower bounds is, in general, a difficult task in complexity theory. The complexity of a particular algorithm can usually be estimated, but it is harder to say something general about all algorithms for a particular problem — for instance, to give a lower bound for their complexity. Results such as the speedup theorem show that this might even be impossible in some cases. According to the theory of abstract complexity measures, one cannot expect to prove that any specific problem is intrinsically difficult in all measures. However, there are also a number of problems known to be intractable.

Space bounds are defined for Turing machines analogously to time bounds. The notations $ {\mathcal P} $-SPACE and $ {\mathcal N} {\mathcal P} $-SPACE stand for the families of languages acceptable by deterministic and non-deterministic Turing machines in polynomial space, respectively. The following relations hold:

$$ {\mathcal P} \subseteq {\mathcal N} {\mathcal P} \subseteq \ {\mathcal P} \textrm{-SPACE} = \ {\mathcal N} {\mathcal P} \textrm{-SPACE} . $$

It is a celebrated open problem whether or not the inclusions are strict.

References

[a1] M. Blum, "A machine-independent theory of the complexity of recursive functions" J. ACM , 14 (1967) pp. 322–336
[a2] S.A. Cook, "The complexity of theorem-proving procedures" , Proc. 3-rd Annual ACM Symp. Theory of Computing , ACM (1971) pp. 151–158
[a3] J. Hartmanis, R.E. Stearns, "On the computational complexity of algorithms" Trans. Amer. Math. Soc. , 117 (1965) pp. 285–306
[a4] R.M. Karp, "Reducibility among combinatorial problems" R.E. Miller (ed.) J.W. Thatcher (ed.) , Complexity of Computer Computations. Proc. Symp. IBM , Plenum (1972) pp. 85–103
[a5] A.V. Aho, J.E. Hopcroft, J.D. Ullman, "The design and analysis of computer algorithms" , Addison-Wesley (1974)
[a6] M.R. Garey, D.S. Johnson, "Computers and intractability: a guide to the theory of NP-completeness" , Freeman (1979)
[a7] M. Machtey, P. Young, "An introduction to the general theory of algorithms" , North-Holland (1978)
[a8] A. Salomaa, "Computation and automata" , Cambridge Univ. Press (1986)
How to Cite This Entry:
Complexity theory. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Complexity_theory&oldid=46434
This article was adapted from an original article by G. Rozenberg, A. Salomaa (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article