From Encyclopedia of Mathematics

A method of defining functions studied in the theory of algorithms and other branches of mathematical logic. This method has long been used in arithmetic to define sequences of numbers (progressions, Fibonacci numbers, etc.). Recursion plays an important role in computational mathematics (recursive methods), and in set theory transfinite recursion is often used.

For a long time the term "recursion" was used by mathematicians without being accurately defined. Its approximate intuitive sense can be described as follows: the value of a sought function $ f $ at an arbitrary point $ \overline{x} $ (a point being a tuple of values of the arguments) is determined, generally speaking, by way of the values of this same function at other points $ \overline{y} $ that in some sense "precede" $ \overline{x} $. (The very word "recur" means "return".) At certain "initial" points the values of $ f $ must of course be defined directly. Sometimes recursion is used to define several functions simultaneously; the above description is then modified accordingly. Examples of different kinds of recursion are given below. The relation " $ \overline{x} _ {1} $ precedes $ \overline{x} _ {2} $" (where $ \overline{x} _ {1} , \overline{x} _ {2} $ belong to the domain of the sought function) may have a different sense in different types of recursion ("recursive schemes"). It must, however, be well-founded (i.e. there should be no infinite sequence of points $ \overline{x} _ {n} $, $ n = 0, 1, \dots $, such that $ \overline{x} _ {n+1} $ precedes $ \overline{x} _ {n} $). Furthermore, it is implicitly understood that the relation is "sufficiently natural" (e.g. it is desirable that the relation be visible from the actual description of a recursive scheme and not merely from the process of its application). This last condition has a purely heuristic value (e.g. for singling out some special, comparatively simple type of recursion). A more accurate definition of this condition is essentially inseparable from a more accurate description of the concept of recursion itself, and for this it is essential to establish which types of formal expressions can be acknowledged as recursive definitions.

When it is a question of recursive descriptions of numerical functions (i.e. functions with natural-number arguments and natural-number values), it is usually implied that such descriptions define a way of computing the functions being determined. Here and in the sequel (except in the concluding remarks), the term "recursion" will be understood in precisely this sense. The simplest and most widely used recursive scheme is primitive recursion:

$$ f( 0, x _ {1} \dots x _ {n} ) = g( x _ {1} \dots x _ {n} ), $$

$$ f( y+ 1, x _ {1} \dots x _ {n} ) = h( y, f( y, x _ {1} \dots x _ {n} ), x _ {1} \dots x _ {n} ), $$
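As a sketch (Python is used here purely for illustration; the function names are ours, not from the article), the scheme can be written as a higher-order function that takes $ g $ and $ h $ and returns $ f $:

```python
def primitive_recursion(g, h):
    """Return f defined by the scheme
       f(0, *xs)   = g(*xs)
       f(y+1, *xs) = h(y, f(y, *xs), *xs)."""
    def f(y, *xs):
        acc = g(*xs)
        for i in range(y):          # unfold the recursion from 0 up to y
            acc = h(i, acc, *xs)
        return acc
    return f

# Addition y + x:       g(x) = x,  h(y, r, x) = r + 1
add = primitive_recursion(lambda x: x, lambda y, r, x: r + 1)

# Multiplication y * x: g(x) = 0,  h(y, r, x) = r + x
mul = primitive_recursion(lambda x: 0, lambda y, r, x: r + x)
```

Here the well-founded precedence relation is simply $ y $ precedes $ y + 1 $, so the value at $ y $ can be computed by one pass from the initial point $ 0 $.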

where the functions $ g $ and $ h $ are assumed known, $ f $ is the function to be determined, $ y $ is the variable over which the recursion is conducted, and $ x _ {1} \dots x _ {n} $ are parameters not participating in the recursion. The closest generalization of this scheme is so-called course-of-value recursion, which covers those recursive definitions in which, as in primitive recursion, only one variable participates in the recursion; the corresponding precedence relation coincides with the usual ordering of the natural numbers (sometimes, however, the term is used in a wider sense). The most typical form of course-of-value recursion is as follows:

$$ f( 0, x _ {1} \dots x _ {n} ) = g( x _ {1} \dots x _ {n} ), $$

$$ f( y+ 1, x _ {1} \dots x _ {n} ) = h( y, f( \alpha _ {1} ( y), x _ {1} \dots x _ {n} ) \dots f( \alpha _ {k} ( y), x _ {1} \dots x _ {n} ), x _ {1} \dots x _ {n} ), $$
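The Fibonacci numbers mentioned above fit this scheme: $ f( 0) = 0 $, $ f( 1) = 1 $ and $ f( y+ 1) = f( \alpha _ {1} ( y)) + f( \alpha _ {2} ( y)) $ with $ \alpha _ {1} ( y) = y $ and $ \alpha _ {2} ( y) = y - 1 $, both $ \leq y $. A minimal Python sketch (our own, for illustration) that computes the values in the order dictated by the precedence relation:

```python
def fib(y):
    """Fibonacci by course-of-value recursion: each new value uses
       earlier values f(y) and f(y - 1), i.e. alpha_1(y) = y,
       alpha_2(y) = y - 1."""
    values = [0, 1]                 # the directly defined initial points
    for n in range(1, y):
        values.append(values[n] + values[n - 1])
    return values[y]
```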

where $ \alpha _ {i} ( y) \leq y $, $ i = 1 \dots k $. Mathematical logic often involves primitive recursive functions, i.e. functions that can be obtained in a finite number of steps using substitution and primitive recursion, starting from a specific fixed supply of basic functions (e.g. $ f( x) = x+ 1 $, $ f( x, y) = y $, etc.). A sequence of functional equalities describing such a construction is called a primitive recursive description of the corresponding function. These descriptions are syntactic objects (i.e. chains of symbols) possessing a definite, effectively recognizable structure. Practically all numerical functions used in mathematics for concrete purposes prove to be primitive recursive; this largely explains the interest in this class of functions.
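Two standard members of this class (textbook examples, not taken from the article) are the predecessor and truncated subtraction, each obtained by one application of primitive recursion; sketched in Python:

```python
def pred(x):
    """Predecessor: pred(0) = 0, pred(y + 1) = y.
       Primitive recursion with g = 0 and h(y, r) = y (no parameters)."""
    r = 0
    for y in range(x):
        r = y
    return r

def monus(x, y):
    """Truncated subtraction x -. y: monus(x, 0) = x,
       monus(x, y + 1) = pred(monus(x, y)); never goes below 0."""
    r = x
    for _ in range(y):
        r = pred(r)
    return r
```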

More complex types of recursive definitions are obtained when the recursion runs simultaneously over several variables. Such definitions, as a rule, lead out of the class of primitive recursive functions, although the corresponding precedence relation may be completely natural. For example, the values $ f( u, v) $ with $ u < x $, or with $ u = x $ and $ v < y $, may participate in the definition of $ f( x, y) $, as happens in the following scheme of double recursion:

$$ f( x, y) = g( x, y) \ \textrm{ if } x = 0 \textrm{ or } y = 0, $$

$$ f( x+ 1, y+ 1) = h( x, y, f( x, \alpha ( x, y, f( x+ 1, y))), f( x+ 1, y)). $$
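The classic example of recursion over two variables, and the standard witness that such schemes lead outside the primitive recursive functions, is the Ackermann–Péter function (not named in the text above; shown here as an illustration, in Python). Its precedence relation is exactly the one just described: $ ( u, v) $ precedes $ ( x, y) $ when $ u < x $, or $ u = x $ and $ v < y $.

```python
def ackermann(x, y):
    """Ackermann-Peter function:
       A(0, y)     = y + 1
       A(x+1, 0)   = A(x, 1)
       A(x+1, y+1) = A(x, A(x+1, y)).
       The recursion is well-founded (lexicographic order on (x, y))
       yet A is not primitive recursive."""
    if x == 0:
        return y + 1
    if y == 0:
        return ackermann(x - 1, 1)
    return ackermann(x - 1, ackermann(x, y - 1))
```

Note the nesting in the last clause: the second argument of the outer call is itself a recursive value, just as $ f $ occurs inside $ \alpha $ in the double-recursion scheme above.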

This scheme cannot in general be reduced to primitive recursion. On the other hand, a large number of more involved recursive definitions have been reduced to it (see [1]).

The particular types of recursion referred to have precise mathematical definitions, as opposed to the vague "near-mathematical" idea of "recursion in general". The refinement of this idea naturally takes the form of a search for a suitable algorithmic language (i.e. a formal language for describing computable procedures) that incorporates all imaginable types of recursion without being too broad. One may rightly expect that the development of such a refinement will require further conventions, which bring something new to the intuitive understanding of recursion. In this connection it is worth noting that all the recursive schemes examined above are oriented towards generating total (everywhere-defined) functions. Thus, if in a scheme of primitive recursion the functions $ g, h $ are total, then so is $ f $. In general, recursive definitions that define partial functions look intuitively rather unnatural. However, such recursions enter the analysis for a definite reason, connected with the so-called diagonal method. Thus, let there be given an algorithmic language $ L $ in which only total functions are definable, and let the syntactic structure of $ L $ not extend beyond the limits of intuitively understood recursiveness. It is implied, of course, that the expressions of $ L $ (being descriptions of functions) are algorithmically recognizable and that there is a uniform method of computing the functions representable in $ L $ from their descriptions. If expressions of $ L $ are regarded simply as chains of symbols, then each such chain can be considered as the representation of a natural number in a suitable number system, the "code" of the given expression. Now let the function $ G( m, x) $ be defined as follows: if $ m $ is the code of a description (in $ L $) of a certain one-place function $ f _ {m} $, then $ G( m, x) = f _ {m} ( x) $; otherwise $ G( m, x) = \phi ( x) $, where $ \phi $ is some fixed function representable in $ L $. It is clear that $ G( m, x) $ is computable (cf. Computable function) and total, as is the function $ F( x) = G( x, x) + 1 $. But $ F $ is not representable in $ L $, since for any $ m $ the equality $ F( m) = f _ {m} ( m) $ is impossible. (This argument makes essential use of the totality of $ F $.) The question arises: is the given description of $ F $ recursive? If $ L $ is taken to be the language of primitive recursive descriptions, then this description turns out to be reducible to a double recursion. In general the situation is unclear, because of the vagueness of the intuitive ideas about recursion, so that a positive answer to this question amounts to an additional convention. If it is accepted (as is implicitly done in contemporary logic), then a language $ L $ describing all types of "total" recursion proves to be impossible.
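The diagonal construction can be illustrated on a finite fragment (a Python sketch with made-up "codes"; the real argument of course ranges over all expressions of $ L $):

```python
# A toy "language": codes 0..3 describe total one-place functions.
f = {
    0: lambda x: 0,          # the zero function
    1: lambda x: x + 1,      # successor
    2: lambda x: 2 * x,      # doubling
    3: lambda x: x * x,      # squaring
}

def G(m, x):
    """Universal function for the toy language; phi is taken to be
       the zero function for codes outside the table."""
    return f[m](x) if m in f else 0

def F(x):
    """Diagonal function: F(m) = f_m(m) + 1 differs from f_m at m."""
    return G(x, x) + 1

# F disagrees with every listed function on the diagonal,
# so F has no code in the toy language.
for m in f:
    assert F(m) != f[m](m)
```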

Meanwhile, if partial functions can also be expressed in $ L $ (and if their descriptions are acknowledged as recursive), then diagonalization does not necessarily lead outside $ L $, so that such a language may be suitable for an adequate refinement of recursiveness. It is true that this involves a certain re-thinking of the very concept of recursion, and the contemporary strict definition of this concept is not fully in line with the earlier intuitive ideas. In the new, refined sense, recursion is a specific syntactic operation (with a fixed interpretation) used to construct expressions in various algorithmic languages. If $ L $ is such a language, it is natural to assume that it contains other syntactic operations, on which the concrete type of recursive descriptions in $ L $ depends. Generally speaking, expressions of $ L $ need not describe only numerical functions: some of them may define functional operators and other objects. This is necessary, in particular, for defining recursion. Recursive descriptions in $ L $ are systems of functional equations of the form $ f _ {i} = T _ {i} ( f _ {1} \dots f _ {n} ) $, $ i = 1 \dots n $, where the $ f _ {i} $ are the functions being defined and the $ T _ {i} $ are expressions of $ L $, each defining a specific operator. But all operators expressible in $ L $ must be effective (since $ L $ is an algorithmic language) and hence monotone (i.e. they preserve the relation $ \phi \prec \psi $, meaning that $ \psi $ extends the definition of $ \phi $). Because of this, every system of the given type has a minimal (in the sense of $ \prec $) solution, and by definition the system is a recursive description of the functions making up this solution. Starting from the given description, the sought minimal solution can be obtained by means of the following process.
For $ i = 1 \dots n $, let $ f _ {i} ^ {0} $ be the nowhere-defined function and let $ f _ {i} ^ {k+1} = T _ {i} ( f _ {1} ^ {k} \dots f _ {n} ^ {k} ) $; by monotonicity, $ f _ {i} ^ {k} \prec f _ {i} ^ {k+1} $. It is easy to verify that the sought functions $ f _ {i} $ are obtained by "combination" of the $ f _ {i} ^ {k} $ $ ( k = 0, 1, \dots ) $; methods for the simultaneous computation of the $ f _ {i} $ are defined in the same way. This process determines a more or less natural precedence relation on arguments, which also justifies (to a satisfactory extent) the use of the term "recursion" in this connection.
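The approximation process can be sketched for a single equation $ f = T( f ) $ (Python; the factorial equation is our choice of example), a partial function being represented by the finite table of its currently defined values:

```python
def T(table):
    """One application of the operator for the equation
       f(0) = 1,  f(n) = n * f(n - 1),
       restricted to arguments below 10. The operator is monotone:
       it only extends the table it is given."""
    new = dict(table)
    new[0] = 1
    for n in range(1, 10):
        if n - 1 in table:          # f(n) becomes defined once f(n-1) is
            new[n] = n * table[n - 1]
    return new

f_k = {}                            # f^0: the nowhere-defined function
while True:
    nxt = T(f_k)                    # f^{k+1} = T(f^k), and f^k < f^{k+1}
    if nxt == f_k:                  # the chain has stabilized
        break
    f_k = nxt
# f_k is now the minimal solution: the factorial on 0..9.
```

The "combination" of the approximations $ f ^ {k} $ is here literally the union of the growing tables, which stabilizes because the domain was restricted to a finite set.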

In order that this definition cover the recursive schemes mentioned above, it is essential that the language $ L $ not be too poor. Thus, already in the case of primitive recursion, substitution and "piecewise" definition are required (i.e. definition of a function by several equalities). Moreover, these two syntactic operations, in combination with recursion as just defined, already suffice to obtain all computable functions (starting from the basic ones, cf. Computable function). Under appropriate assumptions about $ L $, one can confidently claim that the given definition of recursion includes all intuitively conceivable recursive descriptions. At the same time, the general definition has the characteristic properties of informally understood recursiveness that are acknowledged as fundamental in modern mathematics.
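Under this definition a recursive description may well have a genuinely partial minimal solution. A small illustration (our example, in Python): the equation $ f( 0) = 0 $, $ f( x) = f( x - 2) $ for $ x > 0 $, has as its minimal solution the function defined only on the even numbers.

```python
def T(table):
    """Operator for the equation f(0) = 0, f(x) = f(x - 2) for x > 0,
       restricted to arguments below 10. Monotone: only extends its input."""
    new = dict(table)
    new[0] = 0
    for x in range(1, 10):
        if x - 2 in table:
            new[x] = table[x - 2]
    return new

f = {}                       # start from the nowhere-defined function
for _ in range(12):          # enough iterations to reach the fixed point
    f = T(f)
# f is defined exactly on the even arguments 0, 2, 4, 6, 8; the odd
# arguments never acquire a value, since the chain x, x-2, ... from an
# odd x never reaches the initial point 0.
```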

The fruitfulness of the given definition of recursion consists not only in its significance in the theory of algorithms, but also in that it permits one to look from an "algorithmic" (in the generalized sense) point of view at several structures of abstract mathematics that have a definite similarity with "numerical" recursions (transfinite recursion, inductive definition, recursive hierarchies, etc.).


[1] R. Péter, "Recursive functions", Acad. Press (1967) (Translated from German)
[2] A.I. Mal'tsev, "Algorithms and recursive functions", Wolters-Noordhoff (1970) (Translated from Russian)
[3] V.A. Uspenskii, "Leçons sur les fonctions calculables", Hermann (1966) (Translated from Russian)
[4] S.C. Kleene, "Introduction to metamathematics", North-Holland (1951)
[5] Y.N. Moschovakis, "Elementary induction on abstract structures", North-Holland (1974)



[a1] H. Rogers jr., "Theory of recursive functions and effective computability" , McGraw-Hill (1967)
This article was adapted from an original article by N.V. Belyakin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.