Latest revision as of 19:10, 11 December 2020
Detailed instructions defining a computational process (which is then said to be algorithmic), which begins with an arbitrary input (out of a certain number of inputs which are possible for the given algorithm), and with instructions aimed at obtaining a result (or output) which is fully determined by the input. For instance, the rules taught in elementary schools for column-wise addition, subtraction, multiplication and division are algorithms; in these algorithms the possible results are non-negative integers written in the decimal system, while the possible inputs are ordered pairs of such numbers. It is, in general, not assumed that the result is necessarily obtained: the process of applying an algorithm to some possible input (i.e. an algorithmic process which proceeds starting from this input) may also terminate without a result being obtained (in such a case one has a no-result stop) or may not terminate at all. If the process terminates (does not terminate), then one says that the input is suitable (is not suitable) for the algorithm in question.
An important result in this area is the undecidability of the so-called halting problem. It is possible to construct an algorithm $ \alpha $ such that there is no algorithm to decide whether or not some arbitrary input which is possible for $ \alpha $ is also suitable for $ \alpha $. In particular, one can find such an algorithm $ \alpha $ for which the set of its possible inputs is the set of natural numbers.
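The impossibility of such a decision algorithm rests on the classical diagonal argument, which can be sketched in Python (the names `diagonal` and `decider_is_wrong` are illustrative, and the decider passed in is a stand-in — a real total `halts` cannot exist, which is precisely what the construction shows):

```python
def diagonal(halts):
    """Given a purported halting decider, build the program g it must misjudge."""
    def g():
        if halts(g):          # the decider claims g halts...
            while True:       # ...so g loops forever
                pass
        # the decider claims g does not halt, so g returns immediately
    return g

def decider_is_wrong(claimed_halts):
    """Whatever boolean a decider returns on its own diagonal program, it is wrong."""
    g = diagonal(lambda _: claimed_halts)
    if claimed_halts:
        return True           # g would loop forever, contradicting the claim
    g()                       # g halts at once, contradicting the claim
    return True
```

Whichever answer a candidate decider gives on its diagonal program $ g $, the construction forces the opposite behaviour, so no algorithm can decide suitability for every input.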
The notion of an algorithm is one of the central concepts in modern mathematics, especially in the area of computations. An example is given below. Finding the numerical solution of equations of a given type is tantamount to constructing an algorithm which converts an arbitrary equation of this type and an arbitrary positive rational number $ \epsilon $ into a number (or a set of numbers) which differs from the root(s) of this equation by less than $ \epsilon $. However, the term "computational process" , as used in the context of an algorithm, must not be understood to mean merely numerical calculations: even in elementary algebra computations with letters are used. Standard arithmetic calculations involve some symbols which are not numbers (parentheses, equality signs, symbols for arithmetic operators). It is accordingly expedient to consider algorithms involving arbitrary symbols and objects composed from symbols. The simplest example of such an object is a linear sequence of symbols forming a word. It is also possible to consider "non-linear" objects — algebraic matrices, derivation trees of some formal language and general graphs. Intuitively, the least requirement on inputs and results of algorithms is that they must be constructive objects (cf. Constructive object). Thus, the concept of an algorithm is very general. One may speak of an algorithm for the translation from one language into another, of an algorithm of an air traffic controller (processing information on the motions of aircraft according to fixed rules), and other examples of algorithmically described control processes. Consequently, the concept of an algorithm is one of the central ideas in cybernetics and computer science.
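For instance, solving an equation $ f (x) = 0 $ to a prescribed accuracy $ \epsilon $ can be done by the bisection method; the following Python sketch (function and parameter names are illustrative) assumes $ f $ is continuous and changes sign on the given interval:

```python
def bisect_root(f, lo, hi, eps):
    """Return a point within eps of a root of f on [lo, hi].

    Assumes f is continuous and f(lo), f(hi) have opposite signs.
    """
    if f(lo) * f(hi) > 0:
        raise ValueError("f must change sign on [lo, hi]")
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid      # the root lies in the left half
        else:
            lo = mid      # the root lies in the right half
    return (lo + hi) / 2
```

For example, `bisect_root(lambda x: x*x - 2, 0.0, 2.0, 1e-6)` produces an approximation of $ \sqrt 2 $ differing from it by less than $ 10 ^ {-6} $.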
Example of an algorithm.
Let the possible inputs and the possible results be all possible words over the alphabet $ \{ a, b \} $. The transition from a word $ X $ to a word $ Y $ is "permissible" in the following two cases (where $ P $ denotes an arbitrary word): 1) $ X $ is of the form $ aP $, and $ Y $ is of the form $ Pb $; or 2) $ X $ is of the form $ baP $, while $ Y $ is of the form $ Paba $. The instructions are formulated as follows: "designating some word as the input, execute the permitted transitions until a word of the form aaP is obtained; then halt and let word P be the result" . These instructions constitute an algorithm, which will be denoted by $ \mathfrak A $. Take the word $ babaa $ as the input. After one transition one obtains $ baaaba $; after the second, one obtains $ aabaaba $. Here the algorithm halts with $ baaba $ as result. Consider another input $ baaba $. One obtains, in succession, $ abaaba, baabab, abababa, bababab, babababa , . . . $. It can be shown that this process will never end (i.e. none of the generated words will ever begin with $ aa $). Now take $ abaab $ as input. One obtains $ baabb, abbaba, bbabab $; no further permissible transitions are possible, and, at the same time, the termination condition is not satisfied. One has a no-result stop. Thus, input $ babaa $ is suitable for $ \mathfrak A $, and inputs $ baaba $ and $ abaab $ are not.
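The algorithm $ \mathfrak A $ can be transcribed directly into Python; the step bound is an external safeguard added here, since, as noted above, some inputs lead to a process that never terminates:

```python
def run_A(word, max_steps=1000):
    """Apply the transitions aP -> Pb and baP -> Paba until the word
    begins with 'aa', then return the remainder P as the result.

    Returns ('result', P), ('no-result', word) on a no-result stop,
    or ('running', word) if the step budget is exhausted.
    """
    for _ in range(max_steps):
        if word.startswith('aa'):          # termination condition: word = aaP
            return ('result', word[2:])
        if word.startswith('ba'):          # rule 2: baP -> Paba
            word = word[2:] + 'aba'
        elif word.startswith('a'):         # rule 1: aP -> Pb
            word = word[1:] + 'b'
        else:                              # no permissible transition
            return ('no-result', word)
    return ('running', word)
```

Running it reproduces the three cases above: `run_A('babaa')` yields `('result', 'baaba')`, `run_A('abaab')` ends in a no-result stop, and `run_A('baaba')` exhausts any step budget.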
The significance of algorithms.
In science, algorithms are encountered everywhere: a "general" solution to a problem requires an algorithm. The ability of a person to add numbers is an example of this: not the fact that he can find, sooner or later, the sum of any two numbers, but rather the fact that he possesses a method for the unique addition of any two numbers given in written form, i.e. an addition algorithm such as the well-known column-wise addition method. "Generally" solving a problem is formalized as solving an algorithmic problem. An algorithmic problem consists of separate problem instances, and a single algorithm needs to be found for the solution of all of them. If there is no such algorithm, one says that the algorithmic problem is unsolvable. Thus, the problem of numerically solving equations of a given type and the problem of automatic translation are algorithmic problems. Their problem instances are, respectively, to find the numerical solution of individual equations of the given type, and the translation of individual phrases. Other examples are the verification of algebraic equalities in algebra; the algorithmic problem of recognition of the deducibility of propositions from given axioms in mathematical logic. (In mathematical logic the concept of an algorithm is also important since it serves as the basis of the key concept of a calculus, which is a generalization and a more precise form of the intuitive concepts of "proof" and "demonstration" .) The establishment of the unsolvability of a given algorithmic problem (e.g. the problem of establishing the truth or demonstrability of sentences in some logical-mathematical language) is important, because it shows that, in principle, specific problem instances of that problem can only be solved by methods which are specific to each individual problem instance.
The scientific importance of the concept of an algorithm has been recognized for a long time. Since the earliest times, people have looked for constructive methods to solve mathematical problems. Such efforts usually benefitted from the new availability of suitable symbolic notations. This process has made a significant contribution to scientific knowledge, especially so after it had become clear that some problems are inherently unsolvable (squaring the circle, etc.). After it had been recognized that certain problems cannot be solved by direct calculations, the theoretical concept of a set was born in the 19th century. It is only after a period of rapid development of this concept (which did not involve problems of constructive methods in their modern sense), that it was possible in the mid-20th century to return to constructive problems, but this was done on a different level, enriched by the concept of an algorithm which had crystallized in the meantime. This concept forms the base of the constructive trend in mathematics.
The word "algorithm" comes from the word "algoritmi" , the latter being the Latin transliteration of the name of the 9th century Arab mathematician Al-Khwarizmi. An "algorithm" in the Europe of the Middle Ages was "the decimal-positional system and the art of calculating with it" , since it is through the Latin translation (12th century) of the treatise of Al-Khwarizmi that the positional system was introduced in Europe.
The structure of an algorithmic process.
An algorithmic process is a process of consecutive conversion of constructive objects by discrete "steps" , each step consisting of the replacement of one constructive object by another one. Thus, when applying the algorithm $ \mathfrak A $ to the word $ baaba $, there follow, in succession, the words $ baaba, abaaba, baabab $, etc. If one applies, say, the algorithm of column-wise subtraction to the pair of numbers $ \langle 307, 49 \rangle $, then one obtains, successively, the following constructive objects:
$$ \begin{array}{rrrr} 307 &3 \dot{0} 7 &\dot{3} \dot{0} 7 &\dot{3} \dot{0} 7 \\ \underline{- 49 } &\underline{- 49 } &\underline{- 49 } &\underline{- 49 } \\ {} & 8 &58 &258 \\ \end{array} $$
It will be noted that, in a series of successive constructive objects, each successive constructive object is fully determined (in the framework of the given algorithm) by the constructive object immediately preceding it. In the more rigorous approach, it is also assumed that the transition from any constructive object to the one immediately following it is sufficiently "elementary" — in the sense that the one-step transformation of the preceding into the immediately succeeding object is local (it is not the entire constructive object which is transformed, but only its algorithm-limited part; moreover, this transformation itself is not determined by the complete preceding object, but only by a limited part of it).
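The successive intermediate objects of the column-wise subtraction above can be generated mechanically. The following Python sketch (the function name is illustrative; it assumes the minuend has at least as many digits as the subtrahend and that the difference is non-negative) records the growing answer after each digit step:

```python
def columnwise_subtract(p, q):
    """Subtract q from p digit by digit, right to left, with borrows.

    Returns the difference and the list of intermediate answer rows,
    one per digit step, as in column-wise subtraction on paper.
    """
    digits_p = str(p)
    digits_q = str(q).rjust(len(digits_p), '0')   # pad q with leading zeros
    answer = []
    states = []
    borrow = 0
    for dp, dq in zip(reversed(digits_p), reversed(digits_q)):
        d = int(dp) - borrow - int(dq)
        borrow = 1 if d < 0 else 0                # a dot over the next digit
        answer.append(str(d % 10))
        states.append(''.join(reversed(answer)))
    return int(''.join(reversed(answer))), states
```

Applied to $ \langle 307, 49 \rangle $ it produces the intermediate rows $ 8, 58, 258 $, matching the bottom line of the display above.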
Accordingly, one has not only a set of possible inputs and a set of possible results, but also a set of possible intermediate results, representing the working medium in which the algorithmic process evolves. For $ \mathfrak A $ all three of these sets coincide, but this is not true of the algorithm of column-wise subtraction: the possible inputs are pairs of numbers, the possible results are numbers (all in the decimal system), while the possible intermediate results are "three-storey" records of the type
$$ \begin{array}{r} p \\ \underline{- q } \\ r \\ \end{array} $$
where $ p $ is the written record of a number in the decimal system with dots over some of the digits, $ q $ is the written record of a number in the decimal system, and $ r $ is a similar record or the empty word. As a rule, several (not independent) parameters which are characteristic of an algorithm may be distinguished: 1) the set of possible inputs; 2) the set of possible results; 3) the set of possible intermediate results; 4) the starting rule; 5) the direct transition rule; 6) the termination rule; and 7) the rule for the retrieval of the result.
Formalizing the concept of an algorithm.
The concept of an algorithm in its general form is a fundamental mathematical concept which cannot be defined in terms of simpler concepts. Strictly speaking, formalizations of the concept of an algorithm actually restrict it somewhat. In each such formalization an exact definition is given of some class within which each one of the parameters may vary. The various formalizations differ from each other by the selection of such classes. Since the parameters unambiguously define some algorithm, a choice of parameter domains determines some class of algorithms. However, such a choice may claim to be a formalization of the intuitive concept of algorithm, only if one can be sure that for any "intuitive" algorithm there exists an equivalent algorithm in the class of algorithms defined by the choice under consideration. This requirement is formulated for each formalization as a fundamental hypothesis, which, in the present state of knowledge, cannot be mathematically demonstrated.
The first formalizations of this type are due to E.L. Post and to A.M. Turing, whose constructions anticipated in many respects the ideas on which modern computers are based. There are also versions formulated by A.A. Markov (cf. Normal algorithm) and by A.N. Kolmogorov, who based the approach on constructive topological complexes of a certain type, which makes it possible to express more precisely the property of "being local" of a transformation. In each of these formalizations the fundamental hypothesis is in satisfactory agreement with practice. Another argument in favour of this hypothesis is the fact that, as can be demonstrated, all the formal versions (of effective computability) which have been proposed are equivalent.
As an example, consider the formalization proposed by Turing. In its original form, this was a description of some theoretical computer consisting of 1) an infinite tape, divided into consecutive cells, each cell capable of containing some symbol out of the "tape alphabet" of the machine; and 2) a "finite control," which at each moment of time is in some "state" (out of a given finite list of states). The finite control can move along the tape, one cell at a time, and change the contents of the cells it passes. A program which monitors the motion of the finite control constitutes the algorithm for the computations conducted on such a machine (a "Turing algorithm" ). For a more exact and more detailed description see Turing machine. Here a modernized exposition of Turing's construction is given. To define a Turing algorithm, one specifies a) pairwise disjoint alphabets $ B, D, C $, with a distinguished letter $ \lambda $ in $ D $ and distinguished letters $ \alpha $ and $ \omega $ in $ C $; and b) a set of pairs of the form $ \langle p \xi , \eta Tq \rangle $ where $ p, q \in C $, $ \xi , \eta \in B \cup D $, while $ T $ is one of the three symbols $ -, 0, + $; in this set (which is called a program) there are no two pairs with the same first coordinate. The possible inputs and the possible outputs are words over $ B $, while the possible intermediate results are words over $ B \cup D \cup C $ which contain not more than one letter of $ C $. The starting rule is that the initial word $ P $ is converted to the word $ \lambda \alpha P \lambda $. The termination rule is that an intermediate result which contains $ \omega $ is final. The rule for the retrieval of the output is that the output is the sequence of letters in the final intermediate result which follows $ \omega $ and precedes the first letter not belonging to $ B $. The transformation rule which converts $ A $ into $ A ^ \prime $ is as follows. 
The letter $ \lambda $ is written to the left and to the right of $ A $; in the word thus formed, the part of the form $ \epsilon p \xi $, where $ p \in C $, $ \epsilon , \xi \in B \cup D $, is replaced by a word $ Q $ in accordance with the following rule: a pair with first term $ p \xi $ is sought in the program; let the second term of this pair be $ \eta Tq $; if $ T = - $, then $ Q = q \epsilon \eta $; if $ T = 0 $, then $ Q = \epsilon q \eta $; if $ T = + $, then $ Q = \epsilon \eta q $. The word produced by this exchange is $ A ^ \prime $.
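The mechanics of such a machine can be illustrated with a minimal simulator in Python (a modern sketch, not Turing's original notation; the dictionary-based program format, the state names, and the bit-flipping example program are invented here for illustration):

```python
def run_tm(program, tape, state='q0', blank='_', max_steps=10_000):
    """Simulate a one-tape Turing machine.

    program maps (state, symbol) -> (new_symbol, move, new_state),
    with move in {-1, 0, +1} mirroring Turing's -, 0, +.
    The machine halts when no pair matches the current configuration.
    """
    cells = dict(enumerate(tape))          # tape as a sparse dict of cells
    pos = 0
    for _ in range(max_steps):
        sym = cells.get(pos, blank)
        if (state, sym) not in program:    # no pair applies: halt
            break
        new_sym, move, state = program[(state, sym)]
        cells[pos] = new_sym
        pos += move
    lo, hi = min(cells), max(cells)
    return ''.join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# A two-rule program that flips every bit, moving right until the tape ends:
flip = {('q0', '0'): ('1', 1, 'q0'),
        ('q0', '1'): ('0', 1, 'q0')}
```

For example, `run_tm(flip, '0110')` yields `'1001'`: the control sweeps right, rewriting one cell per step, which is exactly the "local" character of the transformation discussed above.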
Comments
The fundamental hypothesis mentioned above is usually called the Church–Turing thesis or Turing's hypothesis. This thesis states that what can be effectively computed according to one's intuition about what constitutes an effective process, can also be computed by a Turing machine, i.e. a formally defined notion of algorithm. By its very nature of identifying an intuitive informal notion with a formal precise definition, the Church–Turing thesis cannot be formally proved.
Many formalizations have turned out to be equivalent, and no formalization has turned out to be strictly more powerful. Apart from the formalizations mentioned above, the following references are the most standard ones in the West: Church's formalization of numbers and computable functions by lambda terms, Kleene's general recursive schemes, and the register machine model proposed by J.C. Shepherdson and H.E. Sturgis, which was extended into a more realistic computer model by S.A. Cook and R.A. Reckhow. Markov's and Kolmogorov's versions are less well known. For completeness the original references to Turing and Post are also given.
References
[a1a] A.M. Turing, "On computable numbers, with an application to the Entscheidungsproblem" Proc. London Math. Soc. Ser. 2, 42 (1936) pp. 230–265
[a1b] A.M. Turing, "Corrections to "On computable numbers, with an application to the Entscheidungsproblem" " Proc. London Math. Soc. Ser. 2, 43 (1937) pp. 544–546
[a2] A. Church, "An unsolvable problem of elementary number theory" Amer. J. Math., 58 (1936) pp. 345–363
[a3] S.C. Kleene, "General recursive functions of natural numbers" Math. Ann., 112 (1936) pp. 727–742
[a4] E.L. Post, "Formal reductions of the general combinatorial decision problem" Amer. J. Math., 65 (1943) pp. 197–268
[a5] J.C. Shepherdson, H.E. Sturgis, "Computability of recursive functions" J. Assoc. Comp. Mach., 10 (1963) pp. 217–255
[a6] S.A. Cook, R.A. Reckhow, "Time-bounded random access machines" J. Computer and System Sciences, 7 (1973) pp. 354–375
Algorithm. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Algorithm&oldid=45073