
Proof theory



A branch of mathematical logic which deals with the concept of a proof in mathematics and with the applications of this concept in various branches of science and technology.

In the wide sense of the term, a proof is a way of justifying the validity of a given assertion. How convincing a proof is depends mainly on the means employed to substantiate the truth of the assertion. Thus, in the exact sciences certain conditions have been established under which an experimental fact may be considered proven (reproducibility of the experiment, a clear description of the experimental technique, the accuracy attained, the equipment employed, etc.). In mathematics, where the axiomatic method of study is characteristic, the means of proof were established fairly precisely at an early stage of its development: a proof consists of a succession of derivations of assertions from previously derived assertions, and the means of such derivation can be analyzed exactly.

The origin of proof theory can be traced to Antiquity (the deductive method of reasoning in elementary geometry, Aristotelian syllogistics, etc.), but the modern stage in its development begins at the turn of the 19th and 20th centuries with the studies of G. Frege, B. Russell, A.N. Whitehead, E. Zermelo, and, in particular, D. Hilbert. At that time, G. Cantor's research in the theory of sets gave rise to antinomies (cf. Antinomy) which cast doubt on the validity of even the simplest considerations concerning arbitrary sets. L.E.J. Brouwer severely criticized certain classical methods of proving the existence of objects in mathematics, and proposed a radical reconstruction of mathematics in the spirit of intuitionism. Problems concerning the foundations of mathematics thus became especially timely. Hilbert proposed to single out the part of practical mathematics known as finitary mathematics, which is unobjectionable both as regards the appearance of antinomies and with respect to intuitionistic criticism. Finitary mathematics deals solely with constructive objects (cf. Constructive object), such as the natural numbers, and with methods of reasoning that agree with the abstraction of potential realizability but do not rely on the abstraction of actual infinity. In particular, the use of the law of the excluded middle is restricted. In finitary mathematics no antinomies have been observed, and there is no reason to expect them to appear. Philosophically, the methods of reasoning of finitary mathematics reflect the constructive processes of real activity much more faithfully than those of general set-theoretic mathematics. It was Hilbert's idea to use the solid ground of finitary mathematics as the foundation of all main branches of classical mathematics. He accordingly put forward his formalization method, which is one of the basic methods in proof theory.

In general outline, the formalization method may be described as follows. One formulates a logico-mathematical language (an object language) $ L $, in terms of which the assertions of a given mathematical theory $ T $ are written as formulas. One then singles out a certain class $ A $ of formulas of $ L $, known as the axioms of the theory, and describes the derivation rules (cf. Derivation rule) with the aid of which one may pass from given formulas to other formulas. The general term postulates applies to both axioms and derivation rules. The formal theory $ T ^ {*} $ (a calculus, in another terminology) is defined by the description of its postulates. Formulas which are obtainable from the axioms of the formal theory by its derivation rules are said to be deducible, or provable, in that theory. The deduction process itself may be represented in the form of a derivation tree (cf. Derivation tree). The calculus $ T ^ {*} $ is of special interest as regards the content of the mathematical theory $ T $ if the axioms of $ T ^ {*} $ are records of true statements of $ T $ and if the derivation rules lead from true statements to true statements. In such a case $ T ^ {*} $ may be regarded as a precise counterpart of a fragment of $ T $, while the concept of a derivation in $ T ^ {*} $ may be considered a more precise form of the informal idea of a proof in $ T $, at least within the framework formalized by the calculus $ T ^ {*} $. Thus, in constructing the calculus $ T ^ {*} $ one must specify, in the first place, which postulates are to be considered suitable from the point of view of the theory $ T $. This does not mean, however, that a developed semantics of $ T $ must be available at this stage; one may instead rely on established practice, include the most useful or theoretically interesting facts among the postulates, etc. The exact nature of the description of derivations in the calculus $ T ^ {*} $ makes it possible to study them by mathematical methods, and thus to obtain assertions about the content and the properties of the theory $ T $.
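As a minimal illustration of these notions (one possible choice of logical postulates among many), consider the fragment of a Hilbert-style propositional calculus whose axioms are all formulas of the two forms

$$ A \supset ( B \supset A ) ,\ \ ( A \supset ( B \supset C )) \supset (( A \supset B ) \supset ( A \supset C )) , $$

and whose single derivation rule is modus ponens: from $ A $ and $ A \supset B $ one may pass to $ B $. A derivation of the formula $ A \supset A $ in this calculus is the sequence of five formulas

$$ \begin{array}{l} 1) \ ( A \supset (( A \supset A ) \supset A )) \supset (( A \supset ( A \supset A )) \supset ( A \supset A )) , \\ 2) \ A \supset (( A \supset A ) \supset A ) , \\ 3) \ ( A \supset ( A \supset A )) \supset ( A \supset A ) , \\ 4) \ A \supset ( A \supset A ) , \\ 5) \ A \supset A , \end{array} $$

in which 1), 2) and 4) are axioms, while 3) and 5) are obtained by modus ponens from 2), 1) and from 4), 3), respectively.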

Proof theory comprises standard methods for formalizing the content of mathematical theories. The axioms and derivation rules of a calculus are usually divided into logical and applied ones. Logical postulates serve to produce statements which are valid by virtue of their form alone, irrespective of the theory being formalized. Such postulates define the logic of the formal theory and are formulated in the form of a propositional calculus or a predicate calculus (see also Logical calculus; Mathematical logic; Intuitionism; Constructive logic; Strict implication calculus). Applied postulates serve to describe truths related to the special features of a given mathematical theory. Examples are: the axiom of choice in axiomatic set theory; the scheme of induction in elementary arithmetic (cf. Mathematical induction); and bar induction in intuitionistic analysis.
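For instance, the scheme of induction of elementary arithmetic just mentioned is, in one common formulation, the collection of all formulas of the form

$$ ( A ( 0 ) \ \& \ \forall x ( A ( x ) \supset A ( x ^ \prime ) ) ) \supset \forall x \ A ( x ) , $$

where $ A ( x ) $ is an arbitrary formula of the arithmetical language and $ x ^ \prime $ denotes the successor of $ x $.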

The Hilbert program for the foundations of mathematics may be described as follows. It may be hoped that any mathematical theory $ T $, no matter how involved or how abstract (e.g. the essential parts of set theory), can be formalized as a calculus $ T ^ {*} $, and that the formulation of the calculus itself requires only finitary mathematics. Further, by analyzing the derivations of $ T ^ {*} $ by purely finitary means, one tries to establish the consistency of $ T ^ {*} $ and, consequently, the absence of antinomies in $ T $, at least in that part of it which is reflected in the postulates of $ T ^ {*} $. For the usual methods of formalization it then follows at once that certain very simple statements (real statements, in Hilbert's terminology) are deducible in $ T ^ {*} $ only if they are true in the finitary sense. It was initially hoped that practically all of classical mathematics could be formalized in this way, after which its consistency could be demonstrated by finitary means. That this program cannot be completed was shown in 1931 by K. Gödel, who proved that, under certain natural assumptions, it is not possible to demonstrate the consistency of $ T ^ {*} $ even by all the powerful tools formalized in $ T ^ {*} $ itself. Nevertheless, the study of various formal calculi remains a very important method in the foundations of mathematics. In the first place, it is of interest to construct calculi which reproduce important branches of modern mathematics, with consistency as the guideline, even if it is not yet possible to prove the consistency of such calculi in a manner acceptable to all mathematicians. An example of a calculus of this kind is the Zermelo–Fraenkel system of set theory, in which practically all results of modern set-theoretic mathematics can be deduced. Proofs of the non-deducibility in this theory of several fundamental hypotheses, obtained on the assumption that the theory is consistent (cf. Axiomatic set theory; Forcing method), indicate that these hypotheses are independent of the set-theoretic principles accepted in mathematics. This in turn may be regarded as a confirmation of the view that the existing concepts are insufficient to prove or disprove the hypotheses in question. It is in this sense that the independence of Cantor's continuum hypothesis was established by P. Cohen.
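Gödel's result on consistency proofs can be stated in modern notation as follows (the precise assumptions vary between presentations): if $ T ^ {*} $ is a consistent formal theory with an effectively given set of postulates and containing a modest fragment of arithmetic, then

$$ T ^ {*} \nvdash \mathop{\rm Con} ( T ^ {*} ) , $$

where $ \mathop{\rm Con} ( T ^ {*} ) $ is an arithmetical formula expressing, under a natural coding of derivations, the consistency of $ T ^ {*} $.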

Secondly, extensive study is made of the class of calculi whose consistency can be established by finitary means. Thus, Gödel in 1932 proposed a translation converting formulas deducible in the classical arithmetical calculus into formulas deducible in the intuitionistic arithmetical calculus (i.e. an interpretation of the former calculus in the latter). If the latter is considered consistent (e.g. by virtue of its natural finitary interpretation), it follows that the classical arithmetical calculus is consistent as well.
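One common form of such a translation (the details vary between presentations) assigns to every arithmetical formula $ A $ a formula $ A ^ {\circ} $ by leaving atomic formulas, the connectives $ \& $, $ \supset $, $ \neg $ and the quantifier $ \forall $ unchanged and setting

$$ ( A \lor B ) ^ {\circ} = \neg ( \neg A ^ {\circ} \ \& \ \neg B ^ {\circ} ) ,\ \ ( \exists x \ A ) ^ {\circ} = \neg \forall x \ \neg A ^ {\circ} . $$

If $ A $ is deducible in the classical arithmetical calculus, then $ A ^ {\circ} $ is deducible in the intuitionistic one; since the translation of the formula $ 0 = 1 $ is this formula itself, a contradiction in the classical calculus would thus yield a contradiction in the intuitionistic calculus.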

Finally, it may be promising to study methods more extensive than Hilbert's traditional finitism but still satisfactory from some other point of view. Thus, remaining within the framework of potential realizability, one may use so-called general inductive definitions. This makes it possible to use semi-formal theories, in which some of the derivation rules have an infinite (but constructively generated) set of premises, and to carry over to finitary mathematics many results of a semantic character. In this direction lie the results of P.S. Novikov (1943), who established the consistency of classical arithmetic using effective functionals of finite type; of C. Spector (1961), who proved the consistency of classical analysis by extending the natural intuitionistic methods of proof to include intuitionistic effective functionals of finite type; and of A.A. Markov (1971) on constructive semantics involving the use of general inductive definitions. In addition, many important problems concerning calculi may also be considered outside the context of the foundations of mathematics. These include problems of completeness and decidability of formal theories, the problem of the independence of certain statements of a given formal theory, etc. It is then unnecessary to restrict the reasoning to any definite class of methods, and proof theory may be developed as an ordinary mathematical theory, using any mathematical means of proof that the researcher finds convincing.
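A typical example of a derivation rule with an infinite, but constructively generated, set of premises (of the kind used in the semi-formal theories mentioned above) is the $ \omega $-rule of semi-formal arithmetic:

$$ \frac{A ( \overline{0} ) ,\ A ( \overline{1} ) ,\ A ( \overline{2} ) ,\dots }{\forall x \ A ( x ) } , $$

where the premises $ A ( \overline{n} ) $ are required to be given together with an effective method which produces, for every natural number $ n $, a derivation of $ A ( \overline{n} ) $.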

A rigorously defined semantics for the formulas of the language under consideration, i.e. a strict definition of the meaning of the statements expressed in that language, serves as an instrument in the study of calculi, and sometimes even as a motivation for the introduction of new calculi. Such a semantics is well known for the classical propositional calculus: the tautologies, and only the tautologies, are deducible in this calculus. In the general case, in order to prove that a certain formula $ A $ is not deducible in a calculus $ T ^ {*} $ under consideration, it is sufficient to construct a semantics for the formulas of the language of this theory such that all formulas deducible in $ T ^ {*} $ are true in this semantics, while $ A $ is false. A semantics can be classical, intuitionistic or of some other type, depending on the logical postulates with which it has to agree. Non-classical semantics are successfully employed in the study of classical calculi: for example, Cohen's forcing relation can naturally be regarded as a modification of intuitionistic semantics. In another variant of Cohen's method, many-valued semantics are employed, namely models with truth values in a complete Boolean algebra (Boolean-valued models). On the other hand, semantics of the type of Kripke models, defined by classical set-theoretic means, make it possible to clarify many properties of modal and other non-classical logics, including intuitionistic logic.
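A simple instance of the non-deducibility method described above: to see that the law of the excluded middle is not deducible in the intuitionistic propositional calculus, it suffices to take the Kripke model consisting of two worlds $ w _ {0} \leq w _ {1} $ in which a propositional variable $ p $ is forced only at $ w _ {1} $. Then $ p $ is not forced at $ w _ {0} $, and $ \neg p $ is not forced at $ w _ {0} $ either, since $ p $ is forced at the later world $ w _ {1} $; hence

$$ w _ {0} \nVdash p \lor \neg p , $$

whereas every formula deducible in the intuitionistic propositional calculus is forced at every world of every Kripke model.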

Proof theory makes extensive use of algebraic methods in the form of model theory. An algebraic system that assigns to each initial symbol of the language a suitable algebraic object provides a natural definition of a classical semantics for the language. An algebraic system is said to be a model of the formal theory $ T ^ {*} $ if all formulas deducible in $ T ^ {*} $ are true in the semantics generated by this algebraic system. Gödel showed in 1930 that any consistent calculus (with a classical logic) has a model. Somewhat later it was shown independently by A.I. Mal'tsev that if every finite fragment of a calculus has a model, then the calculus as a whole has a model (the so-called compactness theorem of first-order logic). These two theorems form the foundation of an entire branch of mathematical logic.
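A typical application of the compactness theorem, given here only in outline, is the construction of non-standard models of arithmetic. To the set of all formulas true in the standard model of arithmetic one adds, for a new individual constant $ c $, the infinite list of formulas

$$ c \neq \overline{0} ,\ c \neq \overline{1} ,\ c \neq \overline{2} ,\dots . $$

Every finite fragment of the resulting set has a model, namely the standard natural numbers with $ c $ interpreted by a sufficiently large number; hence the whole set has a model, and in any such model $ c $ is interpreted by an element different from all the standard natural numbers, so that the model is non-standard.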

The study of non-standard models of arithmetic showed that the concept of the natural numbers is not axiomatizable in the framework of a first-order theory, and that the principle of mathematical induction is independent of the other axioms of the arithmetical calculus. The relative nature of the concept of the cardinality of a set in classical mathematics was revealed by the study of countable models of formal theories whose intended interpretations involve manifestly uncountable models (the so-called Skolem paradox). Many syntactic results were first obtained from model-theoretic considerations. In terms of the constructions of model theory it is possible to give simple criteria for many concepts of interest to proof theory. Thus, according to Scott's criterion, a class $ K $ of algebraic systems of a given language is axiomatizable if and only if it is closed with respect to ultraproducts, isomorphisms and the taking of elementary subsystems.
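Such criteria give simple proofs of non-axiomatizability. For instance, the class of all finite algebraic systems of a given language is closed under isomorphisms and under the taking of elementary subsystems, but it is not closed under ultraproducts: an ultraproduct, over a non-principal ultrafilter on the natural numbers, of systems $ M _ {1} , M _ {2} ,\dots $ in which $ M _ {n} $ has exactly $ n $ elements satisfies, for every $ k $, the formula

$$ \exists x _ {1} \dots \exists x _ {k} \ \bigwedge _ {1 \leq i < j \leq k } x _ {i} \neq x _ {j} , $$

and is therefore infinite. Hence the class of all finite systems is not axiomatizable.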

A formal theory is said to be decidable if there exists an algorithm which determines, for an arbitrary formula $ A $, whether or not $ A $ is deducible in that theory. It is known that no formal theory containing a certain fragment of the theory of recursive functions (cf. Recursive function) is decidable. It follows that elementary arithmetic, the Zermelo–Fraenkel system and many other theories are undecidable as well. Proof theory has at its disposal powerful methods for interpreting theories in other theories; such interpretations may also be used to establish the undecidability of several very simple calculi in which the theory of recursive functions cannot be interpreted directly. Examples are elementary group theory, the theory of two equivalence relations, the elementary theory of a partial order, etc. On the other hand, there are examples of interesting decidable theories, such as elementary geometry, the elementary theory of the real numbers, and the theory of sets of natural numbers with the successor operation. The decidability of a theory is demonstrated by model-theoretic and by syntactic methods; syntactic methods often yield simpler decision algorithms. For instance, the decidability of the elementary theory of $ p $-adic numbers was first established by model-theoretic methods; subsequently a primitive-recursive decision algorithm for this theory was found by a certain modification of the syntactic method of quantifier elimination. Estimates of the complexity of the decision algorithms of theories are of importance. As a rule, a primitive-recursive decision algorithm is available for decidable theories, and the problem is to find more exact complexity bounds. A promising direction in such studies is the decidability of natural fragments of known formal theories. In this connection the classical predicate calculus has been studied in much detail: an effective description has been given of all decidable and undecidable classes of formulas, in terms of the position of the quantifiers in a formula and the form of the predicate symbols occurring in it. A number of decidable fragments of the arithmetical calculus and of elementary set theory have also been described.
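As a simple illustration of the method of quantifier elimination mentioned above, in the elementary theory of the real numbers the formula $ \exists x ( x ^ {2} + p x + q = 0 ) $ is equivalent to the quantifier-free formula

$$ p ^ {2} - 4 q \geq 0 , $$

and a decision algorithm is obtained by eliminating the quantifiers of a given closed formula one by one until a formula without variables remains, the truth of which can be checked directly.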

Methods for estimating the complexity of derivations have attracted the attention of researchers. This area of research comprises problems such as finding relatively short formulas that can only be derived in a complicated manner, or formulas from which a large number of results follow in a relatively simple manner. Such formulas may be regarded as expressing the "depth" of the facts of the theory. Natural measures of the complexity of a proof are studied: the length of the proof, the time needed to find a derivation, the complexity of the formulas used in the proof, etc. Here proof theory comes into contact with the methods of theoretical cybernetics.

References

[1] S.C. Kleene, "Mathematical logic" , Wiley (1967)
[2] A.A. Fraenkel, Y. Bar-Hillel, "Foundations of set theory" , North-Holland (1958)
[3] Yu.L. Ershov, I.A. Lavrov, A.D. Taimanov, M.A. Taitslin, "Elementary theories", Russian Math. Surveys, 20 : 4 (1965) pp. 35–105; Uspekhi Mat. Nauk, 20 : 4 (1965) pp. 37–108

Comments

References

[a1] G. Takeuti, "Proof theory" , North-Holland (1975)
[a2] J.-Y. Girard, "Proof theory and logical complexity" , 1 , Bibliopolis (1987)
[a3] J.L. Bell, M. Machover, "A course in mathematical logic" , North-Holland (1977)
[a4] Yu.I. Manin, "A course in mathematical logic" , Springer (1977) (Translated from Russian)
[a5] J. Barwise (ed.), Handbook of mathematical logic, North-Holland (1977) (especially the article by D.A. Martin on descriptive set theory)
This article was adapted from an original article by A.G. Dragalin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.