# Algebra of logic

The branch of mathematical logic that deals with propositions from the aspect of their logical meanings (true or false) and with logical operations on them.

The algebra of logic originated in the middle of the 19th century with the studies of G. Boole [1], [2], and was subsequently developed by C.S. Peirce, P.S. Poretskii, B. Russell, D. Hilbert, and others. The development of the algebra of logic was an attempt to solve traditional logical problems by algebraic methods. With the birth of set theory (in the 1870s), propositions and logical operations on them became the principal subject of the algebra of logic. Propositions are understood to mean statements about which it is meaningful to ask whether they are true or false. For instance, the proposition "a whale is an animal" is true, while the proposition "all angles are right angles" is false. The connectives "and", "or", "if … then", "is equivalent to", the particle "not", etc., which are commonly used in ordinary language, make it possible to construct new, more "complicated" propositions from those already established. Thus, given the propositions "$x > 2$" and "$x \leq 3$", the connective "and" yields the proposition "$x > 2$ and $x \leq 3$", while the connective "or" yields the proposition "$x > 2$ or $x \leq 3$", and so on.

The truth or falsehood of the propositions obtained in this way depends on the truth or falsehood of the initial propositions and on a corresponding treatment of the connectives as operations on propositions. Truth is often symbolized by the digit "1" and falsehood by the digit "0". The connectives "and", "or", "if … then", and "is equivalent to" are denoted, respectively, by the symbols $\&$ (conjunction), $\lor$ (disjunction), $\rightarrow$ (implication), and $\sim$ (equivalence); negation is represented by a bar $\overline{\phantom{x}}$ above the symbol. In addition to individual propositions, examples of which were given above, one also uses variable propositions, i.e. variables whose values may be any individual propositions given in advance. The concept of a formula, which formalizes the concept of a compound proposition, is then introduced inductively. Let $A, B, C, \dots$ denote individual propositions, and let $x, y, z, \dots$ denote variable propositions. Each of these letters is a formula. If the symbol $*$ denotes any one of the connectives listed above, and $\mathfrak A$ and $\mathfrak B$ are formulas, then $(\mathfrak A * \mathfrak B)$ and $\overline{\mathfrak A}$ are formulas (e.g. $((x \& y) \rightarrow \overline{z})$). The connectives and the particle "not" are regarded as operations on quantities which assume the values 0 and 1, and the results of these operations are also the digits 0 and 1: the conjunction $x \& y$ equals 1 if and only if both $x$ and $y$ equal 1; the disjunction $x \lor y$ equals 0 if and only if both $x$ and $y$ equal 0; the implication $x \rightarrow y$ equals 0 if and only if $x = 1$ and $y = 0$; the equivalence $x \sim y$ equals 1 if and only if $x$ and $y$ have the same value; and the negation $\overline{x}$ equals 1 if and only if $x = 0$.
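As a small illustration, the five operations can be encoded directly on the digits 0 and 1; the function names below are illustrative, not part of the article:

```python
# Each connective as an operation on the digits 0 and 1 (names are illustrative).
def NOT(x): return 1 - x                  # negation: 1 iff x = 0
def AND(x, y): return x & y               # conjunction: 1 iff x = y = 1
def OR(x, y): return x | y                # disjunction: 0 iff x = y = 0
def IMP(x, y): return OR(NOT(x), y)       # implication: 0 iff x = 1 and y = 0
def EQV(x, y): return 1 if x == y else 0  # equivalence: 1 iff x and y agree
```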
If the values of the propositions in each formula are given, the value 0 or 1 can be assigned to the formula. This means that any formula can be considered as a way of stating, or realizing, a function of the algebra of logic, i.e. a function which is defined on tuples of zeros and ones and whose values are also 0 or 1. Two formulas $\mathfrak A$ and $\mathfrak B$ are said to be equal ($\mathfrak A = \mathfrak B$) if they realize equal functions. The subject matter of the algebra of logic is the study of functions of the algebra of logic and of operations on these functions. In what follows, the class of functions of the algebra of logic will be extended to the class of functions whose arguments, as well as the functions themselves, take values in a finite fixed set $E$; the stock of operations on the functions will also be extended. The algebra of logic is sometimes understood to mean this latter concept. However, from the aspect of practical applications, the most important case is that of a set $E$ of cardinality two, and this case will therefore be discussed in greatest detail. The results presented here are also closely connected with a second approach to the study of propositions: the so-called propositional calculus.
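Under this reading, a formula with $n$ variables realizes a function from $\{0,1\}^n$ to $\{0,1\}$, and two formulas are equal exactly when their value tables coincide. A minimal sketch (the formulas $A$ and $B$ are chosen for illustration, not taken from the article):

```python
from itertools import product

def truth_table(f, n):
    """The tuple of values of f on all 0/1 tuples of length n, i.e. the
    function of the algebra of logic that a formula realizes."""
    return tuple(f(*v) for v in product((0, 1), repeat=n))

# ((x & y) -> ~z) and ~(x & y & z) realize the same function, hence are equal:
A = lambda x, y, z: (1 - (x & y)) | (1 - z)
B = lambda x, y, z: 1 - (x & y & z)
assert truth_table(A, 3) == truth_table(B, 3)
```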

In specifying the functions of the algebra of logic one often uses tables which contain all the combinations of the values of the variables and the values of the functions on these combinations. Thus, the synoptic table of the functions $\overline{x}\;$, $x \& y$, $x \lor y$, $x \rightarrow y$, and $x \sim y$ is given below:

| $x$ | $y$ | $\overline{x}$ | $x \& y$ | $x \lor y$ | $x \rightarrow y$ | $x \sim y$ |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| 0 | 0 | 1 | 0 | 0 | 1 | 1 |
| 0 | 1 | 1 | 0 | 1 | 1 | 0 |
| 1 | 0 | 0 | 0 | 1 | 0 | 0 |
| 1 | 1 | 0 | 1 | 1 | 1 | 1 |

Tables for arbitrary functions of the algebra of logic are constructed in a similar manner. This is the so-called tabular way of specifying functions of the algebra of logic; the tables themselves are sometimes called truth tables. The following equalities play an important part in the transformation of formulas into equivalent formulas:

$$\tag{1} x \& y = y \& x, \qquad x \lor y = y \lor x$$

(the laws of commutativity);

$$\tag{2} (x \& y) \& z = x \& (y \& z), \qquad (x \lor y) \lor z = x \lor (y \lor z)$$

(the laws of associativity);

$$\tag{3} x \& (x \lor y) = x, \qquad x \lor (x \& y) = x$$

(the laws of absorption);

$$\tag{4} x \& (y \lor z) = (x \& y) \lor (x \& z), \qquad x \lor (y \& z) = (x \lor y) \& (x \lor z)$$

(the laws of distributivity);

$$\tag{5} x \& \overline{x} = 0$$

(the law of contradiction);

$$\tag{6} x \lor \overline{x} = 1$$

(the law of the excluded middle);

$$\tag{7} x \rightarrow y = \overline{x} \lor y, \qquad x \sim y = (x \& y) \lor (\overline{x} \& \overline{y}).$$

Using these equalities, it is possible to obtain new equalities without resorting to tables. The new equalities are obtained by so-called identity transformations which, generally speaking, alter an expression but not the function realized by it. For instance, the absorption laws yield the law of idempotency $x \lor x = x$. The above equalities often simplify the notation of formulas by eliminating some of the parentheses. Thus, relations (1) and (2) make it possible to replace the formulas $((\dots(\mathfrak A_1 \& \mathfrak A_2) \& \dots) \& \mathfrak A_s)$ and $((\dots(\mathfrak B_1 \lor \mathfrak B_2) \lor \dots) \lor \mathfrak B_s)$ by the more compact notation $\mathfrak A_1 \& \mathfrak A_2 \& \dots \& \mathfrak A_s$ and $\mathfrak B_1 \lor \mathfrak B_2 \lor \dots \lor \mathfrak B_s$.
The former expression is known as the conjunction of the factors $\mathfrak A_1, \dots, \mathfrak A_s$, while the latter is called the disjunction of the terms $\mathfrak B_1, \dots, \mathfrak B_s$. It is also seen from the equalities (5), (6) and (7) that, if the constants 0 and 1, the implication and the equivalence are regarded as functions, they can be expressed in terms of conjunction, disjunction and negation. Thus, any function of the algebra of logic can be realized by a formula written using only the symbols $\&$, $\lor$ and $\overline{\phantom{x}}$.
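The laws above, and the expressibility of implication through $\&$, $\lor$ and negation, can be checked mechanically by running through all 0/1 assignments. A short sketch (the encoding is mine, not the article's):

```python
from itertools import product

NOT = lambda x: 1 - x

# One representative of each of the laws (1)-(6); the last entry checks that
# NOT(x) | y agrees with the table of implication, as in (7).
laws = [
    lambda x, y, z: (x & y) == (y & x),                               # (1) commutativity
    lambda x, y, z: ((x & y) & z) == (x & (y & z)),                   # (2) associativity
    lambda x, y, z: (x & (x | y)) == x,                               # (3) absorption
    lambda x, y, z: (x & (y | z)) == ((x & y) | (x & z)),             # (4) distributivity
    lambda x, y, z: (x & NOT(x)) == 0,                                # (5) contradiction
    lambda x, y, z: (x | NOT(x)) == 1,                                # (6) excluded middle
    lambda x, y, z: (NOT(x) | y) == (0 if x == 1 and y == 0 else 1),  # (7) implication
]
assert all(law(x, y, z) for law in laws for x, y, z in product((0, 1), repeat=3))
```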

The set of all formulas involving variable propositions, some of the symbols $\&$, $\lor$, $\rightarrow$, $\sim$, $\overline{\phantom{x}}$, and the constants 0 and 1 is called the language over these symbols and constants. Equalities (1)–(7) show that, for any formula in the language over $\&$, $\lor$, $\rightarrow$, $\sim$, $\overline{\phantom{x}}$, 0, 1, it is possible to find an equal formula in the language over $\&$, $\lor$, $\overline{\phantom{x}}$, 0, 1; for example,

$$(x \rightarrow y) \sim z = ((\overline{x} \lor y) \& z) \lor (\overline{(\overline{x} \lor y)} \& \overline{z}).$$

In the latter language a special role is played by the class of formulas which can be written in the form $\mathfrak A_1 \lor \dots \lor \mathfrak A_s$, 0 or 1, where $s \geq 1$ and each $\mathfrak A_i$ is either a variable proposition, its negation, or a conjunction of such, no $\mathfrak A_i$ contains equal factors or factors of the form $x$ and $\overline{x}$ together, and all the $\mathfrak A_i$ are pairwise distinct. Here the parentheses are omitted, since the conjunction operation is taken to bind more strongly than disjunction, i.e. when evaluating for given values of the variables, the values of $\mathfrak A_1, \dots, \mathfrak A_s$ are computed first. Such expressions are called disjunctive normal forms. Every formula $u$ in the language over $\&$, $\lor$, $\rightarrow$, $\sim$, $\overline{\phantom{x}}$, 0, 1 which realizes a non-zero function of the algebra of logic can be converted, with the aid of the equalities (1)–(7), into an equal disjunctive normal form containing all the variables of $u$ and any number of other variables, in which each $\mathfrak A_i$ contains the same variables. Such a disjunctive normal form is called a perfect disjunctive normal form of the formula $u$; the perfect disjunctive normal form of 0 is the formula 0 itself.
The possibility of reduction to a perfect disjunctive normal form underlies an algorithm for deciding the equality or inequality of two given formulas: the formulas $u_1$ and $u_2$ are reduced to perfect disjunctive normal forms containing all the variables occurring in $u_1$ and $u_2$, and the two expressions are compared; if they are identical, $u_1 = u_2$, and otherwise $u_1 \neq u_2$. An important role in the algebra of logic and its applications is played by contracted disjunctive normal forms, i.e. disjunctive normal forms satisfying the following conditions: 1) the form contains no pair of elements $\mathfrak A_i$ and $\mathfrak A_j$ such that every factor of $\mathfrak A_i$ is contained in $\mathfrak A_j$; 2) for any two elements $\mathfrak A_i$ and $\mathfrak A_j$, one of which contains some variable as a factor while the other contains the negation of this variable (provided the pair contains no second variable for which the same is true), the same disjunctive normal form contains an element $\mathfrak A_k$ equal to the conjunction of the remaining factors of these two elements. Any disjunctive normal form can be reduced, with the aid of the equalities (1)–(7), to an equal contracted disjunctive normal form. Thus, the contracted disjunctive normal form of the formula $((x \sim (y \rightarrow z)) \rightarrow (x \& z))$ is $\overline{x} \& \overline{y} \lor z \lor x \& y$. Two formulas $u_1$ and $u_2$ are equal if and only if their contracted disjunctive normal forms are identical. In addition to the disjunctive normal form, use is also made of the conjunctive normal form, i.e. of expressions obtained from a disjunctive normal form by interchanging the symbols $\lor$ and $\&$, and the constants 0 and 1.
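The perfect disjunctive normal form can be read off directly from the truth table: one conjunction per assignment on which the function equals 1. A sketch (the helper name `perfect_dnf` and the `~`/`&`/`|` string notation are my own):

```python
from itertools import product

def perfect_dnf(f, names):
    """Build the perfect DNF of a non-zero Boolean function f of the
    variables in `names`: one full conjunction per satisfying assignment."""
    terms = []
    for values in product((0, 1), repeat=len(names)):
        if f(*values):
            factors = [v if b else "~" + v for v, b in zip(names, values)]
            terms.append(" & ".join(factors))
    return " | ".join(terms)

f = lambda x, y: (1 - x) | y            # the implication x -> y
print(perfect_dnf(f, ["x", "y"]))       # ~x & ~y | ~x & y | x & y
```

Comparing two such strings, built over the same variable list, is exactly the equality algorithm described above.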
For instance, the disjunctive normal form $x \& y \lor \overline{x} \& z$ yields the conjunctive normal form $(x \lor y) \& (\overline{x} \lor z)$. An operation (or function) $f$ is said to be the dual of an operation $\psi$ if the table defining $f$ is obtained from the table defining $\psi$ by replacing 1 by 0 and 0 by 1 throughout (including the values of the functions). Thus, conjunction and disjunction are mutually dual, negation is dual to itself, the constants 0 and 1 are mutually dual, etc. A transformation of a formula which replaces the symbol of every operation in the expression by the symbol of its dual operation, and interchanges 0 and 1, is called a duality transformation. If the equality $\mathfrak A = \mathfrak B$ is true, and if $\mathfrak A^*$ is the dual of $\mathfrak A$ while $\mathfrak B^*$ is the dual of $\mathfrak B$, then $\mathfrak A^* = \mathfrak B^*$ is also true; this equality is called the dual of the original one, and the statement itself is the so-called duality principle. The pairs of laws (1), (2) and (3) are examples of dual equalities; equality (5) is dual to equality (6); and each conjunctive normal form is the dual of some disjunctive normal form. A perfect conjunctive normal form and a contracted conjunctive normal form are defined as conjunctive normal forms whose dual expressions are a perfect disjunctive normal form and a contracted disjunctive normal form, respectively. Perfect and contracted disjunctive and conjunctive normal forms are used for surveying all the hypotheses and all the consequences of a given formula. A hypothesis of a formula $\mathfrak A$ is understood to be a formula $\mathfrak B$ such that $(\mathfrak B \rightarrow \mathfrak A) = 1$; a consequence of a formula $\mathfrak A$ is a formula $\mathfrak B$ such that $(\mathfrak A \rightarrow \mathfrak B) = 1$.
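Duality can be checked by table: the dual of $f$ is obtained by negating all arguments and the value. A sketch (helper names are mine):

```python
from itertools import product

def dual(f, n):
    """The dual f*: interchange 0 and 1 in the arguments and in the value."""
    return lambda *args: 1 - f(*(1 - a for a in args))

def table(f, n):
    return tuple(f(*v) for v in product((0, 1), repeat=n))

AND = lambda x, y: x & y
OR = lambda x, y: x | y
NOT = lambda x: 1 - x
assert table(dual(AND, 2), 2) == table(OR, 2)   # conjunction and disjunction are mutually dual
assert table(dual(OR, 2), 2) == table(AND, 2)
assert table(dual(NOT, 1), 1) == table(NOT, 1)  # negation is dual to itself
```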
A hypothesis of a formula $\mathfrak A$ is said to be simple if it is a conjunction of variable propositions or their negations and if, after any one of its factors is discarded, it is no longer a hypothesis of $\mathfrak A$. Similarly, a consequence of $\mathfrak A$ is called simple if it is a disjunction of variable propositions or their negations and if, after any one of its terms is discarded, it is no longer a consequence of $\mathfrak A$. The survey of hypotheses and consequences rests on an algorithm which produces from a given formula all of its simple hypotheses and consequences and, with the aid of the laws (2)–(7), all its remaining hypotheses and consequences. The algorithm is based on the following facts. If $\mathfrak A = \mathfrak B$, then $\mathfrak A$ and $\mathfrak B$ have identical hypotheses and identical consequences. An element of a disjunctive normal form is a hypothesis of that disjunctive normal form, and a factor of a conjunctive normal form is a consequence of that conjunctive normal form. If $\mathfrak A$ is a hypothesis of $\mathfrak B$, then $\mathfrak A \& \mathfrak C$ is also a hypothesis of $\mathfrak B$; if $\mathfrak A$ is a consequence of $\mathfrak B$, then $\mathfrak A \lor \mathfrak C$ is also a consequence of $\mathfrak B$. If $\mathfrak A$ and $\mathfrak C$ are hypotheses of $\mathfrak B$, then so is $\mathfrak A \lor \mathfrak C$; if $\mathfrak A$ and $\mathfrak C$ are consequences of $\mathfrak B$, then so is $\mathfrak A \& \mathfrak C$. A perfect disjunctive normal form has no hypotheses (not containing letters absent from that disjunctive normal form) other than disjunctions of some of its elements, or formulas equal to such disjunctions. A perfect conjunctive normal form has no consequences other than conjunctions of some of its factors, or formulas equal to such conjunctions.
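The defining conditions on hypotheses and consequences amount to checking that an implication is identically 1; a small sketch (the helper name is mine):

```python
from itertools import product

def is_hypothesis(B, A, n):
    """B is a hypothesis of A iff the formula B -> A realizes the constant 1
    (equivalently, A is then a consequence of B)."""
    return all(((1 - B(*v)) | A(*v)) == 1 for v in product((0, 1), repeat=n))

A = lambda x, y: x | y
B = lambda x, y: x & y
assert is_hypothesis(B, A, 2)        # x & y is a hypothesis of x | y
assert not is_hypothesis(A, B, 2)    # but x | y is not a hypothesis of x & y
```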
A contracted disjunctive normal form is the disjunction of all its simple hypotheses; a contracted conjunctive normal form is the conjunction of all its simple consequences. The contracted disjunctive normal form has important applications. The first to be mentioned is the minimization of functions of the algebra of logic, which forms part of the problem of synthesis of control (switching) systems. The minimization of a function of the algebra of logic consists in constructing a disjunctive normal form which realizes this function and has the smallest total number of factors in its elements, i.e. minimal complexity. Such disjunctive normal forms are called minimal. Every minimal disjunctive normal form of a given function of the algebra of logic which is not a constant is obtained from the contracted disjunctive normal form of this function by discarding certain elements. For some functions the contracted disjunctive normal form coincides with the minimal disjunctive normal form. This occurs, for example, for monotone functions, i.e. functions realizable by formulas over $\&$, $\lor$, 0, 1.
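A brute-force sketch of the statement above, using the contracted disjunctive normal form of the example $((x \sim (y \rightarrow z)) \rightarrow (x \& z))$; the dict-based encoding of elements is my own, and the search simply tries every subset of the contracted form:

```python
from itertools import product, combinations

# An element of a DNF is encoded as a dict {variable: required value};
# e.g. {"x": 0, "y": 0} stands for the conjunction ~x & ~y.
def covers(elem, point):
    return all(point[v] == b for v, b in elem.items())

def dnf_value(elems, point):
    return int(any(covers(e, point) for e in elems))

# Contracted DNF of ((x ~ (y -> z)) -> (x & z)):  ~x & ~y  |  z  |  x & y
contracted = [{"x": 0, "y": 0}, {"z": 1}, {"x": 1, "y": 1}]
points = [dict(zip("xyz", p)) for p in product((0, 1), repeat=3)]
target = [dnf_value(contracted, p) for p in points]

# Every minimal DNF arises by discarding elements of the contracted DNF,
# so search all sub-DNFs realizing the same function for the cheapest one.
candidates = [sub for r in range(1, len(contracted) + 1)
              for sub in combinations(contracted, r)
              if [dnf_value(list(sub), p) for p in points] == target]
best = min(candidates, key=lambda sub: sum(len(e) for e in sub))
assert len(best) == 3   # here no element can be dropped: contracted = minimal
```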

In the language over $\&$, $\lor$, $\rightarrow$, $\sim$, $\overline{\phantom{x}}$, 0, 1, $+$, where the sign $+$ is interpreted as addition modulo 2, the following relations are valid:

$$\tag{8} x \lor y = ((x \& y) + x) + y,$$

$$\tag{9} x \rightarrow y = ((x \& y) + x) + 1, \qquad x \sim y = (x + y) + 1,$$

$$\tag{10} x + y = (x \& \overline{y}) \lor (\overline{x} \& y), \qquad 1 = x \lor \overline{x}.$$

These formulas, together with $\overline{x} = x + 1$ and $0 = 1 + 1$, make it possible to translate formulas from the language over $\&$, $\lor$, $\rightarrow$, $\sim$, $\overline{\phantom{x}}$, 0, 1 into equal formulas in the language over $\&$, $+$, 1, and vice versa. Identity transformations in the latter language are carried out with the aid of the equalities established for the conjunction and the following additional equalities:

$$\tag{11} x + y = y + x,$$

$$\tag{12} (x + y) + z = x + (y + z),$$

$$\tag{13} x \& (y + z) = x \& y + x \& z,$$

$$\tag{14} x \& x = x, \qquad x + (y + y) = x, \qquad x \& 1 = x.$$

Here it is assumed, as before, that conjunction binds more strongly than the sign $+$. These equalities suffice to deduce, by identity transformations, any valid equality in the language over $\&$, $+$, 1, just as in the case of the language over $\&$, $\lor$, $\rightarrow$, $\sim$, $\overline{\phantom{x}}$, 0, 1. A formula in this language is called a reduced polynomial if it has the form $\mathfrak A_1 + \dots + \mathfrak A_s$, where each $\mathfrak A_i$ is 1, a variable, or a conjunction of distinct variables without negations, $\mathfrak A_i \neq \mathfrak A_j$ for $i \neq j$, and $s \geq 1$, or if it is the formula $1 + 1$. Thus, the expression $x \& y \& z + x \& y + 1$ is a reduced polynomial. Every formula of the algebra of logic can be converted to a reduced polynomial by identity transformations, and the equality $\mathfrak A = \mathfrak B$ is valid if and only if the reduced polynomial of $\mathfrak A$ is identical with the reduced polynomial of $\mathfrak B$.
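The reduced polynomial of a function can be computed coefficient by coefficient: the coefficient of a monomial is the mod-2 sum of the function's values over all assignments lying below the monomial's variable set. A sketch of this Möbius-transform computation (helper names and string notation are mine):

```python
from itertools import product

def reduced_polynomial(f, names):
    """Reduced (Zhegalkin) polynomial of f over &, +, 1: the coefficient of
    a monomial is the XOR of f over all assignments below its variable set."""
    n = len(names)
    monomials = []
    for mask in product((0, 1), repeat=n):
        c = 0
        for v in product((0, 1), repeat=n):
            if all(vi <= mi for vi, mi in zip(v, mask)):
                c ^= f(*v)
        if c:
            monomials.append(" & ".join(x for x, m in zip(names, mask) if m) or "1")
    return " + ".join(monomials) if monomials else "0"

# x | y = y + x + x & y, in agreement with relation (8):
assert reduced_polynomial(lambda x, y: x | y, ["x", "y"]) == "y + x + x & y"
# x ~ y = 1 + y + x, in agreement with relation (9):
assert reduced_polynomial(lambda x, y: 1 - (x ^ y), ["x", "y"]) == "1 + y + x"
```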
In addition to the languages mentioned above, there are also other languages equivalent to them. Two languages are called equivalent if each formula of one language can be converted, by certain translation rules, into an equal formula of the other, and vice versa. It suffices to base such a language on any system of operations (and constants) by means of which every function of the algebra of logic can be represented. Such systems are said to be functionally complete. Examples of complete systems are $\{\overline{x \lor y}\}$, $\{x \lor y, \overline{x}\}$, $\{x + y, 1, x \& y\}$, etc. There exists an algorithm for establishing the completeness or incompleteness of an arbitrary finite system of functions of the algebra of logic. It is based on the following fact: a system of functions of the algebra of logic is complete if and only if it contains functions $f_1(x, y, \dots, v)$ and $f_2(x, y, \dots, v)$ such that $f_1(0, 0, \dots, 0) = 1$ and $f_2(1, 1, \dots, 1) = 0$, as well as functions $f_3$, $f_4$ and $f_5$, where $f_3$ is not self-dual ($f_3 \neq f_3^*$), $f_4$ is not monotone, and the reduced polynomial of $f_5$ contains an element $\mathfrak A$ with more than one factor, i.e. $f_5$ is not linear. There are also languages based on systems of operations which are not functionally complete, and the number of such languages is infinite. Among them there are infinitely many pairwise incomparable languages, in the sense that no translation from one to the other by identity transformations is possible. However, for any language based on some operations of the algebra of logic there exists a finite system of equalities in this language such that every valid equality can be deduced, by identity transformations, from the equalities of this system. Such a system is called a deductively complete system of equalities of the language.
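The completeness criterion lends itself to a direct check: for each of the five properties (preserving 0, preserving 1, self-duality, monotonicity, linearity), test whether the system contains a function violating it. A sketch with illustrative helper names:

```python
from itertools import product

def preserves0(f, n): return f(*(0,) * n) == 0
def preserves1(f, n): return f(*(1,) * n) == 1

def self_dual(f, n):
    return all(f(*v) == 1 - f(*(1 - x for x in v)) for v in product((0, 1), repeat=n))

def monotone(f, n):
    pts = list(product((0, 1), repeat=n))
    return all(f(*u) <= f(*v) for u in pts for v in pts
               if all(a <= b for a, b in zip(u, v)))

def linear(f, n):
    # f is linear iff every reduced-polynomial coefficient of a monomial
    # with at least two factors vanishes
    def coeff(mask):
        c = 0
        for v in product((0, 1), repeat=n):
            if all(vi <= mi for vi, mi in zip(v, mask)):
                c ^= f(*v)
        return c
    return all(coeff(m) == 0 for m in product((0, 1), repeat=n) if sum(m) >= 2)

def complete(system):
    """system: list of (function, arity) pairs. Complete iff for each of the
    five closed classes some member of the system lies outside it."""
    return all(any(not cls(f, n) for f, n in system)
               for cls in (preserves0, preserves1, self_dual, monotone, linear))

nand = lambda x, y: 1 - (x & y)
assert complete([(nand, 2)])                       # the Sheffer stroke alone is complete
assert not complete([(lambda x, y: x & y, 2),
                     (lambda x, y: x | y, 2)])     # {&, or} is incomplete (all monotone)
```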
Thus, the equalities (1)–(6) constitute a complete system of equalities of the language over $\&$, $\lor$, $\overline{\phantom{x}}$, 0, 1.

When one of the languages discussed above is considered together with some complete system of equalities in it, the tabular definition of the basic operations and the requirement that the values of the variables be propositions are sometimes discarded. Instead, various interpretations of the language are permitted: some set of objects, serving as the values of the variables, together with a system of operations on this set which satisfy the equalities of a complete system of equalities of the language. In this way the language over $\&$, $\lor$, $\overline{\phantom{x}}$, 0, 1 is transformed into the language of Boolean algebras; the language over $\&$, $+$, 1 into the language of Boolean rings (with a unit element); the language over $\&$, $\lor$ into the language of distributive lattices, etc.
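For instance, the subsets of a fixed set, with union as $\lor$, intersection as $\&$ and complementation as negation, satisfy the laws (1)–(6), giving one Boolean-algebra interpretation; a quick exhaustive check (the set-up is my own):

```python
# Subsets of U with union, intersection and complement as the operations:
# the empty set plays the role of 0 and U itself the role of 1.
from itertools import combinations

U = frozenset({1, 2, 3})
subsets = [frozenset(c) for r in range(4) for c in combinations(U, r)]
comp = lambda a: U - a

for a in subsets:
    for b in subsets:
        assert a & (a | b) == a                  # absorption, law (3)
        assert a | (b & comp(b)) == a            # b & ~b plays the role of 0, law (5)
assert all((a | comp(a)) == U for a in subsets)  # excluded middle, law (6)
```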

Historically, the development of the algebra of logic has been stimulated mainly by the problems to which it has been applied. An important application is the theory of electric circuits. In certain cases circuits cannot be described in terms of the ordinary two-valued algebra of logic, and a many-valued logic must be considered (cf. Many-valued logic).

#### References

[1] G. Boole, "The mathematical analysis of logic: being an essay towards a calculus of deductive reasoning", Macmillan (1847). MR2015605 MR0028250 Zbl 1200.03003 Zbl 1020.03001

[2] G. Boole, "An investigation of the laws of thought, on which are founded the mathematical theories of logic and probabilities", Dover, reprint (1951). MR0085180 Zbl 1205.03003

[3] P.S. Poretskii, "On methods of solving logical equalities and on an inverse method of mathematical logic", in Collection of protocols of sessions of the Fiz.-Mat. section of the Society of Natural Scientists at Kazan Univ., 2, Kazan' (1884), pp. 161–330. (In Russian)

[4] D. Hilbert, W. Ackermann, "Grundzüge der theoretischen Logik", Dover, reprint (1946). MR0015342 Zbl 0239.02001 Zbl 0158.00602 Zbl 0088.00901 Zbl 0034.29002 Zbl 0018.19301 Zbl 64.0026.05 Zbl 54.0055.01

[5] P.S. Novikov, "Elements of mathematical logic", Oliver & Boyd and Acad. Press (1964). (Translated from Russian) MR0164868 Zbl 0113.00301

[6] S.V. Yablonskii, G.P. Gavrilov, V.B. Kudryavtsev, "Functions of the algebra of logic and Post classes", Moscow (1964). (In Russian)