# Algebraic equation

An equation of the type $f _ {n} = 0$, where $f _ {n}$ is a polynomial of degree $n$ in one or more variables ( $n \geq 0$). An algebraic equation in one variable is an equation of the form

$$\tag{1 } a _ {0} x ^ {n} + a _ {1} x ^ {n - 1 } + \dots + a _ {n} = 0 .$$

Here $n$ is a non-negative integer, $a _ {0}, \dots, a _ {n}$ are given numbers, the so-called coefficients of the equation, while $x$ is an unknown which has to be found. It is assumed that the coefficients of the algebraic equation (1) are not all equal to zero. If $a _ {0} \neq 0$, then $n$ is called the degree of the equation.

The values of the unknown $x$ which satisfy equation (1), i.e. the values which, if substituted for $x$, will convert this equation into an identity, are known as the roots of the equation (1), or as the roots of the polynomial

$$\tag{2 } f _ {n} (x) = a _ {0} x ^ {n} + a _ {1} x ^ {n - 1 } + \dots + a _ {n} .$$

The roots of a polynomial are related to its coefficients by Viète's formulas (cf. Viète theorem). To solve an equation means to find all its roots contained in the range of values of the unknown(s) under consideration.
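As a numerical illustration of Viète's formulas (not part of the original article, and assuming NumPy is available), one can compute the roots of a monic cubic and check that their sum and product match the coefficients:

```python
import numpy as np

# Monic cubic x^3 - 6x^2 + 11x - 6 = (x - 1)(x - 2)(x - 3).
coeffs = [1, -6, 11, -6]
roots = np.roots(coeffs)

# Viete's formulas for a monic cubic x^3 + c1 x^2 + c2 x + c3:
# sum of roots = -c1 = 6, product of roots = -c3 = 6.
print(round(float(roots.sum().real), 6))      # sum of the roots
print(round(float(np.prod(roots).real), 6))   # product of the roots
```

Both printed values agree with the coefficients $-a_1/a_0 = 6$ and $(-1)^3 a_3/a_0 = 6$, up to floating-point error.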

As far as applications are concerned, the most important case is that of coefficients and roots of an equation that are numbers of a certain kind (e.g. rational, real or complex). The case of the coefficients and roots being elements of an arbitrary field may also be considered.

If a given number (or element of a field) $c$ is a root of the polynomial $f _ {n} (x)$, then, by the Bezout theorem, $f _ {n} (x)$ is divisible by $x - c$ without remainder. The division may be performed according to the Horner scheme.
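The Horner scheme can be sketched in a few lines; `horner_divide` is a hypothetical helper name chosen here for illustration, not taken from the source:

```python
def horner_divide(coeffs, c):
    """Divide a_0 x^n + ... + a_n by (x - c) using the Horner scheme.

    Returns (quotient coefficients, remainder). The remainder equals f(c),
    so it is 0 exactly when c is a root (Bezout's theorem).
    """
    partial = [coeffs[0]]
    for a in coeffs[1:]:
        partial.append(partial[-1] * c + a)
    return partial[:-1], partial[-1]

# f(x) = x^3 - 7x + 6 has the root c = 2:
# x^3 - 7x + 6 = (x - 2)(x^2 + 2x - 3).
q, r = horner_divide([1, 0, -7, 6], 2)
print(q, r)   # [1, 2, -3] 0
```

A zero remainder confirms that $c = 2$ is a root and yields the quotient $x^2 + 2x - 3$ at the same time.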

A number (or element of a field) $c$ is called a root of multiplicity $k$ of a polynomial $f(x)$ (where $k$ is a non-negative integer) if $f(x)$ is divisible by ${(x - c) } ^ {k}$, but is not divisible by ${(x - c) } ^ {k+1}$. Roots of multiplicity one are called simple roots of the polynomial; other roots are called multiple roots.

Each polynomial $f(x)$ of degree $n > 0$ with coefficients in a field $P$ has at most $n$ roots in this field $P$, each root being counted the number of times equal to its multiplicity (consequently, there are not more than $n$ different roots).

In an algebraically closed field any polynomial of degree $n$ has exactly $n$ roots (counted according to their multiplicity). In particular, this statement also applies to the field of complex numbers.

Equation (1) of degree $n$ with coefficients from a field $P$ is called irreducible over $P$ if the polynomial (2) is irreducible over this field, i.e. cannot be represented as the product of polynomials over $P$ of degrees lower than $n$. Otherwise, both the polynomial and the corresponding equation are called reducible. Polynomials of degree zero and zero itself are not considered to be reducible or irreducible.

Whether a given polynomial is reducible or irreducible depends on the field in question. Thus, the polynomial $x ^ {2} - 2$ is irreducible over the field of rational numbers, since it has no rational roots, but is reducible over the field of real numbers: $x ^ {2} - 2 = (x + \sqrt 2 )(x - \sqrt 2 )$. Similarly, the polynomial $x ^ {2} + 1$ is irreducible over the field of real numbers, but is reducible over the field of complex numbers. Over the field of complex numbers only polynomials of the first degree are irreducible, and every polynomial can be decomposed into linear factors. Over the field of real numbers only polynomials of the first degree and polynomials of the second degree without real roots are irreducible (and every polynomial can be decomposed into a product of linear and irreducible quadratic polynomials).

Irreducible polynomials of every degree exist over the field of rational numbers; examples are the polynomials of the form $x ^ {n} + 2$. The irreducibility of a polynomial over the field of rational numbers can often be established by Eisenstein's criterion: if, for a polynomial (2) of degree $n > 0$ with integral coefficients, there exists a prime number $p$ such that the leading coefficient $a _ {0}$ is not divisible by $p$, all the remaining coefficients are divisible by $p$, and the constant term $a _ {n}$ is not divisible by $p ^ {2}$, then this polynomial is irreducible over the field of rational numbers.
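The divisibility conditions of Eisenstein's criterion are directly checkable; the following sketch (with the hypothetical function name `eisenstein_applies`) tests them for a given prime:

```python
def eisenstein_applies(coeffs, p):
    """Check Eisenstein's criterion for a_0 x^n + ... + a_n at the prime p.

    True means the criterion applies, so the polynomial is irreducible
    over the rationals; False only means this particular test is silent.
    """
    a0, an = coeffs[0], coeffs[-1]
    return (a0 % p != 0                               # p does not divide a_0
            and all(a % p == 0 for a in coeffs[1:])   # p divides a_1, ..., a_n
            and an % (p * p) != 0)                    # p^2 does not divide a_n

# x^4 + 2 is irreducible over Q: take p = 2.
print(eisenstein_applies([1, 0, 0, 0, 2], 2))   # True
# x^2 + 4 fails the test at p = 2, since 4 is divisible by 2^2.
print(eisenstein_applies([1, 0, 4], 2))         # False
```

Note the one-sided nature of the criterion: a `False` result does not prove reducibility.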

Let $P$ be an arbitrary field. For each polynomial $f(x)$ of degree $n > 1$ that is irreducible over $P$, there exists an extension of $P$ containing at least one root of $f(x)$ (cf. Extension of a field); moreover, there exists a splitting field of $f(x)$, i.e. a minimal extension of $P$ in which this polynomial can be decomposed into linear factors. Every field has an algebraically closed extension.

Any algebraic equation of degree not exceeding four is solvable by radicals. Solutions of problems involving special forms of equations of the second and third degrees were known already to the ancient Babylonians (2000 B.C.) (cf. Quadratic equation; Cubic equation). The theory of the solution of quadratic equations was first expounded in the book Arithmetica by Diophantus (3rd century A.D.). In the 16th century, Italian mathematicians obtained the solution by radicals of equations of the third and fourth degrees with literal coefficients (cf. Cardano formula; Ferrari method). During the 300 years which followed, fruitless efforts were made to find solutions by radicals of equations of the fifth and higher degrees with literal coefficients. It was finally demonstrated by N.H. Abel in 1826 that such solutions do not exist.

Abel's theorem in modern formulation: Let (1) be an equation of degree $n > 4$ with coefficients $a _ {0}, \dots, a _ {n}$; let $K$ be an arbitrary field and let $P$ be the field of rational functions in $a _ {0}, \dots, a _ {n}$ with coefficients from $K$; then the roots of equation (1) (lying in some extension of $P$) cannot be expressed in terms of the coefficients of this equation by a finite number of operations of addition, subtraction, multiplication and division (operations which are meaningful in $P$) and root extraction (which is meaningful in an extension of $P$). In other words, the general equation of degree $n > 4$ is unsolvable by radicals.

However, Abel's theorem does not contradict the fact that some algebraic equations with numerical coefficients (or with coefficients from a given field) are solvable by radicals. Some special equations of degree $n$ are solvable by radicals (e.g. a two-term equation). A complete solution of the problem when an algebraic equation is solvable by radicals was given by E. Galois about the year 1830.

The fundamental theorem on the solvability of algebraic equations by radicals in Galois theory can be stated as follows. Let $f(x)$ be a polynomial with coefficients from a field $K$ that is irreducible over $K$. Then: 1) if at least one root of the equation $f(x) = 0$ can be expressed by radicals in terms of the coefficients of this equation and if the exponents of the radicals are not divisible by the characteristic of $K$, then the Galois group of this equation is solvable over $K$; 2) conversely, if the Galois group of the equation $f(x) = 0$ is solvable over $K$ and the characteristic of $K$ is zero or is greater than all orders of the composition factors of this group, then every root of the equation can be expressed by radicals in terms of its coefficients; moreover, all exponents of the radicals $a ^ {1/n}$ involved can be taken to be prime numbers, and the binomial equations $x ^ {n} - a = 0$ corresponding to these radicals can be taken to be irreducible over the fields to which the radicals are adjoined.

This theorem was proved by Galois for the case in which $K$ is the field of rational numbers; in this case all conditions involving the characteristic of the field $K$ in the above formulation become superfluous.

Abel's theorem is a consequence of Galois' theorem, since the Galois group of the general equation of degree $n$ (with literal coefficients), over the field $P$ of rational functions in the coefficients of the equation with coefficients from an arbitrary field $K$, is the symmetric group $S _ {n}$, which is unsolvable for $n > 4$. For any $n > 4$ there exist equations of degree $n$ with rational (and even integral) coefficients that are unsolvable by radicals. An example of such an equation for $n = 5$ is $x ^ {5} - p ^ {2} x - p = 0$, where $p$ is a prime number. In Galois theory an algebraic equation is solved by reducing it to a chain of simpler equations, called resolvents (cf. Resolvent) of the original equation.

The solvability of equations by radicals is closely connected with problems involving geometrical constructions with ruler and compasses; in particular, with the problem of the division of the circle into $n$ equal parts (cf. Cyclotomic polynomials; Primitive root).

## Algebraic equations in one unknown with numerical coefficients.

Methods of approximate calculation (e.g. the parabola method) are generally used to find the roots of algebraic equations of degree higher than two with coefficients from the field of real or complex numbers. It is convenient to begin by getting rid of multiple roots. A number $c$ is a root of multiplicity $k$ of a polynomial $f(x)$ if and only if the polynomial and its derivatives up to order $k - 1$, inclusive, vanish at $x = c$, while $f ^ { (k) } (c) \neq 0$. If $f(x)$ is divided by the greatest common divisor $d(x)$ of this polynomial and its derivative, the resulting polynomial has the same roots as $f(x)$, but each of multiplicity one. It is also possible to construct polynomials whose simple roots are precisely the roots of $f(x)$ of a given multiplicity. A polynomial has multiple roots if and only if its discriminant vanishes.
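The reduction to simple roots via $f / \gcd(f, f')$ can be carried out symbolically; this sketch assumes SymPy is available and uses only its documented `Poly` interface:

```python
from sympy import Poly, gcd, symbols

x = symbols('x')

# f has 1 as a double root and 2 as a simple root:
# f(x) = (x - 1)^2 (x - 2).
f = Poly((x - 1)**2 * (x - 2), x)

# Dividing f by gcd(f, f') strips multiplicities but keeps every root.
d = gcd(f, f.diff(x))
squarefree = f.div(d)[0]

print(squarefree.all_coeffs())   # coefficients of (x - 1)(x - 2) = x^2 - 3x + 2
```

The quotient has exactly the roots $1$ and $2$, each now of multiplicity one.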

The determination of the number of roots and of bounds on their size is a frequently occurring problem. The number

$$1 + \frac{\max _ {i > 0 } | a _ {i} | }{| a _ {0} | }$$

can be taken as an upper bound for the modulus of each root (real or complex) of the algebraic equation (1) with arbitrary complex coefficients. If the coefficients are real, the Newton method usually yields a sharper bound. The determination of a lower bound for the positive roots, and of upper and lower bounds for the negative roots, reduces to the determination of an upper bound for the positive roots.
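The bound above is cheap to compute and easy to verify numerically; the following sketch assumes NumPy is available, and the function name `root_modulus_bound` is chosen here for illustration:

```python
import numpy as np

def root_modulus_bound(coeffs):
    """Cauchy-type bound 1 + max_{i>0} |a_i| / |a_0| on every root modulus."""
    a0, rest = coeffs[0], coeffs[1:]
    return 1 + max(abs(a) for a in rest) / abs(a0)

coeffs = [1, -3, 2, 5]                  # x^3 - 3x^2 + 2x + 5
bound = root_modulus_bound(coeffs)
print(bound)                            # 6.0

# Every root of the cubic indeed lies inside the disc |x| <= 6.
print(all(abs(r) <= bound for r in np.roots(coeffs)))   # True
```

The bound is usually far from sharp (here the largest root modulus is well below 6), which is why the article recommends Newton's method for a tighter real-coefficient bound.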

The simplest method to determine the number of real roots is to use the Descartes theorem. If it is known that all the roots of a given polynomial are real (for example, for the characteristic polynomial of a real symmetric matrix), then Descartes' theorem yields the exact number of roots. By considering the polynomial $f (-x)$, the number of negative roots of $f(x)$ can be found using the same theorem. The exact number of real roots located in a given interval (in particular, the number of all real roots) of a polynomial with real coefficients without multiple roots can be found by the Sturm theorem. Descartes' theorem is a special case of the Budan–Fourier theorem, which yields an estimate from above for the number of real roots of a polynomial with real coefficients lying in a certain fixed interval.

It is sometimes desirable to find roots of a special type. For instance, Hurwitz' criterion is a necessary and sufficient condition for all the roots of an equation (with complex coefficients) to have negative real parts (cf. Routh–Hurwitz criterion).

There exists a method for calculating all rational roots of a polynomial with rational coefficients. A polynomial $f (x)$ with rational coefficients has the same roots as the polynomial $g(x)$ with integral coefficients obtained from $f(x)$ by multiplication by a common multiple of all denominators of the coefficients of $f(x)$. The only possible rational roots of a polynomial $g (x) = b _ {0} x ^ {n} + b _ {1} x ^ {n-1} + \dots + b _ {n}$, $b _ {n} \neq 0$, with integral coefficients are the irreducible fractions $p/q$ in which $p$ is a divisor of $b _ {n}$ and $q$ is a divisor of $b _ {0}$ (and, among these, only those fractions such that, for any integer $m$, the number $g(m)$ is divisible by $p - mq$). If $b _ {0} = 1$, then all rational roots of $g(x)$ (if any) are integers dividing the constant term, and can be found by trial and error.
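The candidate search described above translates directly into exact integer arithmetic (each candidate $p/q$ is tested by clearing denominators); `rational_roots` is a hypothetical name used here for illustration:

```python
from math import gcd

def rational_roots(coeffs):
    """All rational roots p/q of b_0 x^n + ... + b_n, integer coeffs, b_n != 0.

    Candidates: p divides the constant term, q divides the leading
    coefficient, with p/q in lowest terms. Returns (p, q) pairs with q > 0.
    """
    def divisors(m):
        m = abs(m)
        return [d for d in range(1, m + 1) if m % d == 0]

    n = len(coeffs) - 1
    roots = set()
    for p in divisors(coeffs[-1]):
        for q in divisors(coeffs[0]):
            if gcd(p, q) != 1:
                continue
            for num in (p, -p):
                # q^n * g(num/q), evaluated exactly over the integers.
                val = sum(a * num ** (n - i) * q ** i
                          for i, a in enumerate(coeffs))
                if val == 0:
                    roots.add((num, q))
    return sorted(roots)

# 2x^3 - 3x^2 - 3x + 2 has the rational roots -1, 1/2 and 2.
print(rational_roots([2, -3, -3, 2]))   # [(-1, 1), (1, 2), (2, 1)]
```

The extra divisibility test on $g(m)$ mentioned in the text would prune candidates before evaluation; the brute-force version above simply tests every candidate exactly.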

## Systems of algebraic equations.

Concerning systems of algebraic equations of the first degree, see Linear equation.

A system of any two algebraic equations of any degree in two unknowns $x$ and $y$ may be written as:

$$\tag{3 } \left . \begin{array}{l} f ( x , y ) = a _ {0} ( x ) y ^ {n} + a _ {1} ( x ) y ^ {n - 1 } + \dots + a _ {n} ( x ) = 0 , \\ g ( x , y ) = b _ {0} ( x ) y ^ {s} + b _ {1} ( x ) y ^ {s - 1 } + \dots + b _ {s} ( x ) = 0 , \\ \end{array} \right \}$$

where $a _ {i} (x), b _ {j} (x)$ are polynomials in one unknown $x$. If a certain numerical value is assigned to $x$, a system of two equations in one unknown $y$ with constant coefficients $a _ {i} , b _ {j}$ is obtained. The resultant of this system is the following determinant:

$$R (f, g) = \ \left | \begin{array}{cccccccc} a _ {0} &a _ {1} &\dots &a _ {n} & 0 & 0 &\dots & 0 \\ 0 &a _ {0} &\dots &\dots &a _ {n} &\dots &\dots &\dots \\ \dots &\dots &\dots &\dots &\dots & 0 &\dots & 0 \\ 0 & 0 &\dots &\dots & 0 &a _ {0} &\dots &a _ {n} \\ b _ {0} &b _ {1} &\dots &b _ {s} & 0 & 0 &\dots & 0 \\ 0 &b _ {0} &\dots &\dots &b _ {s} &\dots &\dots &\dots \\ \dots &\dots &\dots &\dots &\dots & 0 &\dots & 0 \\ 0 & 0 &\dots &\dots & 0 &b _ {0} &\dots &b _ {s} \\ \end{array} \ \right | .$$

(There are $s$ rows of $a$'s and $n$ rows of $b$'s in this determinant.) The following statement holds: a number $x _ {0}$ is a root of the resultant $R (f, g)$ if and only if the polynomials $f (x _ {0} , y )$ and $g (x _ {0} , y )$ have a common root $y _ {0}$, or both leading coefficients $a _ {0} ( x _ {0} )$ and $b _ {0} ( x _ {0} )$ are equal to zero.

Thus, in order to solve the system (3) one must find all roots of the resultant $R (f, g)$, substitute each of these roots into the system (3), and find the common roots of the resulting two equations in one unknown $y$. One must also find the common roots of the two polynomials $a _ {0} (x)$ and $b _ {0} (x)$, substitute them into (3), and verify whether the resulting equations in one unknown $y$ have common roots. In other words, the solution of a system of two algebraic equations in two unknowns reduces to the solution of one equation in one unknown and the calculation of the common roots of two equations in one unknown (the common roots of two or more polynomials in one unknown are the roots of their greatest common divisor).
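For constant coefficients the Sylvester determinant above can be built and evaluated directly; this sketch assumes SymPy is available (for an exact determinant), and `sylvester_resultant` is an illustrative name:

```python
from sympy import Matrix

def sylvester_resultant(a, b):
    """Resultant of two polynomials via the Sylvester determinant.

    a, b are coefficient lists of degrees n and s; the matrix has
    s shifted rows of a's and n shifted rows of b's, as in the text.
    """
    n, s = len(a) - 1, len(b) - 1
    size = n + s
    rows = []
    for i in range(s):   # s shifted copies of the a-row
        rows.append([0] * i + a + [0] * (size - n - 1 - i))
    for i in range(n):   # n shifted copies of the b-row
        rows.append([0] * i + b + [0] * (size - s - 1 - i))
    return Matrix(rows).det()

# x^2 - 1 and x - 1 share the root 1, so the resultant vanishes.
print(sylvester_resultant([1, 0, -1], [1, -1]))   # 0
# x^2 - 1 and x - 3 have no common root: nonzero resultant.
print(sylvester_resultant([1, 0, -1], [1, -3]))   # 8
```

Applied to the system (3), the same construction with polynomial entries $a_i(x)$, $b_j(x)$ yields $R(f, g)$ as a polynomial in $x$ whose roots are the candidate $x_0$ values.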

Systems of any number of algebraic equations in any number of unknowns are solved in a similar manner; this involves complicated calculations and is connected with so-called elimination theory.

How to Cite This Entry:
Algebraic equation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Algebraic_equation&oldid=45061
This article was adapted from an original article by I.V. Proskuryakov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.