From Encyclopedia of Mathematics

The difference $x-a$, where $a$ is a given number regarded as an approximation to a quantity with exact value $x$. The absolute value $|x-a|$ is called the absolute error, and the ratio of $|x-a|$ to $|a|$ is called the relative error of $a$. To characterize an error one usually states bounds on it. A number $\Delta(a)$ such that

$$ |x-a| \leq \Delta(a), $$ is called a bound on the absolute error. A number $\delta(a)$ such that

$$ \left|\frac{x-a}{a}\right| \leq \delta(a) $$ is called a bound on the relative error. A bound on the relative error is frequently expressed as a percentage. Whenever possible, the smallest numbers satisfying these inequalities are taken as $\Delta(a)$ and $\delta(a)$.
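As a numerical illustration (a sketch of my own, not part of the original article), take $a = 3.14$ as an approximation of $x = \pi$ and compute the absolute and relative errors together with bounds on them; the variable names below are chosen for this example only:

```python
import math

x = math.pi      # exact value
a = 3.14         # approximate value

abs_error = abs(x - a)            # |x - a|, the absolute error
rel_error = abs_error / abs(a)    # |x - a| / |a|, the relative error

# Any Delta >= abs_error serves as a bound on the absolute error;
# here we round the true error up to a convenient value.
Delta = 0.002                     # valid since |pi - 3.14| < 0.002
delta = Delta / abs(a)            # corresponding relative-error bound

assert abs_error <= Delta
assert rel_error <= delta
print(f"absolute error = {abs_error:.5f}, bound Delta = {Delta}")
print(f"relative error = {100 * rel_error:.4f}%, bound delta = {100 * delta:.4f}%")
```

In the notation of the article, this certifies $\pi = 3.14 \pm 0.002$.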

The information that the number $a$ is the approximate value of $x$ with a bound $\Delta(a)$ of the absolute error is usually stated as:

$$ x = a \pm \Delta(a). $$ The analogous relation for the relative error is written as

$$ x = a(1\pm \delta(a)). $$ The bounds on the absolute and relative errors indicate the maximum possible discrepancy between $x$ and $a$. At the same time, one often uses error characteristics that take into account the nature of the error (for example, the error of a measurement) and the frequencies with which different values of the difference between $x$ and $a$ occur. Probabilistic methods are used in the latter approach (see Errors, theory of).

In the numerical solution of a problem, the error in the result is due to inaccuracies occurring in the formulation and in the methods of solution. The error arising because of inaccuracy in the mathematical description of a real process is called the error of the model; that arising from inaccuracy in specifying the initial data is called the input-data error; that due to inaccuracy in the method of solution is called the methodological error; and that due to inaccuracy in the computations is called the computational error (rounding error). Sometimes the error of the model and the input-data error are combined under the name inherent error.
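The distinction between methodological and computational error can be seen in a standard example (my own illustration, assuming binary64 floating-point arithmetic; not from the article): approximating $f'(x)$ by a forward difference has a methodological error of order $h$, while finite-precision arithmetic contributes a rounding error of order $\varepsilon/h$, so the total error is smallest at an intermediate step size.

```python
import math

def forward_difference(f, x, h):
    """Approximate f'(x) by the forward difference (f(x + h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

x = 1.0
exact = math.cos(x)  # exact derivative of sin at x

# Shrinking h reduces the methodological (truncation) error,
# but below a certain point the rounding error dominates and
# the total error grows again.
for h in (1e-1, 1e-4, 1e-8, 1e-12):
    approx = forward_difference(math.sin, x, h)
    print(f"h = {h:.0e}: error = {abs(approx - exact):.2e}")
```

Running this shows the error first decreasing with $h$ and then increasing once rounding error takes over.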

During a calculation, initial errors are carried over from operation to operation; they accumulate and generate new errors. The occurrence and propagation of errors in computations is the subject of special research (see Computational mathematics).




Apart from errors caused by finite-precision arithmetic, discretization errors (arising when a differential equation is replaced by a difference equation) and truncation errors (arising when an infinite process, such as the summation of a series, is cut off after finitely many steps) are a major subject of study in numerical analysis. See Difference methods.
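A discretization error of this kind can be observed directly (my own illustration, not from the article): applying Euler's method to the equation $y' = y$, $y(0) = 1$, whose exact solution at $t = 1$ is $e$, the error decreases roughly in proportion to the step size, reflecting the first-order accuracy of the method.

```python
import math

def euler(f, y0, t0, t1, n):
    """Integrate y' = f(t, y) from t0 to t1 with n Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

exact = math.e  # solution of y' = y, y(0) = 1, evaluated at t = 1

# Halving the step size roughly halves the discretization error.
for n in (10, 20, 40, 80):
    err = abs(euler(lambda t, y: y, 1.0, 0.0, 1.0, n) - exact)
    print(f"n = {n:3d}: error = {err:.4e}")
```

For this equation, $n$ Euler steps produce $(1 + 1/n)^n$, so the printed errors are $|e - (1+1/n)^n|$, which shrink by roughly a factor of two at each doubling of $n$.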

This article was adapted from an original article by G.D. Kim (originator), which appeared in the Encyclopedia of Mathematics, ISBN 1402006098.