From Encyclopedia of Mathematics

2020 Mathematics Subject Classification: Primary: 65G

rounding, of a number

An approximate representation of a number in a certain number system using a finite number of digits. The need for rounding off arises because, as a rule, calculations cannot deliver the final result with complete accuracy, and the fruitless carrying of superfluous digits must be avoided: every number should be written with only as many symbols as are actually needed.

The number to be rounded off is replaced by another number (said to be a $t$-digit number, i.e. having $t$ digits) that represents it approximately. The resulting error is called the rounding-off error (round-off error, rounding error).

Various methods are used for rounding off a number. The simplest is to discard all digits of the number beyond the $t$-th place (chopping, or truncation). The absolute rounding-off error in this case does not exceed one unit in the $t$-th digit of the number. The method usually used in hand calculation is to round off to the nearest $t$-digit number; the absolute error then does not exceed half a unit in the $t$-th digit of the number being rounded off. Among all methods of rounding off to $t$ digits, this one gives the minimum possible error.
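The two methods can be contrasted in a short Python sketch (the function names and the `decimal`-based approach are this example's own, chosen for illustration): both round to $t$ significant digits, one by discarding the trailing digits and one by rounding to the nearest $t$-digit number.

```python
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

def chop(x, t):
    """Discard all digits beyond the t-th significant digit (truncation)."""
    d = Decimal(repr(x))
    exp = d.adjusted()                  # exponent of the leading digit
    q = Decimal(1).scaleb(exp - t + 1)  # one unit in the t-th digit
    return float(d.quantize(q, rounding=ROUND_DOWN))

def round_nearest(x, t):
    """Round to the nearest t-digit number."""
    d = Decimal(repr(x))
    exp = d.adjusted()
    q = Decimal(1).scaleb(exp - t + 1)
    return float(d.quantize(q, rounding=ROUND_HALF_UP))

print(chop(2.71828, 3))          # 2.71 -- error up to one unit in the 3rd digit
print(round_nearest(2.71828, 3)) # 2.72 -- error at most half a unit
```

For $2.71828$ with $t = 3$, truncation gives $2.71$ (error $0.00828$, within one unit of the third digit), while rounding to the nearest gives $2.72$ (error $0.00172$, within half a unit), matching the error bounds stated above.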

The rounding-off methods used by a computer are determined by its purpose and technical capabilities and, as a rule, yield the nearest $t$-digit number to within the accuracy of the rounding off. Computers have been based on one of two main systems of arithmetic: the floating-point system and the fixed-point system. In the floating-point system (which is almost exclusively used nowadays), the result of rounding off has a fixed number of significant digits; in the fixed-point system, it has a fixed number of digits after the decimal point. In the first case one speaks of rounding off to $t$ digits, in the second, of rounding off to $t$ digits after the decimal point. In the first case the relative rounding-off error is controlled, in the second the absolute error.
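The difference between the two regimes can be made concrete with a minimal sketch (the helper names are illustrative, not part of any standard API): Python's built-in `round` to $t$ decimal places models the fixed-point case, while formatting to $t$ significant digits models the floating-point case.

```python
def fixed_point(x, t):
    """Round to t digits after the decimal point: absolute error <= 0.5 * 10**(-t)."""
    return round(x, t)

def floating_point(x, t):
    """Round to t significant digits: relative error <= 0.5 * 10**(1 - t)."""
    return float(f"{x:.{t}g}")

for x in (12345.678, 0.0012345678):
    fx = fixed_point(x, 3)
    fl = floating_point(x, 3)
    print(x, fx, abs(x - fx), fl, abs(x - fl) / abs(x))
```

For the large number, rounding to three significant digits gives $12300$, a large absolute error but a small relative one; for the small number, rounding to three decimal places gives $0.001$, a small absolute error but a relative error of nearly $20\%$. This is why the floating-point system controls relative error and the fixed-point system absolute error.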

In connection with the use of computers, research has been done on the accumulation of rounding-off errors in large calculations. Analysis of accumulated error in numerical methods makes it possible to classify methods by their susceptibility to rounding-off errors, to develop strategies for bringing the methods into computational practice, to bound the rounding-off errors, and to estimate the accuracy of the final result.
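A classical illustration of such analysis, not taken from this article but standard in the literature, is compensated (Kahan) summation: by carrying the rounding error of each addition forward in a correction term, it accumulates far less error than naive summation.

```python
import math

def kahan_sum(xs):
    """Compensated (Kahan) summation: the rounding error of each
    addition is recovered and carried forward in the term c."""
    s = 0.0
    c = 0.0  # running compensation for lost low-order digits
    for x in xs:
        y = x - c
        t = s + y        # low-order digits of y are lost here...
        c = (t - s) - y  # ...and recovered algebraically here
        s = t
    return s

xs = [0.1] * 10**6       # 0.1 is not exactly representable in binary
naive = sum(xs)
compensated = kahan_sum(xs)
exact = math.fsum(xs)    # correctly rounded reference sum
print(abs(naive - exact))        # visible accumulated rounding error
print(abs(compensated - exact))  # much smaller error
```

Summing $0.1$ a million times, the naive loop accumulates an error visible in the sixth decimal place, while the compensated sum agrees with the correctly rounded result; this is the kind of behaviour that the error analysis described above predicts and quantifies.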


This article was adapted from an original article by G.D. Kim (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.