# Differentiation, numerical

Finding the derivative of a function by numerical methods. Such differentiation is resorted to when the methods of differential calculus are inapplicable (e.g. the function is known only from a table) or involve considerable difficulties (the analytic expression of the function is complicated).

Let a function $u$ be defined on an interval $[ a , b ]$ and let the nodal points $x _ {i}$, $a = x _ {1} < x _ {2} < \dots < x _ {n} = b$, be given. The totality of points $( x _ {i} , u _ {i} = u ( x _ {i} ) )$, $i = 1, \dots, n$, is known as a table. The result of numerical differentiation of the table is a function $u _ {n} ^ {k} ( x)$ which approximates, in some sense, the $k$-th derivative ${d ^ {k} u ( x) } / {dx ^ {k} }$ of the function $u$ on some set $X _ {n} ^ {k}$ of points $x$. The use of numerical differentiation is expedient only if an insignificant amount of computational effort is required to obtain the value $u _ {n} ^ {k} ( x)$ for each $x \in X _ {n} ^ {k}$. Linear methods of numerical differentiation are commonly employed; the result thus obtained is written in the form

$$\tag{1 } u _ {n} ^ {k} ( x) = \sum _ {i = 1 } ^ { n } u _ {i} a _ {i} ^ {k} ( x) ,$$

where $a _ {i} ^ {k} ( x)$ are functions defined on $X _ {n} ^ {k}$. The most popular method for obtaining formulas (1) is as follows: One constructs the function

$$u _ {n} ( x) = \sum _ {i = 1 } ^ { n } u _ {i} a _ {i} ( x) ,$$

interpolating $u ( x)$, and assumes that

$$u _ {n} ^ {k} ( x) \equiv \frac{d ^ {k} u _ {n} ( x) }{d x ^ {k} } = \sum _ {i = 1 } ^ { n } u _ {i} \frac{d ^ {k} a _ {i} ( x) }{d x ^ {k} } .$$
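For $k = 1$, this interpolation approach can be sketched by differentiating the Lagrange interpolant directly. The following Python function is an illustrative implementation (the name `lagrange_derivative` and the choice of nodes are not part of the article): it evaluates $\sum_i u_i \, a_i'(x)$, where $a_i$ are the Lagrange basis polynomials.

```python
def lagrange_derivative(xs, us, x):
    """Approximate u'(x) by differentiating the Lagrange interpolant
    through the table (xs[i], us[i]).  Derivative of the i-th basis:
    L_i'(x) = sum_{m != i} prod_{j != i, m} (x - x_j) / prod_{j != i} (x_i - x_j)."""
    n = len(xs)
    total = 0.0
    for i in range(n):
        # denominator prod_{j != i} (x_i - x_j) of the i-th basis polynomial
        denom = 1.0
        for j in range(n):
            if j != i:
                denom *= xs[i] - xs[j]
        # derivative of the numerator, by the product rule
        dbasis = 0.0
        for m in range(n):
            if m == i:
                continue
            prod = 1.0
            for j in range(n):
                if j != i and j != m:
                    prod *= x - xs[j]
            dbasis += prod
        total += us[i] * dbasis / denom
    return total
```

Since the interpolant through $n$ nodes reproduces polynomials of degree $< n$ exactly, a table of $u(x) = x^2$ on three nodes yields the exact derivative $u'(1.5) = 3$.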

The accuracy of algorithms based on the interpolation formulas of Lagrange, Newton and others depends strongly on the choice of the manner of interpolation, and may sometimes be very low even if the function $u$ is sufficiently smooth and the number of nodal points is large. Algorithms of numerical differentiation involving spline interpolation are often free from this disadvantage. If only the approximate values of the derivative at the nodal points $x _ {i}$ are needed, formula (1) assumes the form

$$\tag{2 } u _ {n} ^ {k} ( x _ {j} ) = \sum _ {i = 1 } ^ { n } u _ {i} a _ {ij} ^ {k}$$

and $u _ {n} ^ {k} ( x _ {j} )$ is fully defined by specifying a coefficient matrix $a _ {ij} ^ {k}$ for a given $k$. Formulas such as (2) are known as difference formulas for numerical differentiation. The coefficients $a _ {ij} ^ {k}$ of such formulas are determined from the condition that the difference

$$\frac{d ^ {k} u ( x _ {j} ) }{d x ^ {k} } - \sum _ {i = 1 } ^ { n } u _ {i} a _ {ij} ^ {k} = \xi _ {j} ^ {n}$$

has the highest order of smallness with respect to $h _ {n} = \max _ {i} | x _ {i+1} - x _ {i} |$. As a rule, formulas (2) are very simple and easy to handle. Thus, if $h = h _ {n} = x _ {2} - x _ {1} = \dots = x _ {n} - x _ {n-1}$, they assume the form

$$\frac{du ( x _ {j} ) }{dx} = \frac{u _ {j+1} - u _ {j} }{h} + O ( h ) = \frac{u _ {j} - u _ {j-1} }{h} + O ( h ) ,$$

$$\frac{d u ( x _ {j} ) }{dx} = \frac{u _ {j+1} - u _ {j-1} }{2h} + O ( h ^ {2} ) ,$$

$$\frac{d ^ {2} u ( x _ {j} ) }{dx ^ {2} } = \frac{u _ {j+1} - 2 u _ {j} + u _ {j-1} }{h ^ {2} } + O ( h ^ {2} ) .$$
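These three difference formulas translate directly into code. The sketch below (function names are illustrative) implements the forward, central and second-difference quotients for a function available pointwise; for smooth $u$ the central formula is accurate to $O(h^2)$, the forward formula only to $O(h)$.

```python
import math

def forward_difference(u, x, h):
    """First-order accurate: (u(x+h) - u(x)) / h = u'(x) + O(h)."""
    return (u(x + h) - u(x)) / h

def central_difference(u, x, h):
    """Second-order accurate: (u(x+h) - u(x-h)) / (2h) = u'(x) + O(h^2)."""
    return (u(x + h) - u(x - h)) / (2 * h)

def second_difference(u, x, h):
    """Second derivative: (u(x+h) - 2u(x) + u(x-h)) / h^2 = u''(x) + O(h^2)."""
    return (u(x + h) - 2 * u(x) + u(x - h)) / h ** 2
```

For example, with $u = \sin$ at $x = 1$ and $h = 10^{-4}$, the central quotient matches $\cos 1$ to roughly $h^2$, while the forward quotient is off by roughly $h/2 \cdot |u''|$.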

Numerical differentiation algorithms are often applied to tables in which the values $u ( x _ {i} )$ are given (or obtained) inaccurately. In such cases a preliminary smoothing is needed, since direct application of the formulas may result in large errors.
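The need for smoothing can be illustrated on a small synthetic table. In the sketch below (all names and the worst-case alternating noise pattern $\pm\varepsilon$ are illustrative assumptions, not from the article), forward differences of a noisy table of $u(x) = x$ carry errors of order $\varepsilon / h$, while a simple 3-point moving average applied first reduces them.

```python
def forward_diff_table(us, h):
    """Forward differences (u_{i+1} - u_i) / h over a uniform table."""
    return [(us[i + 1] - us[i]) / h for i in range(len(us) - 1)]

def smooth3(us):
    """3-point moving average as a crude smoother; endpoints unchanged."""
    inner = [(us[i - 1] + us[i] + us[i + 1]) / 3 for i in range(1, len(us) - 1)]
    return [us[0]] + inner + [us[-1]]

h, eps = 0.01, 1e-3
xs = [i * h for i in range(11)]
# table of u(x) = x (so u' = 1) perturbed by alternating noise +-eps
us = [x + eps * (-1) ** i for i, x in enumerate(xs)]

raw = forward_diff_table(us, h)                # errors reach 2*eps/h = 0.2
smoothed = forward_diff_table(smooth3(us), h)  # interior errors ~ 2*eps/(3h)
```

Here a relative noise of $10^{-3}$ in the data produces a 20% error in the raw difference quotients, and even the crude smoother cuts the interior error by a factor of three; more careful smoothing (e.g. least-squares or spline fitting) does better still.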

How to Cite This Entry:
Differentiation, numerical. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Differentiation,_numerical&oldid=46696
This article was adapted from an original article by V.A. Morozov (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.