# Interpolation in numerical mathematics

A method for approximating or precisely finding some quantity by known individual values of it or of other quantities related to it. On the basis of interpolation a whole series of approximate methods for solving mathematical problems has been developed.

Most significant in numerical mathematics is the problem of constructing means for the interpolation of functions. The interpolation of functionals and operators is also widely used in constructing numerical methods.

## The approximate representation and calculation of functions.

Interpolation of functions is considered as one of the ways of approximating them. Interpolating a function $ f ( x) $ on a segment $ [ a , b ] $ by its values at the nodes $ x _ {k} $ of a grid $ \Delta _ {n} = \{ a \leq x _ {0} < \dots < x _ {n} \leq b \} $ means constructing another function $ L _ {n} ( x) \equiv L _ {n} ( f ; x ) $ such that $ L _ {n} ( x _ {k} ) = f ( x _ {k} ) $, $ k = 0 \dots n $. In a more general setting, the problem of interpolating a function $ f ( x) $ consists of constructing $ L _ {n} ( x) $ not only by prescribing values on a grid $ \Delta _ {n} $, but also derivatives at individual nodes, up to a certain order, or by describing some other relation connecting $ f ( x) $ and $ L _ {n} ( x) $.

Usually $ L _ {n} ( x) $ is constructed in the form

$$ L_{n}(x) = \sum_{i=0}^{n} a_{i} \phi_{i}(x) , $$

where $ \{ \phi _ {i} ( x) \} $ is a certain system of linearly independent functions, chosen in advance. Such an interpolation is called linear with respect to $ \{ \phi _ {i} ( x) \} $, while $ L _ {n} ( x) $ is called an interpolation polynomial in the system $ \{ \phi _ {i} ( x) \} $ or an interpolation function.

The choice of $ \{ \phi _ {i} ( x) \} $ is determined by the properties of the function class for which the interpolation formula is constructed. E.g., for the approximation of $ 2 \pi $- periodic functions on $ [ 0 , 2 \pi ] $ one naturally takes the trigonometric system for $ \{ \phi _ {i} ( x) \} $, for the approximation of bounded or increasing functions on $ [ 0 , \infty ) $ one takes the system of rational or exponential functions, taking into account the behaviour of the functions to be approximated at infinity, etc.

Most often one uses algebraic interpolation: $ \phi_{i}(x) = x^{i} $; its simplest variant (linear interpolation with two nodes $ x_{k} $ and $ x_{k+1} $) is defined by the formula

$$ \tag{1} L_{1}(x) = \frac{x - x_{k}}{x_{k+1} - x_{k}} [ f(x_{k+1}) - f(x_{k}) ] + f(x_{k}) , $$

$$ x_{k} \leq x \leq x_{k+1} . $$
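Formula (1) can be sketched in a few lines of Python (the helper name `linear_interp` is illustrative, not from the article):

```python
def linear_interp(f, xk, xk1):
    """Return the linear interpolation polynomial L_1 of formula (1),
    built from the values of f at the two nodes xk and xk1."""
    fk, fk1 = f(xk), f(xk1)
    return lambda x: (x - xk) / (xk1 - xk) * (fk1 - fk) + fk
```

By construction $ L_{1} $ reproduces $ f $ at both nodes and is exact for affine functions.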

Algebraic interpolation of a very high order is seldom used in practice in the problem of approximating functions on an entire segment $ [ a , b ] $. One usually restricts oneself to linear interpolation by (1) or to quadratic interpolation with three nodes on particular segments of the grid, by the formula

$$ \tag{2} L_{2}(x) = \frac{(x - x_{k})(x - x_{k+1})}{(x_{k-1} - x_{k})(x_{k-1} - x_{k+1})} f(x_{k-1}) + \frac{(x - x_{k-1})(x - x_{k+1})}{(x_{k} - x_{k-1})(x_{k} - x_{k+1})} f(x_{k}) + \frac{(x - x_{k-1})(x - x_{k})}{(x_{k+1} - x_{k-1})(x_{k+1} - x_{k})} f(x_{k+1}) , $$

$$ x_{k-1} \leq x \leq x_{k+1} . $$
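Formula (2) transcribes directly (again with illustrative naming); being the Lagrange polynomial on three nodes, it reproduces any quadratic exactly:

```python
def quadratic_interp(f, xkm1, xk, xk1):
    """Return the quadratic interpolation polynomial L_2 of formula (2)
    with nodes xkm1 < xk < xk1."""
    fm1, f0, f1 = f(xkm1), f(xk), f(xk1)
    def L2(x):
        return (fm1 * (x - xk) * (x - xk1) / ((xkm1 - xk) * (xkm1 - xk1))
                + f0 * (x - xkm1) * (x - xk1) / ((xk - xkm1) * (xk - xk1))
                + f1 * (x - xkm1) * (x - xk) / ((xk1 - xkm1) * (xk1 - xk)))
    return L2
```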

There are several ways of writing the algebraic interpolation polynomials (cf. Interpolation formula). Interpolation by splines is finding ever wider application (cf. Spline; Interpolation spline).

Parabolic or cubic splines are most often used in practice. An interpolation spline of defect 1 for a function $ f(x) $ with respect to a given grid $ \Delta_{n} $ is a function $ S_{3}(x) \equiv S_{3}(f; x) $ that is a polynomial of degree three on each segment $ [x_{k-1}, x_{k}] $, belongs to the class of twice continuously-differentiable functions, and satisfies the conditions

$$ S_{3}(x_{k}) = f(x_{k}) , \qquad k = 0 \dots n ; \ \ n \geq 2 . $$

There remain two free parameters in this definition; they are determined by additional boundary conditions: $ S_{3}^{(i)}(a) = S_{3}^{(i)}(b) $, $ i = 1, 2 $; $ S_{3}^{\prime}(a) = a_{n} $, $ S_{3}^{\prime}(b) = b_{n} $; or other conditions.

Besides their direct use in the problem of approximating functions, splines are also employed in solving other problems; the splines are then required to coincide on a grid $ \Delta_{n} $ not only with the values of a function $ f(x) $, but also with those of its derivatives, up to a certain order.

When processing empirical data $ \{ y _ {k} \} $ one often determines the coefficients $ a _ {i} $ in $ L _ {n} ( x) $ by requiring

$$ S = \sum_{k=1}^{m} [ y_{k} - L_{n}(x_{k}) ]^{2} , \qquad m \geq n , $$

to be minimal. The function $ L _ {n} ( x) $ thus constructed is called the interpolation function in the sense of least squares.
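For the simplest case $ L(x) = a_{0} + a_{1} x $, minimizing $ S $ leads to the normal equations, which can be solved in closed form (a sketch; the function name is illustrative):

```python
def least_squares_line(xs, ys):
    """Coefficients a0, a1 of L(x) = a0 + a1*x minimizing
    S = sum_k (y_k - L(x_k))^2, via the 2x2 normal equations."""
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    a1 = (m * sxy - sx * sy) / (m * sxx - sx * sx)
    a0 = (sy - a1 * sx) / m
    return a0, a1
```

When the data lie exactly on a line, the fit recovers it; otherwise it is the least-squares compromise.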

The interpolation of functions of several variables meets a number of fundamental and computational difficulties. E.g., in the case of algebraic interpolation the Lagrange interpolation polynomial of fixed degree need not, generally speaking, exist for an arbitrary system of distinct nodes. In particular, for a function $ f(x, y) $ of two variables such a polynomial $ L_{n}(x, y) $ of total degree at most $ n $ can be constructed for nodes $ (x_{k}, y_{k}) $ only if these nodes do not lie on an algebraic curve of order $ n $.

Another approach to the interpolation of a function $ f(x_{1} \dots x_{m}) $ of several variables is to interpolate first with respect to $ x_{1} $ for fixed values of $ x_{k} $, $ k = 2 \dots m $, then with respect to the next variable $ x_{2} $, and so on. The interpolation polynomial $ L_{n_{1} \dots n_{m}}(x_{1} \dots x_{m}) $ for $ f(x_{1} \dots x_{m}) $ with nodes

$$ ( x_{1}^{k_{1}} \dots x_{m}^{k_{m}} ) , $$

$$ x_{j}^{\nu} \neq x_{j}^{\mu} , \ \nu \neq \mu ; \qquad k_{j} = 0 \dots n_{j} ; \quad j = 1 \dots m , $$

has the form:

$$ L _ {n _ {1} \dots n _ {m} } ( x _ {1} \dots x _ {m} ) = $$

$$ \sum _ { k _ {1} \dots k _ {m} = 0 } ^ { {n _ 1 } \dots n _ {m} } \frac{\omega _ {n _ {1} } ( x _ {1} ) \dots \omega _ {n _ {m} } ( x _ {m} ) }{\omega _ {n _ {1} } ^ \prime ( x _ {1} ^ {k _ {1} } ) {} \dots \omega _ {n _ {m} } ^ \prime ( x _ {m} ^ {k _ {m} } ) } \frac{ f ( x _ {1} ^ {k _ {1} } \dots x _ {m} ^ {k _ {m} } ) }{( x _ {1} - x _ {1} ^ {k _ {1} } ) \dots ( x _ {m} - x _ {m} ^ {k _ {m} } ) } , $$

where

$$ \omega_{n_{j}}(x_{j}) = \prod_{k_{j} = 0}^{n_{j}} ( x_{j} - x_{j}^{k_{j}} ) , \qquad j = 1 \dots m . $$
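For $ m = 2 $ this variable-by-variable construction amounts to a tensor product of one-dimensional Lagrange basis polynomials; a minimal sketch (names illustrative):

```python
def lagrange_basis(nodes, k, x):
    """l_k(x) = prod_{j != k} (x - x_j) / (x_k - x_j)."""
    p = 1.0
    for j, xj in enumerate(nodes):
        if j != k:
            p *= (x - xj) / (nodes[k] - xj)
    return p

def tensor_interp(f, xs, ys, x, y):
    """Interpolate f(x, y) on the product grid xs x ys,
    one variable at a time (tensor-product Lagrange)."""
    return sum(f(xi, yj) * lagrange_basis(xs, i, x) * lagrange_basis(ys, j, y)
               for i, xi in enumerate(xs)
               for j, yj in enumerate(ys))
```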

Interpolation splines for functions of several variables are defined on a multi-dimensional grid, with corresponding changes, in analogy with the one-dimensional case. Interpolation of functions is used for replacing complicated functions by simpler ones that can be computed faster; for the approximate recovery of a function on its entire domain from its values at individual points; and for smoothing functions obtained from a running process. Such problems are of independent interest and also arise as auxiliary questions in many branches of science and technology when solving complex problems. Interpolation of functions is also used in the approximate computation of limit values of functions, in problems of accelerating the convergence of series and sequences, etc.

## Numerically solving systems of non-linear equations.

The general ideas for constructing interpolation methods for solving an equation $ f(x) = 0 $ or a system of equations $ f_{i}(x_{1} \dots x_{m}) = 0 $, $ i = 1 \dots m $, are the same. The difficulties of interpolating functions of several variables show up especially when methods of this kind are investigated and applied in practice to large systems of equations. The basic principle for obtaining interpolation methods for solving an equation $ f(x) = 0 $ is to replace $ f(x) $ by an interpolation polynomial $ L_{n}(x) $ and then to solve $ L_{n}(x) = 0 $. The roots of $ L_{n}(x) = 0 $ are taken as approximate values of those of $ f(x) = 0 $. The interpolation polynomial $ L_{n}(x) $ is also used in constructing iteration methods for solving $ f(x) = 0 $.

E.g., taking for $ x_{n+1} $ the root of the linear algebraic interpolation polynomial constructed from the values $ f(x_{n}) $ and $ f^{\prime}(x_{n}) $ at $ x_{n} $, or from the values $ f(x_{n-1}) $ and $ f(x_{n}) $ at $ x_{n-1} $ and $ x_{n} $, leads to Newton's method (cf. Newton method) and the secant method, respectively:

$$ x_{n+1} = x_{n} - \frac{f(x_{n})}{f^{\prime}(x_{n})} , \qquad x_{n+1} = x_{n} - \frac{f(x_{n})}{f(x_{n-1}, x_{n})} , $$

where $ f(x_{n-1}, x_{n}) $ is the divided difference of $ f(x) $ at $ x_{n-1} $ and $ x_{n} $. Under certain conditions, the sequence $ \{ x_{n} \} $ converges, as $ n \rightarrow \infty $, to a solution of $ f(x) = 0 $.
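The secant iteration can be sketched as follows (a minimal version without safeguards; the function name is illustrative):

```python
def secant(f, x0, x1, tol=1e-12, maxit=50):
    """Secant method: x_{n+1} = x_n - f(x_n) / f(x_{n-1}, x_n),
    where the divided difference replaces the derivative of Newton's method."""
    for _ in range(maxit):
        dd = (f(x1) - f(x0)) / (x1 - x0)   # divided difference f(x_{n-1}, x_n)
        x0, x1 = x1, x1 - f(x1) / dd
        if abs(x1 - x0) < tol:
            break
    return x1
```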

Another way to construct methods for solving an equation $ f ( x) = 0 $ is based on interpolating the inverse function $ x = g ( y) $. Suppose that for interpolating $ g ( y) $ the algebraic Lagrange interpolation polynomial

$$ L_{n}(y) = \sum_{k=0}^{n} \frac{\omega_{n}(y)}{\omega_{n}^{\prime}(y_{k}) ( y - y_{k} )} g(y_{k}) , \qquad \omega_{n}(y) = \prod_{k=0}^{n} ( y - y_{k} ) , $$

is taken. It is assumed that the inverse function exists in a neighbourhood of the required root of $ f(x) = 0 $ and that a table of values $ x_{k} $ and $ y_{k} = f(x_{k}) $, $ k = 0 \dots n $, is available. The next approximate value $ x_{n+1} $ is the value of the interpolation polynomial at zero:

$$ x_{n+1} = - \omega_{n}(0) \sum_{k=0}^{n} \frac{x_{k}}{\omega_{n}^{\prime}(y_{k}) \, y_{k}} . $$
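Evaluating the inverse interpolation polynomial at $ y = 0 $ in its Lagrange form gives a short root-finding routine (a sketch; the data $ (x_{k}, y_{k}) $ are assumed to come from a monotone piece of $ f $, and the name is illustrative):

```python
def inverse_interp_root(xs, ys):
    """Approximate a root of f by interpolating the inverse function
    x = g(y) at the data (y_k, x_k) and evaluating at y = 0."""
    root = 0.0
    for k in range(len(ys)):
        lk = 1.0                       # Lagrange basis l_k(0) in the variable y
        for j in range(len(ys)):
            if j != k:
                lk *= (0.0 - ys[j]) / (ys[k] - ys[j])
        root += xs[k] * lk
    return root
```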

## Numerical integration.

The apparatus of interpolation lies at the basis of the construction of many quadrature and cubature formulas. Such formulas are obtained by replacing the integrand, on the entire domain of integration or on parts of it, by an interpolation polynomial of a certain kind and integrating the latter.

E.g., quadrature formulas of highest algebraic accuracy, also called Gauss quadrature formulas (cf. Gauss quadrature formula)

$$ \int\limits_{a}^{b} p(x) f(x) \, dx \approx \sum_{k=1}^{n} A_{k} f(x_{k}) , $$

where $ p(x) $ is a sign-definite weight function, are obtained by replacing $ f(x) $ by the algebraic interpolation polynomial constructed at the roots $ x_{k} $ of the polynomial of degree $ n $ that is orthogonal with respect to the weight $ p(x) $.
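For the weight $ p(x) \equiv 1 $ and $ n = 2 $ the nodes are the roots $ \pm 1/\sqrt{3} $ of the Legendre polynomial $ P_{2} $, giving the familiar two-point Gauss rule, exact for polynomials of degree $ \leq 3 $ (a sketch):

```python
import math

def gauss_legendre_2(f, a, b):
    """Two-point Gauss quadrature on [a, b] with weight p(x) = 1;
    nodes are the roots of P_2, mapped from [-1, 1] to [a, b]."""
    t = 1.0 / math.sqrt(3.0)
    mid, half = (a + b) / 2.0, (b - a) / 2.0
    return half * (f(mid - half * t) + f(mid + half * t))
```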

If one partitions the integration interval $ [a, b] $ into an even number $ n $ of equal parts of length $ h = (b - a)/n $, and on each double part replaces $ f(x) $ by its quadratic interpolation polynomial with nodes at the end points and the midpoint, then one is led to the compound Simpson formula

$$ \int\limits_{a}^{b} f(x) \, dx \approx \frac{b - a}{3n} [ f_{0} + f_{n} + 2 ( f_{2} + f_{4} + \dots + f_{n-2} ) + 4 ( f_{1} + f_{3} + \dots + f_{n-1} ) ] , $$

where $ f_{k} = f(a + kh) $, $ k = 0 \dots n $.
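The compound Simpson formula transcribes directly ($ n $ even; the name is illustrative):

```python
def simpson(f, a, b, n):
    """Compound Simpson rule with n (even) equal parts of length h = (b-a)/n."""
    assert n % 2 == 0
    h = (b - a) / n
    fs = [f(a + k * h) for k in range(n + 1)]
    return (b - a) / (3 * n) * (fs[0] + fs[n]
                                + 2 * sum(fs[2:n:2])    # even interior nodes
                                + 4 * sum(fs[1:n:2]))   # odd nodes
```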

One may also take interpolation splines of some fixed degree as a basis. The scheme explained above for constructing formulas for the approximate computation of integrals can be used in the multi-dimensional case as well.

## Numerical differentiation.

Formulas for numerical differentiation are obtained as results of differentiating interpolation formulas. Here, as a rule, certain a priori information is available about the function to be differentiated, related to its smoothness.

Let $ L _ {n} ( x) $ be a certain interpolation polynomial of a function $ f ( x) $, and let $ R _ {n} ( x) $ be the remainder in the interpolation formula

$$ f ( x) = L _ {n} ( x) + R _ {n} ( x) . $$

If in

$$ f^{(i)}(x) = L_{n}^{(i)}(x) + R_{n}^{(i)}(x) $$

the quantity $ R_{n}^{(i)}(x) $ is neglected, one obtains a formula for the approximate computation of the $ i $-th derivative of $ f(x) $:

$$ \tag{3} f^{(i)}(x) \approx L_{n}^{(i)}(x) . $$

Formulas for numerical differentiation, with interpolation at their basis, are obtained from (3) depending on the choice of $ L_{n}(x) $. In numerical differentiation one uses, as a rule, approximate values of the function at the nodes; the error of a numerical differentiation formula depends not only on the manner of interpolation and the interpolation step, but also on the errors in the function values used at the nodes. E.g., in the case of the linear approximation (1),

$$ \tag{4} f^{\prime}(x_{k}) \approx \frac{1}{h} [ f(x_{k+1}) - f(x_{k}) ] , $$

and the remainder $ R _ {1} ^ { \prime } ( x) $ has the representation

$$ R_{1}^{\prime}(x_{k}) = - \frac{1}{2} f^{\prime\prime}(\xi) h , \qquad \xi \in ( x_{k}, x_{k+1} ) , \quad h = x_{k+1} - x_{k} . $$

If $ f(x_{k+1}) $ and $ f(x_{k}) $ are known with respective errors $ \epsilon_{k+1} $, $ \epsilon_{k} $, $ \epsilon_{k+1} \neq \epsilon_{k} $, then the error in (4) contains the additional term $ ( \epsilon_{k+1} - \epsilon_{k} ) / h $, which decreases as the step $ h $ increases. When using formulas for numerical differentiation, the interpolation step must therefore be matched to the errors in the function values. In practice it hence often happens that a function known with some error on a dense grid is interpolated only on a coarser grid.
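The two competing error terms can be seen in a small experiment: the truncation error of (4) decreases with $ h $, while the contribution of data errors grows as $ h $ shrinks (a sketch; `noisy_diff` is a hypothetical helper modelling perturbed function values):

```python
import math

def forward_diff(f, x, h):
    """Formula (4): f'(x) ~ [f(x+h) - f(x)] / h; truncation error is O(h)."""
    return (f(x + h) - f(x)) / h

def noisy_diff(fk, fk1, h, eps_k=0.0, eps_k1=0.0):
    """Formula (4) applied to perturbed data; the extra error term
    is (eps_{k+1} - eps_k) / h, which grows as h shrinks."""
    return ((fk1 + eps_k1) - (fk + eps_k)) / h
```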

## Numerically solving integral equations.

The unknown function $ \phi(x) $ in the integral equation is replaced by some interpolation formula (an interpolation polynomial, an interpolation spline, etc.) with nodes $ x_{k} $, and approximate values $ \phi(x_{k}) $ are determined from the system obtained by replacing the independent variable $ x $ by the nodes $ x_{k} $.

E.g., for the linear Fredholm integral equation of the second kind

$$ \tag{5 } \phi ( x) = \lambda \int\limits _ { a } ^ { b } K ( x , s ) \phi ( s) d s + f ( x) $$

one can use the Lagrange form of the interpolation polynomial,

$$ \phi(x) = \sum_{k=0}^{n} l_{k}(x) \phi(x_{k}) + R_{n}(x) \equiv L_{n}(x) + R_{n}(x) , $$

where $ R _ {n} ( x) $ is the remainder and

$$ l_{k}(x) = \frac{\omega_{n}(x)}{\omega_{n}^{\prime}(x_{k}) ( x - x_{k} )} , \qquad \omega_{n}(x) = \prod_{k=0}^{n} ( x - x_{k} ) . $$

Replacing $ \phi ( x) $ in (5) by its interpolation polynomial $ L _ {n} ( x) $ and $ x $ by $ x _ {i} $, one obtains the linear system of equations

$$ \phi_{i} = \lambda \sum_{k=0}^{n} M_{k}(x_{i}) \phi_{k} + f(x_{i}) , $$

$$ M_{k}(x_{i}) = \int\limits_{a}^{b} K(x_{i}, s) l_{k}(s) \, ds , \qquad i = 0 \dots n , $$

for the determination of the approximate values $ \phi_{i} $ of $ \phi(x) $ at the points $ x_{i} $. In the case of non-linear integral equations the approximate values $ \phi_{i} $ have to be determined from the corresponding non-linear system.
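A worked instance (the kernel, right-hand side and nodes are chosen purely for illustration): for $ K(x, s) = xs $, $ f(x) = x $, $ \lambda = 1 $ on $ [0, 1] $ the exact solution is $ \phi(x) = \tfrac{3}{2} x $; since it is linear, two nodes with linear Lagrange basis functions recover it exactly, and the integrals $ M_{k}(x_{i}) $ are computed with Simpson's rule, which is exact for the quadratic integrands here.

```python
lam, a, b = 1.0, 0.0, 1.0
K = lambda x, s: x * s
f = lambda x: x
x_nodes = [0.0, 1.0]
l = [lambda s: 1.0 - s, lambda s: s]     # linear Lagrange basis l_0, l_1

def simpson3(g, a, b):
    """Three-point Simpson rule (exact for quadratics)."""
    m = (a + b) / 2.0
    return (b - a) / 6.0 * (g(a) + 4.0 * g(m) + g(b))

# M_k(x_i) = int_a^b K(x_i, s) l_k(s) ds
M = [[simpson3(lambda s, i=i, k=k: K(x_nodes[i], s) * l[k](s), a, b)
      for k in range(2)] for i in range(2)]

# Solve the 2x2 system (I - lam*M) phi = f(x_i) by Cramer's rule.
a11, a12 = 1.0 - lam * M[0][0], -lam * M[0][1]
a21, a22 = -lam * M[1][0], 1.0 - lam * M[1][1]
b1, b2 = f(x_nodes[0]), f(x_nodes[1])
det = a11 * a22 - a12 * a21
phi0 = (b1 * a22 - a12 * b2) / det
phi1 = (a11 * b2 - a21 * b1) / det
```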

## Numerically solving differential equations.

One of the possibilities for constructing numerical methods for solving differential equations consists of replacing the derivatives of the unknown functions by interpolation formulas for numerical differentiation, and in a number of cases also in replacing by interpolation other functions and expressions occurring in the equation.

Suppose one has the following formulas for numerical differentiation with equally-spaced nodes $ x_{k} = x_{0} + kh $:

$$ \tag{6} y^{\prime}(x_{k}) \approx \frac{y(x_{k+1}) - y(x_{k})}{h} = \frac{1}{h} \Delta y(x_{k}) , $$

$$ y^{\prime\prime}(x_{k}) \approx \frac{1}{h^{2}} [ y(x_{k-1}) - 2 y(x_{k}) + y(x_{k+1}) ] = \frac{1}{h^{2}} \Delta^{2} y(x_{k}) , $$

obtained by differentiation of the formulas of linear and quadratic interpolation (1) and (2), respectively. Then for a second-order ordinary differential equation

$$ F ( x , y , y ^ \prime , y ^ {\prime\prime} ) = 0 $$

one obtains, under certain additional conditions, taking (6) into account, the finite-difference equation

$$ F \left ( x _ {k} , y _ {k} ,\ \frac{1}{h} \Delta y _ {k} ,\ \frac{1}{h ^ {2} } \Delta ^ {2} y _ {k} \right ) = 0 . $$

This equation, together with the equations obtained from the additional conditions, involves the approximate values $ y_{k} $ of the solution $ y(x) $ at the nodes $ x_{k} $.
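As an illustration (with hypothetical names), the boundary value problem $ y'' = q(x) $, $ y(a) = y_{a} $, $ y(b) = y_{b} $ reduces via the second formula of (6) to a tridiagonal system, solvable by elimination:

```python
def solve_bvp_fd(q, a, b, ya, yb, n):
    """Finite-difference solution of y'' = q(x), y(a) = ya, y(b) = yb:
    (y_{k-1} - 2 y_k + y_{k+1}) / h^2 = q(x_k) at the interior nodes (n >= 2)."""
    h = (b - a) / n
    xs = [a + k * h for k in range(n + 1)]
    m = n - 1                               # number of interior unknowns
    rhs = [h * h * q(xs[k]) for k in range(1, n)]
    rhs[0] -= ya                            # move boundary values to the rhs
    rhs[-1] -= yb
    diag = [-2.0] * m                       # tridiagonal: off-diagonals are 1
    for i in range(1, m):                   # forward elimination
        w = 1.0 / diag[i - 1]
        diag[i] -= w
        rhs[i] -= w * rhs[i - 1]
    y = [0.0] * m                           # back substitution
    y[-1] = rhs[-1] / diag[-1]
    for i in range(m - 2, -1, -1):
        y[i] = (rhs[i] - y[i + 1]) / diag[i]
    return xs, [ya] + y + [yb]
```

For quadratic solutions the second-difference formula is exact, so the grid values coincide with the true solution.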

The reduction of partial differential equations to corresponding finite-difference equations is often also carried out using formulas for numerical differentiation.

Interpolation methods are applied to solve differential equations written in integral form. E.g., to find an approximate solution of the Cauchy problem

$$ \tag{7 } y ^ \prime = f ( x , y ) ,\ \ y ( x _ {0} ) = y _ {0} , $$

at the points $ x_{k} = x_{0} + kh $, $ k = 0, 1, \dots $, one uses difference formulas of the form

$$ y_{n+1} = y_{n} + h \sum_{j=1}^{n} B_{j} f(x_{n-j}, y_{n-j}) , $$

obtained by replacing the integrand in

$$ y(x_{n+1}) = y(x_{n}) + \int\limits_{x_{n}}^{x_{n+1}} f(x, y) \, dx $$

by an interpolation polynomial and subsequent integration. In particular, Adams' formula for first-order equations (7), Störmer's formula for second-order equations, etc., are obtained in this way.
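For instance, replacing the integrand by its linear interpolation polynomial at $ x_{n-1} $, $ x_{n} $ gives the standard two-step Adams (Adams–Bashforth) scheme $ y_{n+1} = y_{n} + \tfrac{h}{2} ( 3 f_{n} - f_{n-1} ) $; a sketch, started with one Euler step (a simple, if crude, starting procedure):

```python
def adams_bashforth_2(f, x0, y0, h, steps):
    """Two-step Adams scheme y_{n+1} = y_n + h*(3*f_n - f_{n-1})/2 for
    the Cauchy problem (7); the first step is a crude Euler step."""
    xs = [x0, x0 + h]
    ys = [y0, y0 + h * f(x0, y0)]
    for n in range(1, steps):
        fn = f(xs[n], ys[n])
        fnm1 = f(xs[n - 1], ys[n - 1])
        ys.append(ys[n] + h * (3.0 * fn - fnm1) / 2.0)
        xs.append(xs[n] + h)
    return xs, ys
```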

This approach makes it possible to construct numerical algorithms for a wide class of differential equations, including partial differential equations. The study of the solvability, accuracy and stability of solutions of the finite-difference equations that arise constitutes a fundamental and difficult part of the theory of the numerical solution of differential equations.

## Interpolation of operators and some general approaches to the construction of numerical methods.

The construction of numerical methods for solving mathematical problems written as $ A x = y $, where $ x $ and $ y $ are elements of certain sets $ X $ and $ Y $ and $ A $ is a given operator, consists of replacing $ X $, $ Y $ and $ A $, or only some of these three objects, by other objects that are convenient for calculating purposes. This replacement should be carried out in such a way that the solution to the new problem

$$ \widetilde{y} = \widetilde{A} \widetilde{x} , $$

consisting of finding $ \widetilde{y} $ or $ \widetilde{x} $, is in some sense close to the solution of the original problem. One of the methods for replacing $ A $ by an approximation $ \widetilde{A} $ is the use of interpolation of operators. The problem of the interpolation of operators has various formulations. A linear interpolation operator $ L_{1}(F; x) $ for a given operator $ F $ is written as

$$ \tag{8 } L _ {1} ( F ; x ) = \ F ( x _ {0} ) + F ( x _ {0} , x _ {1} ) ( x - x _ {0} ) , $$

where $ x _ {0} $, $ x _ {1} $ are the interpolation nodes and $ F( x _ {0} , x _ {1} ) $ is the first-order divided-difference operator. The latter is defined as the linear operator for which

$$ F ( x _ {0} , x _ {1} ) ( x _ {0} - x _ {1} ) = \ F ( x _ {0} ) - F ( x _ {1} ) . $$

The given definition of the divided-difference operator can be made concrete in a number of cases. Using linear interpolation (8), the "secant method" for the equation $ F ( x) = 0 $ can be written as

$$ x_{n+1} = x_{n} - F^{-1}(x_{n-1}, x_{n}) F(x_{n}) , $$

where $ F^{-1}(x_{n-1}, x_{n}) $ is the operator inverse to $ F(x_{n-1}, x_{n}) $.

The formulation of the problem of interpolation of functionals, which is of interest in the theory of approximation methods, is as follows. Let $ \{ \psi_{i}(x) \}_{i=0}^{n} $ be some fixed functionals, or classes of functionals, defined on $ X $. A functional $ L_{n}[F; x] $ is called an interpolation functional polynomial for a given functional $ F(x) $ and system of points $ \{ x_{k} \} $ in $ X $ if the relations

$$ L _ {n} [ F ; x _ {k} ] = F ( x _ {k} ) ,\ \ L _ {n} [ \psi _ {i} ; x ] \equiv \psi _ {i} ( x) , \ i = 0 \dots n , $$

hold.

Interpolation of functionals is used in the construction of approximate methods for computing continual integrals, in finding extrema of functionals, and in a number of other cases.

E.g., approximate interpolation formulas for computing continual integrals have the form

$$ \int\limits _ { X } F ( x) d \mu ( x) \approx \ \int\limits _ { X } L _ {n} [ F ; x ] d \mu ( x) , $$

where the integral over the interpolation polynomial $ L _ {n} [ F ; x ] $ with respect to a certain measure $ \mu $ can be computed exactly or can be reduced to a finite-dimensional integral. When $ X $ is the space $ C [ a , b ] $ of continuous functions on an interval $ [ a , b ] $, then $ L _ {1} [ F ; x ] $ can be represented by a Stieltjes integral,

$$ L_{1}[F; x] = F(x_{0}) + \int\limits_{a}^{b} \frac{x(\tau) - x_{0}(\tau)}{x_{1}(\tau) - x_{0}(\tau)} \, d_{\tau} F [ x_{0}(\cdot) + \chi(\cdot, \tau) ( x_{1}(\cdot) - x_{0}(\cdot) ) ] , $$

where $ x _ {0} ( t) $, $ x _ {1} ( t) $ are the interpolation nodes while

$$ \chi ( t , \tau ) = \ \left \{ \begin{array}{ll} 1, & \tau > t , \\ 0, & \tau \leq t . \\ \end{array} \right .$$

If $ F ( x) $ is a constant or a linear functional, then $ L _ {1} [ F ; x ] \equiv F ( x) $.

The use of interpolation in finding extremal values of functionals may be illustrated by two interpolation analogues of the gradient method for finding a local unconstrained minimum of a functional $ F(x) $ defined on some Hilbert space. The first analogue is obtained if, in the gradient method, $ \mathop{\rm grad} F(x_{n}) $ is replaced by $ F(x_{n-1}, x_{n}) $, i.e.

$$ \tag{9} x_{n+1} = x_{n} - \epsilon_{n} F(x_{n-1}, x_{n}) , \qquad \epsilon_{n} > 0 , \quad n = 1, 2, \dots . $$

The second analogue uses the gradient of an interpolation polynomial. From approximations $ x_{n-2} $, $ x_{n-1} $, $ x_{n} $ to an extremum $ x^{*} $ of $ F(x) $ one constructs the quadratic interpolation polynomial

$$ L_{2}[F; x] = F(x_{n}) + F(x_{n-1}, x_{n}) ( x - x_{n} ) + F(x_{n-2}, x_{n-1}, x_{n}) ( x - x_{n-1} ) ( x - x_{n} ) , $$

where $ F(x_{n-2}, x_{n-1}, x_{n}) $ is the second-order divided difference of $ F(x) $ with respect to $ x_{n-2} $, $ x_{n-1} $, $ x_{n} $. The new approximation $ x_{n+1} $ is determined by

$$ \tag{10} x_{n+1} = x_{n} - \epsilon_{n} \mathop{\rm grad} L_{2} [ F; x_{n} ] , \qquad \epsilon_{n} > 0 , \quad n = 2, 3, \dots . $$

The interpolation methods (9) and (10) use two, respectively three, initial approximations.
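In one dimension the second analogue takes a transparent form: the stationary point of the quadratic interpolation polynomial built on three approximations can be written down explicitly via divided differences (a sketch with an illustrative name; for a quadratic $ F $ it finds the exact minimizer in one step):

```python
def quad_min_step(F, x0, x1, x2):
    """Stationary point of the quadratic interpolation polynomial of F
    built on x0, x1, x2 (Newton form with divided differences)."""
    d01 = (F(x1) - F(x0)) / (x1 - x0)       # first-order divided difference
    d12 = (F(x2) - F(x1)) / (x2 - x1)
    d012 = (d12 - d01) / (x2 - x0)          # second-order divided difference
    # L_2'(x) = d12 + d012 * (2x - x1 - x2) = 0
    return (x1 + x2) / 2.0 - d12 / (2.0 * d012)
```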

The use of interpolation of operators and functionals in the construction of computational algorithms for solving concrete problems rests on interpolation formulas with a small error. Formulas of this kind must be constructed for individual classes of functionals and operators, taking the specific features of these classes into account.

#### References

[1] V.L. Goncharov, "The theory of interpolation and approximation of functions", Moscow (1954) (In Russian)

[2] I.S. Berezin, N.P. Zhidkov, "Computing methods", Pergamon (1973) (Translated from Russian)

[3] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations", MIR (1977) (Translated from Russian)

[4] V.I. Krylov, V.V. Bobkov, P.I. Monastyrnyi, "Numerical methods", 1–2, Moscow (1976–1977) (In Russian)

[5] S.B. Stechkin, Yu.N. Subbotin, "Splines in numerical mathematics", Moscow (1976) (In Russian)

[6] A.A. Samarskii, "Theorie der Differenzverfahren", Akad. Verlagsgesell. Geest u. Portig K.-D. (1984) (Translated from Russian)

[7] L.A. Yanovich, "Approximate computation of continual integrals with respect to Gaussian measures", Minsk (1976) (In Russian)

#### Comments

A detailed theoretical treatment of interpolation theory is provided by the monograph [a2], while applications of interpolation in numerical mathematics can be found in [a1].

#### References

[a1] F.B. Hildebrand, "Introduction to numerical analysis", McGraw-Hill (1974)

[a2] P.J. Davis, "Interpolation and approximation", Dover, reprint (1975) pp. 108–126

[a3] J.C. Mason (ed.), M.G. Cox (ed.), "Algorithms for approximation", Clarendon Press (1987)

**How to Cite This Entry:**

Interpolation in numerical mathematics.

*Encyclopedia of Mathematics.* URL: http://encyclopediaofmath.org/index.php?title=Interpolation_in_numerical_mathematics&oldid=47393