# Approximation of functions

The replacement, according to a definite rule, of a function $f$ by a function $\phi$ close to it in some sense and belonging to a set $\mathfrak N$ (the approximating set) that is prescribed in advance. It is assumed that $f$ is defined on a set $Q$ in $m$-dimensional Euclidean space (the real axis being a particular case) in which the approximation is carried out; $f$ can either be given explicitly in terms of elementary functions, or $f$ can be the solution of some equation. If only incomplete information about $f$ is available, the approximation problem is in fact a matter of approximating the whole class of functions defined by the information available.

In practice, the necessity to approximate functions arises in various situations: when a function has to be replaced by a smoother one, or by one that is simpler and more convenient in computations, or when a functional dependence must be established on the basis of experimental data, etc.

In the general problem of the approximation of functions it is usually possible to single out the following more particular problems: the choice of the approximating set $\mathfrak N$; the choice of the measure of the error of approximation; the choice of the method of approximation, e.g. the rule according to which a function $\phi$ in $\mathfrak N$ is made to correspond to $f$; and the investigation and estimation of the error of the approximation.

With respect to the choice of the approximating set $\mathfrak N$, apart from the necessity of ensuring the required accuracy of the approximation, one is guided by the aim to deal with structurally simple functions $\phi$ that are convenient in computations and that can be submitted to a priori specified conditions, e.g. related to smoothness.

The classical tools for approximation consist of algebraic polynomials (if $Q$ is a bounded closed set) and trigonometric polynomials (in the periodic case) in one or more variables. Their widespread application as approximating sets is due, in particular, to the fact that it is in principle possible to approximate any continuous function by means of algebraic or trigonometric polynomials with any degree of accuracy specified in advance. The accuracy of the approximation can be increased at the expense of increasing the degree of the polynomial; however, this complicates the approximating process and increases the difficulties of applying it. In practice one takes subspaces of algebraic or trigonometric polynomials of a fixed degree as approximating sets and one aims at obtaining the accuracy required while keeping the degrees of the polynomials used as low as possible. A more general, and at the same time more flexible, approximation tool is obtained by considering generalized polynomials

$$\phi (t) = c _ {1} \phi _ {1} (t) + \dots + c _ {N} \phi _ {N} (t) ,$$

where $\{ \phi _ {1} \dots \phi _ {N} \}$ is a system of linearly independent functions which should be chosen such that the conditions of the problem at hand and the a priori conditions on $\phi$ are taken into account.
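For instance, with a concrete system $\{ \phi _ {1} , \phi _ {2} , \phi _ {3} \}$, the coefficients $c _ {i}$ can be fitted to samples of $f$ by least squares. The following is a minimal sketch; the basis, sample points and function $f$ are all arbitrary illustrative choices, not prescribed by the theory.

```python
import numpy as np

# Generalized polynomial c1*phi1 + ... + cN*phiN fitted by least squares.
# The basis {1, t, sin t} and f(t) = exp(-t) are illustrative assumptions.
basis = [lambda t: np.ones_like(t), lambda t: t, np.sin]

t = np.linspace(0.0, 3.0, 50)
f = np.exp(-t)                        # function to approximate (example)

# Design matrix A[i, j] = phi_j(t_i); solve min ||A c - f||_2 for c.
A = np.column_stack([phi(t) for phi in basis])
c, *_ = np.linalg.lstsq(A, f, rcond=None)

approx = A @ c                        # the fitted generalized polynomial
max_err = np.max(np.abs(approx - f))  # uniform error on the sample grid
```

Any other linearly independent system (splines, exponentials, ...) would be handled the same way; only the columns of the design matrix change.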

In many problems splines (cf. Spline) have been found to be more natural and more convenient from the viewpoint of computation than classical polynomials. If

$$\tag{1 } \Delta _ {N} = \ \{ a = \tau _ {0} < \tau _ {1} < \dots < \tau _ {N} = b \} ,\ \ N \geq 2 ,$$

is a fixed subdivision of the interval $[ a , b ]$, then a (polynomial) spline of degree $r$ and defect $k$ ($k = 1 \dots r$) with respect to $\Delta _ {N}$ is a function $s$ composed of algebraic polynomials of degree $r$ "tied together" at the points $\tau _ {1} \dots \tau _ {N-1}$ in such a way that on the whole interval $[ a , b ]$ the spline $s$ is continuous together with its derivatives up to and including order $r - k$. Thus $s \in C ^ {r-k} [ a , b ]$ and on each interval $( \tau _ {i-1} , \tau _ {i} )$ ($i = 1 \dots N$) $s$ is an algebraic polynomial of degree at most $r$. For instance, a polygonal line with corners at the nodes (knots, joints) $\tau _ {i}$ is a spline of degree one and defect one; a function $s$ that is continuously differentiable on $[ a , b ]$ and that coincides on $[ \tau _ {i-1} , \tau _ {i} ]$, $i = 1 \dots N$, with a cubic polynomial is a cubic spline of defect 2, etc. Splines in two or more variables are similarly defined. Having finite smoothness properties, splines have a greater local flexibility than polynomials; a change in the values of a spline in an interval $( \alpha , \beta )$ does not much affect (if at all) its behaviour outside $( \alpha , \beta )$. Apart from the simplicity of their implementation in computers, the advantages of splines show up, in particular, when the information about the function $f$ to be approximated has a discrete character, for instance, when the values of $f$ and, possibly, of some of its derivatives, are given at some points.
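The simplest case above, a spline of degree one and defect one (a polygonal line) through given knot values, can be sketched directly; `numpy.interp` evaluates exactly such a piecewise-linear interpolant. The knots and data are illustrative.

```python
import numpy as np

# Polygonal spline (degree one, defect one) through prescribed knot values.
tau = np.linspace(0.0, np.pi, 6)    # subdivision a = tau_0 < ... < tau_N = b
vals = np.sin(tau)                  # prescribed values at the knots (example)

t = np.linspace(0.0, np.pi, 200)
s = np.interp(t, tau, vals)         # evaluate the interpolating polygonal line

# Since the data come from sin, the spline stays uniformly close to it
# (error O(h^2) for a mesh of width h).
err = np.max(np.abs(s - np.sin(t)))
```

Changing one knot value `vals[i]` alters the spline only on the two adjacent intervals, which illustrates the local flexibility mentioned above.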

If $f$ has singularities or if the approximation is carried out on an unbounded domain, a convenient tool for approximation consists of rational fractions $p / q$, where $p$ and $q$ are algebraic polynomials (rational approximation, Padé approximation). Non-periodic functions defined on the whole real axis can also be approximated by entire functions of exponential type.

The measure of the error of approximation, $\mu ( f , \phi )$, is usually chosen by taking into account the conditions of the concrete problem and the information available about the function to be approximated. Most frequently it is a matter of choosing a function space containing $f$ in whose metric it is expedient to estimate the error of approximation. If

$$\mu ( f , \phi ) = \ \sup _ {t \in Q } \ | f (t) - \phi (t) | ,$$

then one deals with uniform, or Chebyshev, approximation, while if

$$\mu ( f , \phi ) = \int\limits _ { Q } | f (t) - \phi (t) | ^ {p} \ d t ,\ p > 0 ,$$

then one has mean-power approximation, or $L _ {p}$- approximation, which in the case of $p = 1$ is called approximation in the mean. A particularly important case is that of $p = 2$, mean-square approximation: the error of the best approximation of the function $f$ by a finite-dimensional subspace can be expressed exactly in terms of certain determinants. In some problems the requirements of proximity (closeness) of the functions $f$ and $\phi$ are different at different points; in order to take this lack of homogeneity into account, a weight function $\rho (t) \geq 0$ is introduced and weighted approximation is considered, where the measure of the error of approximation is given by

$$\mu ( f , \phi ) = \ \sup _ {t \in Q } \ | f (t) - \phi (t) | \rho (t)$$

or

$$\mu ( f , \phi ) = \ \int\limits _ { Q } | f (t) - \phi (t) | ^ {p} \rho (t) d t ,\ \ p > 0 .$$

The weight function also allows one to keep the error finite when, for instance, $f$ is unbounded. If the error of approximation has to take into account the closeness of $f$ and $\phi$ only at some individual points $t _ {k}$, $k = 1 \dots N$, in $Q$, then as $\mu ( f , \phi )$ one may take one of the quantities

$$\max _ {1 \leq k \leq N } \ | f ( t _ {k} ) - \phi ( t _ {k} ) |$$

or

$$\sum _ { k=1 } ^ { N } | f ( t _ {k} ) - \phi ( t _ {k} ) | ^ {p} ,\ \ p > 0 ,$$

into which weight coefficients can also be introduced.
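As a concrete illustration of the measures above (with variable names of our own choosing), each can be computed on a discretization of $Q = [ 0 , 1 ]$; a trapezoid sum stands in for the integral.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1001)
f = np.sin(2 * np.pi * t)           # function to approximate (example)
phi = np.zeros_like(t)              # trivial approximant, for illustration

# Uniform (Chebyshev) measure: sup_{t in Q} |f(t) - phi(t)|
mu_sup = np.max(np.abs(f - phi))

# L_p measure: integral over Q of |f - phi|^p, via the trapezoid rule
p = 2
e = np.abs(f - phi) ** p
mu_Lp = np.sum((e[1:] + e[:-1]) / 2 * np.diff(t))

# Discrete measures at individual points t_k
tk = np.array([0.1, 0.25, 0.6])
mu_max = np.max(np.abs(np.sin(2 * np.pi * tk)))
mu_sum = np.sum(np.abs(np.sin(2 * np.pi * tk)) ** p)
```

Weights would enter as an extra factor `rho(t)` inside the max or the sum, exactly as in the displayed formulas.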

When deciding according to which rule the approximating function $\phi (t) = \phi ( f , t )$ should be chosen from the set $\mathfrak N$ (when choosing the method of approximation) it is evident that one should strive for the highest possible accuracy of approximation and at the same time for simplicity in the construction of $\phi ( f , t )$, taking account of the information available about $f$. The first requirement leads to a function $\phi _ {f}$ in $\mathfrak N$ that is "nearest" to $f$, i.e. $\phi _ {f}$ is such that

$$\mu ( f , \phi _ {f} ) = \ \inf _ {\phi \in \mathfrak N } \ \mu ( f , \phi ) .$$

Here immediately questions arise about the existence and the uniqueness of such a function $\phi _ {f}$ (the function of best approximation), and also about its characteristic properties. Existence is ensured if $\mathfrak N$ is a closed locally compact set, and in particular if $\mathfrak N$ is a finite-dimensional subspace. Uniqueness depends both on the properties of the approximating set $\mathfrak N$ (see Haar condition; Chebyshev system) and on the metric which defines $\mu ( f , \phi )$. There are a number of well-known necessary and sufficient conditions which should be satisfied by the function $\phi _ {f}$ of best approximation in a variety of cases (see Polynomial of best approximation; Chebyshev theorem; Markov criterion). However, as a rule, these criteria do not yield effective algorithms to construct $\phi _ {f} (t)$. Hence it is of great importance to have methods available which, on the basis of the information about the function $f$ to be approximated, allow one to construct effectively some function $\phi ( f , t )$ in $\mathfrak N$ yielding an acceptable approximation. Here, in the first place, one ought to mention linear methods (these satisfy $\phi ( \alpha _ {1} f _ {1} + \alpha _ {2} f _ {2} , t ) = \alpha _ {1} \phi ( f _ {1} , t ) + \alpha _ {2} \phi ( f _ {2} , t )$), to which, in particular, interpolation methods belong: having prescribed the points $t _ {1} \dots t _ {N}$ in $Q$, one chooses $\phi ( f , t )$ from those functions $\phi$ in $\mathfrak N$ that satisfy the interpolation condition

$$\tag{2 } \phi ( t _ {k} ) = \ f ( t _ {k} ) ,\ \ k = 1 \dots N .$$

If $\mathfrak N$ is a linear manifold and if it contains a system of functions $\phi _ {1} \dots \phi _ {N}$ such that $\phi _ {i} ( t _ {i} ) = 1$ and $\phi _ {i} ( t _ {k} ) = 0$ when $k \neq i$, then the function

$$\phi ( f , t ) = \ \sum _ { k=1 } ^ { N } f ( t _ {k} ) \phi _ {k} (t)$$

belongs to $\mathfrak N$ and satisfies (2); it yields an interpolation method of approximation and is obviously linear. It may be required to choose the function $\phi ( f , t )$ in such a way that at the points $t _ {k}$ it coincides not only with $f$, but also with some of its derivatives; this case is called interpolation with multiple nodes. If $Q = [ a , b ]$ and $a \leq t _ {1} < \dots < t _ {N} \leq b$, then there is a unique algebraic polynomial of degree at most $N - 1$, and in the periodic case ($b - a = 2 \pi$, $N = 2 n - 1$, $t _ {N} < b$) a unique trigonometric polynomial of degree $n - 1$, coinciding with $f$ at the points $t _ {k}$. Multiple interpolation is achieved with Hermite polynomials, a particular case of which is the Taylor polynomial; in this case the values of a function and of its first $n$ consecutive derivatives at a point are interpolated by an algebraic polynomial of degree $n$.
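For algebraic polynomials, a system $\phi _ {k}$ with $\phi _ {i} ( t _ {i} ) = 1$ and $\phi _ {i} ( t _ {k} ) = 0$ for $k \neq i$ is realized by the Lagrange basis. A minimal sketch (the node set is an arbitrary example):

```python
import numpy as np

nodes = np.array([0.0, 0.5, 1.0, 1.5])     # interpolation points t_k (example)

def lagrange_basis(k, t):
    """Evaluate phi_k(t) = prod_{i != k} (t - t_i) / (t_k - t_i)."""
    out = np.ones_like(t, dtype=float)
    for i, ti in enumerate(nodes):
        if i != k:
            out = out * (t - ti) / (nodes[k] - ti)
    return out

def interpolate(f, t):
    """The linear interpolation method: sum_k f(t_k) phi_k(t)."""
    return sum(f(nodes[k]) * lagrange_basis(k, t) for k in range(len(nodes)))

t = np.linspace(0.0, 1.5, 7)
p = interpolate(np.exp, t)                  # degree-3 interpolant of exp
```

Linearity of the method is visible in the formula: the data $f ( t _ {k} )$ enter only as coefficients of the fixed basis functions.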

Interpolation with splines has its peculiarities connected with the choice of the interpolation nodes and with the boundary conditions ensuring the existence and uniqueness of the interpolating spline. For instance, a spline $s$ of degree $r \geq 2$ and defect 1 with respect to the subdivision (1) taking prescribed values at $N$ different points $t _ {i}$ of the interval $( a , b )$ such that $\tau _ {i-1} \leq t _ {i} \leq \tau _ {i}$, $i = 1 \dots N$, exists and is unique if the boundary conditions are given in terms of $m _ {a}$ conditions on $s ^ {( \nu ) } (a)$, $0 \leq \nu \leq r - 1$, and $m _ {b}$ conditions on $s ^ {( \mu ) } (b)$, $0 \leq \mu \leq r - 1$, with $m _ {a} + m _ {b} = r$. Functions $f \in C ^ {2r-k-1} [ a , b ]$ can be put in one-to-one correspondence with splines $s ( f , t )$ of degree $2 r - 1$ and defect $k$, $1 \leq k \leq r$, with respect to the subdivision (1) by requiring that $s ^ {( \nu ) } ( f , \tau _ {i} ) = f ^ {( \nu ) } ( \tau _ {i} )$, $\nu = 0 \dots k - 1$; $i = 1 \dots N$, where, if $k < r$, one also has to impose some boundary conditions. When $k = r$, this spline is called a Hermite or local spline, because the behaviour of the spline in an interval $( \tau _ {i-1} , \tau _ {i} )$ is determined by the values of the function $f$ and of its derivatives $f ^ { ( \nu ) }$, $\nu = 1 \dots k - 1$, at the points $\tau _ {i-1}$ and $\tau _ {i}$.
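The local (Hermite) case $k = r = 2$, i.e. a spline of degree 3 and defect 2, can be sketched directly: on each knot interval the cubic is determined solely by the values and first derivatives of $f$ at the two endpoints. The function name and the test data below are our own illustrative choices.

```python
import numpy as np

def hermite_spline(tau, fv, fd, t):
    """Piecewise-cubic Hermite spline through values fv and slopes fd at knots tau."""
    t = np.asarray(t, dtype=float)
    # Index of the knot interval containing each evaluation point
    i = np.clip(np.searchsorted(tau, t, side="right") - 1, 0, len(tau) - 2)
    h = tau[i + 1] - tau[i]
    u = (t - tau[i]) / h                    # local coordinate in [0, 1]
    # Standard cubic Hermite basis functions
    h00 = 2*u**3 - 3*u**2 + 1
    h10 = u**3 - 2*u**2 + u
    h01 = -2*u**3 + 3*u**2
    h11 = u**3 - u**2
    return h00*fv[i] + h10*h*fd[i] + h01*fv[i+1] + h11*h*fd[i+1]

tau = np.linspace(0.0, np.pi, 5)            # the subdivision Delta_N (example)
tt = np.linspace(0.0, np.pi, 100)
s_vals = hermite_spline(tau, np.sin(tau), np.cos(tau), tt)
```

Because no global system has to be solved, changing $f$ on one interval affects the spline only there, which is what "local" refers to.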

In the approximation of functions an important role is also played by linear methods constructed on the basis of an expansion of the function in a Fourier series with respect to an orthogonal system. In particular, in the periodic case a widely used approximation tool consists of Fourier sums, and their various means (the case of the trigonometric orthogonal system) (see Approximation of functions, linear methods).

The investigation and estimation of the error of approximation is important from the practical point of view and at the same time is the branch of the approximation of functions that is richest in ideas. More precisely, the development of methods for estimating the error, the investigation of its dependence on the smoothness of the approximated function, as well as the study and comparison of the approximation properties of various approximation methods, have led to the creation of the approximation theory of functions, one of the most rapidly developing branches of mathematical analysis.

The foundation of the approximation theory of functions was laid in the papers of P.L. Chebyshev in 1854–1859 on best uniform approximation of continuous functions by polynomials and rational fractions, and in the work of K. Weierstrass, who proved in 1885 that with any continuous function on an interval $[ a , b ]$, or on the whole real axis and having period $2 \pi$, there can be associated a sequence of algebraic (respectively, trigonometric) polynomials $P _ {n} ( f , t )$ of degree $n = 1 , 2 \dots$ such that, as $n \rightarrow \infty$,

$$\mu ( f , P _ {n} (f) ) = \ \max _ {t \in Q } \ | f (t) - P _ {n} ( f , t ) | \rightarrow 0 ,$$

where $Q$ is $[ a , b ]$, respectively the whole real axis. Similar assertions hold in the case when the measure of the error of approximation is defined by an integral metric, as well as in the case of functions of several variables. Particular importance has been given to the investigation of the speed with which the sequence $\mu ( f , P _ {n} (f) )$ decreases in dependence on the properties of the approximated function and the choice of the approximating polynomials $P _ {n} ( f , t )$. The most attention has been paid to the study of best approximation problems, and also to that of linear methods of approximation with the property that for a given $f$ the polynomial $P _ {n} ( f , t )$ can be constructed effectively. An important stage in the development of the approximation theory of functions is connected with the names of Ch.J. de la Vallée-Poussin, D. Jackson, and S.N. Bernstein [S.N. Bernshtein], who initiated the investigation of the relation between the speed of decrease of the error of approximation when approximating the function $f$ by means of suitably chosen polynomials $P _ {n} ( f , t )$ of degree $n$ (as $n \rightarrow \infty$), and the difference-differential properties of $f$. It turns out that in many cases these properties of $f$, e.g. $f$ having derivatives, smoothness of those derivatives, etc., can be characterized by the sequence of approximating polynomials and by the behaviour of the corresponding error of approximation (see Approximation of functions, direct and inverse theorems). This yields a new, constructive characterization of continuous and differentiable functions. In the first third of the 20th century this kind of problem dominated approximation theory; for this reason the field was also thought of as the constructive theory of functions.
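Weierstrass' theorem admits a constructive illustration via Bernstein polynomials on $[ 0 , 1 ]$; this is one classical choice of the polynomials $P _ {n}$, and the test function below is our own example of a continuous but non-differentiable $f$.

```python
import numpy as np
from math import comb

def bernstein(f, n, t):
    """Bernstein polynomial B_n(f) at points t in [0, 1]."""
    t = np.asarray(t, dtype=float)
    return sum(comb(n, k) * t**k * (1 - t)**(n - k) * f(k / n)
               for k in range(n + 1))

f = lambda t: np.abs(t - 0.5)       # continuous, not differentiable at 0.5
t = np.linspace(0.0, 1.0, 201)

# Sup-norm errors on the grid for increasing degrees: they decrease,
# though slowly (roughly like n^{-1/2} for this f).
errors = [np.max(np.abs(bernstein(f, n, t) - f(t))) for n in (4, 16, 64)]
```

The slow decrease for this non-smooth $f$ is exactly the kind of phenomenon the direct and inverse theorems quantify: the rate of decay of the error reflects the smoothness of $f$.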

In the 1930s and 1940s there appeared papers by A.N. Kolmogorov, J. Favard and S.M. Nikol'skii which initiated a new direction of research connected with the approximation of classes of functions by finite-dimensional subspaces and with obtaining accurate estimates of the error, using difference-differential properties defining the class. The objective is to obtain the quantities

$$\sup _ {f \in \mathfrak M } \ \mu ( f , P _ {N} (f) ) ,$$

where $\mu ( f , \phi )$ is the chosen measure of the approximation error, $\mathfrak M$ is a class of functions and $P _ {N} ( f , t )$ is the approximating (generalized) polynomial whose coefficients are determined by the method of approximation. Results of this kind make it possible to compare approximation methods from the point of view of their approximating capabilities and to formulate the problem, important in applications, of determining for a given class of functions the optimal (best) approximating set (of fixed dimension $N$). Research in this direction, based both on the properties of specific approximation methods and on advanced results of functional analysis, turned out to be also very fertile in ideas, because, by proceeding in this way, new facts about the mutual connections between many extremal problems of various types were discovered, making it possible to obtain deep and delicate relations in the theory of functions. Owing to this it was possible to solve conclusively a number of extremal problems concerning the best approximation of important classes of functions (see Approximation of functions, extremal problems in function classes).

## Some other aspects of the approximation of functions.

Additional restrictions can be put on the approximating function $\phi (t) = \phi ( f , t )$ in $\mathfrak N$. If they are not connected with the function $f$ (for instance, restrictions on, or relations between, the coefficients of the approximating polynomial), then it is really a matter of specifying $\mathfrak N$ more precisely. A new situation arises if the restrictions on $\phi ( f , t )$ are connected with the approximated function $f$; one of the interesting cases is that of one-sided approximation. Here $\phi ( f , t )$ in $\mathfrak N$ is required to satisfy the inequality $\phi ( f , t ) \leq f (t)$ (or $\geq$) and the error is estimated in an integral metric.

In applied problems, besides the case of explicitly given functions, one may have to approximate curves and surfaces which are given only in parametric form; as tools of approximation one can use, for instance, parametric splines. It is most natural to measure the error of approximation by means of the Hausdorff distance, which properly takes into account the geometric proximity of such objects. For instance, for two curves $l _ {1}$ and $l _ {2}$, the Hausdorff distance is defined by

$$r ( l _ {1} , l _ {2} ) = \ \max \ \left \{ \max _ {P \in l _ {1} } \ \mathop{\rm min} _ {Q \in l _ {2} } \ \rho ( P , Q ) ,\ \max _ {P \in l _ {2} } \ \mathop{\rm min} _ {Q \in l _ {1} } \ \rho ( P , Q ) \right \} ,$$

where $\rho ( P , Q )$ is the Euclidean (or any other) distance between the points $P$ and $Q$. The Hausdorff distance is also preferable as the measure of the error of approximation in certain cases of the approximation of functions, for instance when a discontinuous function has to be approximated by a smooth one.
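The formula above can be sketched directly for two curves discretized as point samples; the sampling density and the test curves (two concentric circles, for which the distance equals the difference of the radii) are our own illustrative choices.

```python
import numpy as np

def hausdorff(l1, l2):
    """Hausdorff distance between two point-sampled curves of shape (n, 2)."""
    # Pairwise Euclidean distances rho(P, Q)
    d = np.linalg.norm(l1[:, None, :] - l2[None, :, :], axis=2)
    # max over P in l1 of min over Q in l2, and symmetrically; take the larger
    return max(d.min(axis=1).max(), d.min(axis=0).max())

theta = np.linspace(0.0, 2 * np.pi, 400)
circle1 = np.column_stack([np.cos(theta), np.sin(theta)])   # radius 1
circle2 = 2 * circle1                                       # radius 2
```

For dense samplings this discrete value approximates the Hausdorff distance between the underlying curves; the error is controlled by the sampling step.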

The solution of a number of problems in the theory of approximation of functions is closely connected with the study of extremal properties of polynomials in one or another concrete system of functions (inequalities for the derivatives of polynomials, polynomials least deviating from zero, etc.). In particular, the proofs of inverse theorems in the theory of approximation of functions strongly rely on inequalities giving estimates of the norm (or the value at a fixed point) of some derivative of an algebraic or trigonometric polynomial in terms of certain characteristics of the polynomial itself. In this direction a number of precise results are known, and these have generalizations to entire functions.

The problem concerning an algebraic polynomial (with prescribed leading coefficient) least deviating from zero in the metric of $C$ or $L _ {p}$ on the interval $[ a , b ]$, and the equivalent problem concerning the best approximation of the function $t ^ {n}$ by a polynomial of degree at most $n - 1$, was investigated by Chebyshev (for $C$) and by his students (for $L _ {1}$). The solution is given by the Chebyshev polynomials of the first kind $( C )$ and the second kind $( L _ {1} )$ and by the Legendre polynomials $( L _ {2} )$, which have wide applications both in theoretical and applied investigations. A number of results are known for the more general case when all coefficients of the polynomial are subjected to some restrictions. The problem concerning a mono-spline of minimal norm, equivalent to that of finding the best approximation of the function $t ^ {n}$ by splines of degree $n - 1$ with free knots, is of special importance in view of the fact that in a number of cases the problem of obtaining a best quadrature formula reduces to it.
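The Chebyshev solution in the metric of $C$ can be checked numerically on $[ - 1 , 1 ]$: the monic polynomial of degree $n$ least deviating from zero is $2 ^ {1-n} T _ {n} (t)$, with deviation $2 ^ {1-n}$, while any other monic polynomial, e.g. $t ^ {n}$ itself, deviates more. The degree and grid below are arbitrary.

```python
import numpy as np

n = 5
t = np.linspace(-1.0, 1.0, 2001)

# Monic Chebyshev polynomial 2^{1-n} T_n(t), using T_n(t) = cos(n arccos t)
cheb = np.cos(n * np.arccos(t)) / 2 ** (n - 1)
dev_cheb = np.max(np.abs(cheb))         # equals 2^{1-n}

# For comparison: the monic polynomial t^n deviates from zero by 1
dev_tn = np.max(np.abs(t ** n))
```

Equivalently, $t ^ {n} - 2 ^ {1-n} T _ {n} (t)$ is the polynomial of degree at most $n - 1$ of best uniform approximation to $t ^ {n}$ on $[ - 1 , 1 ]$.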

For the approximation of functions in the complex plane see Approximation of functions of a complex variable.

How to Cite This Entry:
Approximation of functions. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Approximation_of_functions&oldid=45201
This article was adapted from an original article by N.P. Korneichuk (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.