# Approximation of functions, direct and inverse theorems

Theorems and inequalities establishing a connection between the difference-differential properties of the function to be approximated and the magnitude (and behaviour) of the error of approximation by various methods. Direct theorems give an estimate of the error of approximation of a function in terms of its smoothness properties (the existence of derivatives of a given order, the modulus of continuity of $f$ or of some of its derivatives, etc.). In the case of best approximation by polynomials, direct theorems are also known as Jackson-type theorems, together with their many generalizations and refinements (see Jackson inequality and Jackson theorem). Inverse theorems characterize the difference-differential properties of a function in terms of the rapidity with which its errors of best, or any other, approximation tend to zero. The problem of obtaining inverse theorems in the approximation of functions was first stated, and in some cases solved, by S.N. Bernstein [S.N. Bernshtein]. A comparison of direct and inverse theorems sometimes makes it possible to characterize completely the class of functions having specific smoothness properties using, for instance, sequences of best approximations.

The connection between direct and inverse theorems is simplest in the periodic case. Let $\widetilde{C}$ be the space of $2 \pi$-periodic continuous functions on the whole real axis with norm

$$\| f \| _ {\widetilde{C} } = \ \max _ { t } \ | f (t) | ;$$

let

$$E ( f , T _ {n} ) = \ \inf _ {\phi \in T _ {n} } \ \| f - \phi \| _ {\widetilde{C} }$$

be the best approximation of a function $f$ in $\widetilde{C}$ by the subspace $T _ {n}$ of trigonometric polynomials of degree at most $n$, let $\omega ( f , \delta )$ be the modulus of continuity of $f$, and let $\widetilde{C} {} ^ {r}$, $r = 1 , 2, \dots$ be the set of functions in $\widetilde{C}$ ($\widetilde{C} {} ^ {0} = \widetilde{C}$) that are $r$ times continuously differentiable on the whole real axis. A direct theorem states: If $f \in \widetilde{C} {} ^ {r}$, then

$$\tag{1 } E ( f , T _ {n-1} ) \leq \ \frac{M}{n ^ {r} } \omega \left ( f ^ { (r) } , \frac{1}{n} \right ) ,$$

$$n = 1 , 2, \dots; \ \ r = 0 , 1, \dots,$$

where the constant $M$ does not depend on $n$. A stronger assertion is: It is possible to indicate a sequence of linear methods $U _ {n}$, $n = 0 , 1, \dots$ associating a polynomial $U _ {n} ( f , t ) \in T _ {n}$ to a function $f (t)$ in $\widetilde{C}$ and such that for $f \in \widetilde{C} {} ^ {r}$ the error $\| f - U _ {n} (f) \| _ {\widetilde{C} }$ is bounded by the right-hand side of (1). An inverse theorem states that for $f \in \widetilde{C}$

$$\tag{2 } \omega ( f , \delta ) \leq \ M \delta \sum _ { n=1 } ^ { {[ } 1 / \delta ] } E ( f , T _ {n-1} ) ,\ \ \delta > 0 ,$$

where $M$ is an absolute constant and $[ 1 / \delta ]$ is the integer part of $1 / \delta$. If for some positive integer $r$ the series

$$\sum _ { n=1 } ^ \infty n ^ {r-1} E ( f , T _ {n-1} )$$

converges, then it follows that $f \in \widetilde{C} {} ^ {r}$ and that, analogously to (2), $\omega ( f ^ { (r) } , 1 / n )$ can be estimated in terms of $E ( f , T _ {n-1} )$, $n = 1 , 2 ,\dots$. These estimates imply, in particular, that if

$$E ( f , T _ {n-1} ) = \ O ( n ^ {- r - \alpha } ) ,\ \ r = 0 , 1, \dots; \ \ 0 < \alpha < 1 ,$$

then $f \in \widetilde{C} {} ^ {r}$ and $f ^ { (r) }$ satisfies for $0 < \alpha < 1$ the Hölder inequality

$$\tag{3 } | f ^ { (r) } ( t ^ \prime ) - f ^ { (r) } ( t ^ {\prime\prime} ) | \leq K \ | t ^ \prime - t ^ {\prime\prime} | ^ \alpha ,$$

whereas for $\alpha = 1$ they satisfy the Zygmund inequality

$$\tag{4 } \left | f ^ { (r) } ( t ^ \prime ) - 2 f ^ { (r) } \left ( \frac{t ^ \prime + t ^ {\prime\prime} }{2} \right ) + f ^ { (r) } ( t ^ {\prime\prime} ) \right | \leq K | t ^ \prime - t ^ {\prime\prime} | .$$

Denoting this class of functions by $K \widetilde{H} {} ^ {r + \alpha }$, one obtains a constructive characteristic for it: $f \in K \widetilde{H} {} ^ {r + \alpha }$ if and only if

$$E ( f , T _ {n-1} ) = \ O ( n ^ {- r - \alpha } ) .$$
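For $r = 0$ and $0 < \alpha < 1$, the passage from the hypothesis $E ( f , T _ {n-1} ) \leq C n ^ {- \alpha }$ to the Hölder estimate (3) is a direct summation in (2):

$$\omega ( f , \delta ) \leq \ M \delta \sum _ { n=1 } ^ { {[ } 1 / \delta ] } C n ^ {- \alpha } \leq \ M C _ {1} \delta \left [ \frac{1}{\delta } \right ] ^ {1 - \alpha } \leq \ K \delta ^ \alpha ,$$

since $\sum _ {n=1} ^ {N} n ^ {- \alpha } \leq C _ {1} N ^ {1 - \alpha }$ for $0 < \alpha < 1$.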

A $2 \pi$-periodic function $f$ is infinitely differentiable on the whole real axis if and only if

$$\lim\limits _ {n \rightarrow \infty } \ n ^ {r} E ( f , T _ {n-1} ) = 0$$

for all $r = 1 , 2 ,\dots$.
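Both directions of this criterion can be read off from the theorems above. If $f \in \widetilde{C} {} ^ {r+1}$ for every $r$, then (1) gives

$$n ^ {r} E ( f , T _ {n-1} ) \leq \ \frac{M}{n} \omega \left ( f ^ { (r+1) } , \frac{1}{n} \right ) \rightarrow 0 ;$$

conversely, if $n ^ {s} E ( f , T _ {n-1} ) \rightarrow 0$ for every $s$, then for each $r$ the terms of the series $\sum n ^ {r-1} E ( f , T _ {n-1} )$ are $o ( n ^ {-2} )$ (take $s = r + 1$), so the series converges and $f \in \widetilde{C} {} ^ {r}$.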

Similar results hold for the approximation of periodic functions in the metric of $L _ {p} ( 0 , 2 \pi )$ ($1 \leq p < \infty$), and also for (not necessarily periodic) functions defined on the whole real axis when they are approximated by entire functions of finite degree (exponential type). Direct and inverse theorems are also known for $f \in \widetilde{C} {} ^ {r}$ in which the modulus of smoothness $\omega _ {k} ( f , \delta )$ of order $k = 1 , 2, \dots$ of $f$ (or of some of its derivatives) serves as the difference-differential characteristic.
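As a numerical sketch of the direct theorem (1) in the periodic case (an illustration, not part of the original article): the function $f (t) = | \sin t |$ is $2 \pi$-periodic with $\omega ( f , \delta ) \leq \delta$, so (1) with $r = 0$ predicts $E ( f , T _ {n-1} ) = O ( 1 / n )$. The script below uses the classical Fourier expansion of $| \sin t |$ rather than true best approximants; the sup-norm error of the degree-$2m$ partial sum equals the tail $2 / ( \pi ( 2 m + 1 ) )$, which already attains the predicted order.

```python
import numpy as np

# f(t) = |sin t| has the Fourier expansion
#   |sin t| = 2/pi - (4/pi) * sum_{k>=1} cos(2kt) / (4k^2 - 1).
# The partial sum with k <= m is a trigonometric polynomial of degree 2m;
# its sup-norm error is the tail 2/(pi*(2m + 1)), attained at t = 0.

def partial_sum_error(m, grid=20001):
    """Sup-norm error of the degree-2m Fourier partial sum of |sin t|."""
    t = np.linspace(0.0, 2.0 * np.pi, grid)
    f = np.abs(np.sin(t))
    s = np.full_like(t, 2.0 / np.pi)
    for k in range(1, m + 1):
        s -= (4.0 / np.pi) * np.cos(2 * k * t) / (4.0 * k * k - 1.0)
    return np.max(np.abs(f - s))

for m in (5, 10, 20, 40):
    print(2 * m, partial_sum_error(m), 2.0 / (np.pi * (2 * m + 1)))
```

Since best approximants can only improve on the partial sums, $E ( f , T _ {n-1} ) \leq 2 / ( \pi n )$ here, in agreement with the Jackson order $O ( 1 / n )$ for a Lipschitz function.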

The situation is different in the case of approximation on a finite interval. Let $C = C [ a , b ]$ be the space of continuous functions on $[ a , b ]$, with norm

$$\| f \| _ {C} = \ \max _ {a \leq t \leq b } \ | f (t) | .$$

Let $C ^ {r} = C ^ {r} [ a , b ]$ be the set of the functions that are $r$ times continuously differentiable on $[ a , b ]$, $C ^ {0} = C$, and let $K H ^ {r + \alpha } [ a , b ]$ be the class of functions defined by the inequalities (3) and (4) with $t ^ \prime , t ^ {\prime\prime} \in [ a , b ]$. For the best approximation,

$$E ( f , A _ {n-1} ; a , b ) = \ \inf _ {p \in A _ {n-1} } \ \| f - p \| _ {C} ,\ \ n = 1 , 2, \dots$$

of a function $f \in C ^ {r}$ by the subspace $A _ {n-1}$ of algebraic polynomials of degree at most $n - 1$ one has an estimate of the form (1) in terms of the modulus of continuity of $f ^ { (r) }$ on $[ a , b ]$; however, the converse, analogous to that of the periodic case (an inequality of the form (2)), now holds only on intervals contained in $( a , b )$. For instance, if

$$\tag{5 } E ( f , A _ {n-1} ; a , b ) \leq M n ^ {- r - \alpha } ,$$

$$r = 0 , 1, \dots; \ \ 0 < \alpha < 1 ,$$

then one can only assert that $f$ belongs to the class $K H ^ {r + \alpha } [ a _ {1} , b _ {1} ]$ defined by inequality (3) (with $0 < \alpha < 1$) on an interval $[ a _ {1} , b _ {1} ] \subset ( a , b )$; the constant $K$ depends on $a , a _ {1} , b _ {1} , b$, and it can increase unboundedly as $a _ {1} \rightarrow a$ and $b _ {1} \rightarrow b$. There exist functions not belonging to $K H ^ {r + \alpha } [ a , b ]$ for which nevertheless

$$E ( f , A _ {n-1} ; \ a , b ) = O ( n ^ {- r - \alpha } ) .$$

For instance,

$$E ( \sqrt {1 - t ^ {2} } ,\ A _ {n-1} ; - 1 , 1 ) < \ \frac{2}{\pi n } ,\ \ n = 1 , 2, \dots$$

although $\sqrt {1 - t ^ {2} } \notin K H ^ \alpha$ on $[ - 1 , 1 ]$ for every $\alpha > 1 / 2$. It turned out that algebraic polynomials which retain on the whole interval $[ a , b ]$ the best order of approximation of a function $f \in C$ can yield a substantially better approximation at the end points of the interval. This phenomenon was first discovered by S.M. Nikol'skii. If, in particular, $f \in K H ^ {r + \alpha } [ a , b ]$, then for any $n > r$ there is a polynomial $p _ {n} (t) \in A _ {n-1}$ such that

$$\tag{6 } | f (t) - p _ {n} (t) | \leq M \left [ \frac{1}{n} \sqrt {( t - a ) ( b - t ) } + \frac{1}{n ^ {2} } \right ] ^ {r + \alpha } ,$$

$$a \leq t \leq b ,$$

where the constant $M$ depends neither on $n$ nor on $t$. In contrast to (5), this assertion has a converse: if for an $f \in C$ there exists a sequence of polynomials $p _ {n} (t) \in A _ {n-1}$ satisfying (6) for some $r = 0 , 1, \dots$ and $0 < \alpha < 1$, then $f \in K H ^ {r + \alpha } [ a , b ]$. Direct and inverse theorems for $f \in C ^ {r} [ a , b ]$, stated in terms of the modulus of continuity and moduli of smoothness, are also known.
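The bound above for $\sqrt {1 - t ^ {2} }$ can be checked numerically (an illustration, not part of the original article). Substituting $t = \cos \theta$ turns $\sqrt {1 - t ^ {2} }$ into $| \sin \theta |$, whose Fourier expansion yields the Chebyshev series $\sqrt {1 - t ^ {2} } = 2 / \pi - ( 4 / \pi ) \sum _ {k \geq 1} T _ {2k} (t) / ( 4 k ^ {2} - 1 )$. Truncating at $k = m$ gives a polynomial of degree $2m$ whose sup-norm error on $[ - 1 , 1 ]$ is exactly the tail $2 / ( \pi ( 2 m + 1 ) )$, attained at $t = \pm 1$: order $1 / n$, even though the function is only Hölder-$1/2$ near the end points.

```python
import numpy as np
from numpy.polynomial import chebyshev as Ch

# sqrt(1 - x^2) = 2/pi - (4/pi) * sum_{k>=1} T_{2k}(x) / (4k^2 - 1)
# (substitute x = cos(theta): the left-hand side becomes |sin theta|).
# Truncating at k = m gives a degree-2m polynomial whose sup-error on
# [-1, 1] is the tail 2/(pi*(2m + 1)), attained at the endpoints x = +-1.

def trunc_error(m, grid=20001):
    """Sup-norm error on [-1, 1] of the degree-2m Chebyshev truncation."""
    c = np.zeros(2 * m + 1)
    c[0] = 2.0 / np.pi
    for k in range(1, m + 1):
        c[2 * k] = -4.0 / (np.pi * (4.0 * k * k - 1.0))
    x = np.linspace(-1.0, 1.0, grid)
    return np.max(np.abs(np.sqrt(1.0 - x * x) - Ch.chebval(x, c)))

for m in (5, 10, 20, 40):
    print(2 * m, trunc_error(m), 2.0 / (np.pi * (2 * m + 1)))
```

The errors decay like $1 / n$ despite the endpoint singularities of the derivative, illustrating why (5) admits no converse of periodic type on the whole interval $[ - 1 , 1 ]$.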

Direct theorems giving order estimates for the error of approximation in terms of difference-differential properties of the approximated function have been established for many concrete approximation methods, in particular for splines, both splines of best approximation and interpolation splines (cf. Spline).

Direct and inverse theorems for approximation in a Hausdorff metric are also known. Some peculiarities arise here; in particular, the characterization of classes of functions by their best Hausdorff approximation is connected not only with the order of this approximation but also with the magnitude of the constant in the relevant inequality. For direct and inverse theorems in the multi-dimensional case, see Approximation of functions of several real variables.

How to Cite This Entry:
Approximation of functions, direct and inverse theorems. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Approximation_of_functions,_direct_and_inverse_theorems&oldid=52119
This article was adapted from an original article by N.P. Korneichuk (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.