Approximation of functions, linear methods

Approximation methods defined by linear operators. If in a linear normed function space $X$ a linear manifold (linear subspace) $\mathfrak N$ is chosen as approximating set, then any linear operator $U$ which transforms a function $f \in X$ into a function $U ( f , t ) = ( U f ) (t) \in \mathfrak N$ so that

$$U ( \alpha _ {1} f _ {1} + \alpha _ {2} f _ {2} , t ) = \ \alpha _ {1} U ( f _ {1} , t ) + \alpha _ {2} U ( f _ {2} , t )$$

(where $\alpha _ {1}$ and $\alpha _ {2}$ are arbitrary numbers), defines a linear approximation method for functions in $X$ by means of functions in $\mathfrak N$. A linear approximation method is called projective if $U ( f , t ) = f (t)$ for all $f$ in $\mathfrak N$; it is called positive if $U ( f , t ) \geq 0$ for non-negative functions $f$.

The finite-dimensional case is the most interesting one. Here $\mathfrak N = \mathfrak N _ {N}$ is an $N$-dimensional subspace, and then

$$\tag{1 } U ( f , t ) = \ U _ {N} ( f , t ) = \ \sum _ { k=1 } ^ { N } c _ {k} (f) \phi _ {k} (t) ,$$

where $\{ \phi _ {k} (t) \} _ {1} ^ {N}$ is a basis of $\mathfrak N _ {N}$ and $c _ {k}$ are certain linear functionals defined on $X$. The choice of the linearly independent system $\{ \phi _ {k} (t) \} _ {1} ^ {N}$ and the set of functionals $\{ c _ {k} \} _ {1} ^ {N}$ is determined by the information about the function that one intends to use while constructing the linear method. If $c _ {k} (f) = f ( t _ {k} )$, where $\{ t _ {k} \} _ {1} ^ {N}$ is a fixed system of points in the domain of definition of $f$, and if $\phi _ {k} ( t _ {i} ) = 0$ when $i \neq k$, while $\phi _ {k} ( t _ {k} ) = 1$, then $U _ {N} ( f , t _ {k} ) = f ( t _ {k} )$, $k = 1 , \dots, N$, and one has an interpolation method (for instance, Lagrange's interpolation polynomial, or an interpolation spline). If $X = H$ is a Hilbert space of functions and $c _ {k} (f)$ are the Fourier coefficients of a function $f$ with respect to an orthonormal system $\{ \phi _ {k} (t) \}$, then the sum on the right-hand side of (1) yields a linear method of orthogonal projection of $X$ onto $\mathfrak N _ {N}$; in this case

$$\| f - U _ {N} (f) \| _ {H} = \ \inf _ {\alpha _ {k} } \ \left \| f - \sum _ { k=1 } ^ { N } \alpha _ {k} \phi _ {k} \ \right \| _ {H} .$$

Thus, this method yields the best approximation of $f$ by linear combinations of the functions $\phi _ {k}$.
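As a numerical illustration (a sketch, not part of the article; the test function $e^{\sin t}$, the grid and all function names are assumptions), one can verify on a discretized $L _ {2} [ 0 , 2 \pi ]$ that the Fourier coefficients minimize the error among all coefficient choices:

```python
import numpy as np

# Illustrative sketch: the partial Fourier sum is the orthogonal projection
# onto span{1, cos kt, sin kt}, hence the best L2 approximation from that
# subspace; perturbing any coefficient strictly increases the error.

t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)

def inner(g, h):
    # discrete inner product approximating the integral over [0, 2pi]
    return np.mean(g * h) * 2.0 * np.pi

def norm(g):
    return np.sqrt(inner(g, g))

f = np.exp(np.sin(t))  # an arbitrary smooth periodic test function

# Orthonormal system on [0, 2pi]: 1/sqrt(2pi), cos(kt)/sqrt(pi), sin(kt)/sqrt(pi)
basis = [np.ones_like(t) / np.sqrt(2.0 * np.pi)]
for k in range(1, 4):
    basis.append(np.cos(k * t) / np.sqrt(np.pi))
    basis.append(np.sin(k * t) / np.sqrt(np.pi))

coeffs = [inner(f, phi) for phi in basis]             # c_k(f) = <f, phi_k>
proj = sum(c * phi for c, phi in zip(coeffs, basis))  # U_N(f, t)

err_best = norm(f - proj)
err_perturbed = norm(f - (proj + 0.1 * basis[1]))     # tamper with one alpha_k
print(err_best < err_perturbed)  # True: the projection is optimal
```

By Pythagoras' theorem in the discrete inner product, the perturbed error satisfies $\| f - U _ {N} f - 0.1 \phi _ {1} \| ^ {2} = \| f - U _ {N} f \| ^ {2} + 0.01$, which the check confirms.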

In the theory of linear approximation methods, special attention is given to problems of convergence. Let $X$ be a Banach space, let $\{ \phi _ {1} (t) , \phi _ {2} (t) ,\dots \}$ be a linearly independent system of functions in $X$, let $\mathfrak N _ {N}$ be the subspace generated by the first $N$ functions of this system, $N = 1 , 2 ,\dots,$ and let $U _ {N}$ be bounded linear operators from $X$ into $\mathfrak N _ {N}$, $N = 1 , 2 ,\dots$. The convergence $U _ {N} ( f , t ) \rightarrow f (t)$ (in the sense that $\| U _ {N} f - f \| _ {X} \rightarrow 0$ as $N \rightarrow \infty$) for every function $f$ in $X$ holds if and only if: 1) the sequence of norms $\| U _ {N} \|$ of the operators $U _ {N}$ is bounded (see Banach–Steinhaus theorem); and 2) $U _ {N} ( f , t ) \rightarrow f (t)$ for all functions $f$ in a set $A$ that is everywhere dense in $X$. These conditions are satisfied, in particular, in the space of $2 \pi$-periodic functions $\widetilde{L} _ {p} = \widetilde{L} _ {p} [ 0 , 2 \pi ]$, with $1 < p < \infty$, by the operators $S _ {n}$ defined by the trigonometric Fourier sums

$$\tag{2 } S _ {n} ( f , t ) = \ \frac{a _ {0} }{2} + \sum _ { k=1 } ^ { n } ( a _ {k} \cos k t + b _ {k} \sin k t ) ,$$

$$n = 1 , 2 ,\dots,$$

of $f$. As the set $A$ on which 2) is satisfied one can take the set of all trigonometric polynomials. If $X$ is the space $\widetilde{C} = \widetilde{C} [ 0 , 2 \pi ]$ (of continuous functions on the whole real axis with period $2 \pi$) or if $X$ equals $\widetilde{L} _ {1}$, then $\| S _ {n} \| \sim \mathop{\rm ln} n$ as $n \rightarrow \infty$ (see Lebesgue constants). Consequently, there are functions $f$ in $\widetilde{C}$ and in $\widetilde{L} _ {1}$ to which the sequence $\{ S _ {n} ( f , t ) \}$ does not converge in the appropriate metric. Since in a Banach space of functions $X$ with a norm that is invariant with respect to translations any linear operator $U _ {n} ^ \perp$ projecting $X$ onto the subspace $T _ {n}$ of trigonometric polynomials of degree at most $n$ satisfies the inequality $\| U _ {n} ^ \perp \| \geq \| S _ {n} \|$ (see [3]), the divergence in $\widetilde{C}$ and $\widetilde{L} _ {1}$ also holds for the sequence $\{ U _ {n} ^ \perp \}$. In particular, this is true in $\widetilde{C}$ for the sequence of Lagrange interpolation operators with any triangular matrix of interpolation nodes. Similar facts of a negative character also hold in the non-periodic case for linear projection operators of the spaces $C [ a , b ]$ and $L _ {1} [ a , b ]$ onto the subspaces $A _ {n}$, $n = 1 , 2 ,\dots,$ of algebraic polynomials of degree at most $n$.
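The logarithmic growth of $\| S _ {n} \|$ can be checked numerically. The sketch below (the function name and discretization are assumptions) evaluates the classical formula $\| S _ {n} \| = \frac{1}{\pi} \int _ {0} ^ {\pi} | D _ {n} (t) | \, dt$, where $D _ {n} (t) = \sin ( ( n + 1/2 ) t ) / \sin ( t/2 )$ is the Dirichlet kernel:

```python
import numpy as np

# Numerical sketch: the operator norm of S_n on C-tilde is the Lebesgue
# constant L_n = (1/pi) * integral_0^pi |D_n(t)| dt, with the Dirichlet
# kernel D_n(t) = sin((n + 1/2) t) / sin(t / 2). It grows like
# (4/pi^2) ln n, so the sequence of norms ||S_n|| is unbounded.

def lebesgue_constant(n, m=400000):
    # midpoint rule on (0, pi); the integrand has no singularity at 0
    t = (np.arange(m) + 0.5) * (np.pi / m)
    d = np.abs(np.sin((n + 0.5) * t) / np.sin(t / 2.0))
    return np.sum(d) * (np.pi / m) / np.pi

for n in (10, 100, 1000):
    print(n, lebesgue_constant(n))   # grows roughly like (4/pi^2) * ln n
```

Successive values for $n = 10, 100, 1000$ increase by nearly equal steps of about $(4 / \pi ^ {2}) \mathop{\rm ln} 10 \approx 0.93$, in agreement with the asymptotics.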

The divergence of the sequence (2) for some functions in $\widetilde{C}$ made it necessary to introduce and examine various averages of Fourier sums that do not have this disadvantage. Such are, for instance, the Fejér summation method and the Bernstein–Rogosinski summation method, which are particular cases (with a specific choice of the numerical coefficients $\lambda _ {k} ^ {(n)}$) of polynomials of the form

$$\tag{3 } U _ {n} ^ \lambda ( f , t ) = \ \frac{a _ {0} }{2} + \sum _ { k=1 } ^ { n } \lambda _ {k} ^ {(n)} ( a _ {k} \cos kt + b _ {k} \sin kt ) ,$$

$$n = 1 , 2 ,\dots .$$
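For example, Fejér's method corresponds to the choice $\lambda _ {k} ^ {(n)} = 1 - k / ( n + 1 )$ in (3), which makes $U _ {n} ^ \lambda$ the arithmetic mean of the partial sums $S _ {0} \dots S _ {n}$. The sketch below (the triangle-wave test function and the discretization are assumptions) checks this identity and the uniform convergence numerically:

```python
import numpy as np

# Sketch: Fejer's method is (3) with lambda_k^{(n)} = 1 - k/(n + 1); it
# coincides with the average of the partial sums S_0, ..., S_n and converges
# uniformly for every continuous 2pi-periodic function.

t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
f = np.pi - np.abs(t - np.pi)        # continuous triangle wave (an assumption)

def a(k):
    return 2.0 * np.mean(f * np.cos(k * t))   # Fourier cosine coefficient

def b(k):
    return 2.0 * np.mean(f * np.sin(k * t))   # Fourier sine coefficient

def sum_with_multipliers(n, lam):
    s = a(0) / 2.0 * np.ones_like(t)
    for k in range(1, n + 1):
        s += lam(k, n) * (a(k) * np.cos(k * t) + b(k) * np.sin(k * t))
    return s

def partial(n):                      # S_n: all multipliers equal to 1
    return sum_with_multipliers(n, lambda k, n: 1.0)

def fejer(n):                        # Fejer multipliers 1 - k/(n + 1)
    return sum_with_multipliers(n, lambda k, n: 1.0 - k / (n + 1.0))

# Fejer mean equals the average of S_0, ..., S_n
n = 8
avg = np.mean([partial(m) for m in range(n + 1)], axis=0)
print(np.allclose(fejer(n), avg))    # True

# the uniform error decreases as n grows
print(np.max(np.abs(f - fejer(8))), np.max(np.abs(f - fejer(64))))
```

The identity holds because the $k$-th harmonic appears in $S _ {m}$ only for $m \geq k$, hence with total weight $( n + 1 - k ) / ( n + 1 ) = 1 - k / ( n + 1 )$ in the average.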

Since

$$\tag{4 } U _ {n} ^ \lambda ( f , t ) = \ \frac{1}{\pi} \int\limits _ { 0 } ^ { 2 \pi } \left [ \frac{1}{2} + \sum _ { k=1 } ^ { n } \lambda _ {k} ^ {(n)} \cos k ( t - u ) \right ] f (u) du ,$$

the averages (3) belong to the very wide class of linear approximation methods which can be represented in the form of a convolution of $f$ with some (singular) kernel whose properties (in this case, the properties of the triangular matrix of numbers $\{ \lambda _ {k} ^ {(n)} \}$) determine the answer to the problem of convergence. If

$$\lim\limits _ {n \rightarrow \infty } \ \lambda _ {k} ^ {(n)} = 1 ,\ \ k = 1 , 2 ,\dots,$$

then the sums (3) converge uniformly when $f$ is a trigonometric polynomial, and so 2) is satisfied. For the boundedness of the norm of $U _ {n} ^ \lambda$ as an operator from $\widetilde{C}$ into $\widetilde{C}$, it is necessary that

$$\frac{\lambda _ {1} ^ {(n)} }{n} + \dots + \frac{\lambda _ {n} ^ {(n)} }{1} = \ O (1)$$

and $\lambda _ {k} ^ {(n)} = O (1)$, uniformly in $n$ and $k = 1 , \dots, n$. These conditions are also sufficient for the boundedness of the sequence $\{ \| U _ {n} ^ \lambda \| \}$ if the matrix $\{ \lambda _ {k} ^ {(n)} \}$ is subject to some additional restrictions (for instance, convexity or concavity of the columns). Similarly to (3), using the matrix of coefficients $\{ \lambda _ {k} ^ {(n)} \}$ one can also construct averages by means of Fourier–Chebyshev sums (for $f \in C [ -1 , 1 ]$), and also by means of the Lagrange interpolation polynomials with nodes $2 k \pi / ( 2 n + 1 )$ (in the periodic case) or with nodes $\cos [ ( 2 k - 1 ) \pi / ( 2 n + 2 ) ]$ on $[ -1 , 1 ]$ (see, for instance, [6]).
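For Fejér's multipliers $\lambda _ {k} ^ {(n)} = 1 - k / ( n + 1 )$ the necessary condition above can be verified in closed form: each term of the sum reduces to $1 / ( n + 1 )$, so the whole sum equals $n / ( n + 1 ) < 1$. A minimal check (the function name is an assumption):

```python
# Sketch: check the necessary boundedness condition
#     lambda_1^{(n)}/n + ... + lambda_n^{(n)}/1 = O(1)
# for the Fejer multipliers lambda_k^{(n)} = 1 - k/(n + 1). Since
# lambda_k^{(n)} = (n + 1 - k)/(n + 1), the k-th term is
# ((n + 1 - k)/(n + 1)) / (n - k + 1) = 1/(n + 1), and the sum is n/(n + 1).

def condition_sum(n):
    return sum((1.0 - k / (n + 1.0)) / (n - k + 1) for k in range(1, n + 1))

for n in (10, 100, 1000):
    print(n, condition_sum(n))   # equals n/(n + 1), bounded by 1
```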

The question of the convergence of positive linear operators $U _ {n} ^ {+}$ acting from $C [ a , b ]$ into $A _ {n}$ or from $\widetilde{C}$ into $T _ {n}$ (in particular, of operators of the form (4) with a positive kernel) is solved in terms of three test functions (see [1]): for the uniform convergence of the sequence $U _ {n} ^ {+} ( f , t )$ to $f (t) \in C [ a , b ]$, or to $f (t) \in \widetilde{C}$, it is necessary and sufficient that this convergence occurs for the functions $1$, $t$ and $t ^ {2}$, or, respectively, for the functions $1$, $\sin t$, $\cos t$.
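As an illustration (the use of Bernstein polynomials here is my own choice of a concrete positive linear operator, not taken from the article), convergence on the three test functions $1$, $t$, $t ^ {2}$ already forces convergence for every continuous $f$ on $[ 0 , 1 ]$:

```python
import numpy as np
from math import comb

# Sketch: the Bernstein operators
#     B_n(f, x) = sum_k f(k/n) * C(n, k) * x^k * (1 - x)^(n - k)
# are positive linear operators from C[0, 1] into A_n. They reproduce 1 and t
# exactly, and B_n(t^2, x) = x^2 + x(1 - x)/n -> x^2 uniformly; by the
# three-test-function (Korovkin) criterion this yields uniform convergence
# for every continuous f.

def bernstein(f, n, x):
    return sum(f(k / n) * comb(n, k) * x ** k * (1.0 - x) ** (n - k)
               for k in range(n + 1))

x = np.linspace(0.0, 1.0, 201)
n = 200

# the three test functions
print(np.max(np.abs(bernstein(lambda t: 1.0, n, x) - 1.0)))
print(np.max(np.abs(bernstein(lambda t: t, n, x) - x)))
print(np.max(np.abs(bernstein(lambda t: t * t, n, x) - x * x)))

# ... and hence uniform convergence for an arbitrary continuous function:
print(np.max(np.abs(bernstein(np.exp, n, x) - np.exp(x))))
```

The error for $e ^ {x}$ behaves like $x ( 1 - x ) e ^ {x} / ( 2 n )$, consistent with the $O ( n ^ {-2} )$ saturation bound for positive operators discussed below only up to order $O ( n ^ {-1} )$ here, since Bernstein operators saturate at $O ( n ^ {-1} )$.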

The investigation of the error of a linear approximation method leads mostly to studies on the rate of convergence of $U _ {N} ( f , t )$ to $f (t)$, to estimates of the error in terms of the difference-differential properties of the approximated function, and to the question of how the linear approximation method responds when the smoothness properties of the approximated function are improved.

In the examination of approximating properties of the linear approximation method (1), a natural role is played by the best approximation $E ( f, \mathfrak N _ {N} )$ of $f$ by elements of the subspace $\mathfrak N _ {N}$. The Lebesgue inequality

$$\tag{5 } \| f - S _ {n} (f) \| _ {\widetilde{C} } \leq \ E ( f , T _ {n} ) _ {\widetilde{C} } ( \mathop{\rm ln} n + 3 )$$

shows, in connection with the Jackson inequality

$$E ( f , T _ {n} ) _ {\widetilde{C} } \leq \ \frac{M}{n ^ {r} } \omega \left ( f ^ { (r) } , { \frac{1}{n} } \right )$$

(where $\omega ( g , \delta )$ is the modulus of continuity of a function $g \in \widetilde{C}$), that, although the order of approximation by Fourier sums is slightly worse than the best approximation (the factor $\mathop{\rm ln} n$ in (5) cannot be replaced by a constant), these sums do respond to every increase in the order of differentiability of the function to be approximated. For some linear approximation methods the order of approximation cannot be higher than a certain quantity, no matter how many derivatives the function $f$ may have (the phenomenon of saturation). Thus, the order of approximation by positive linear polynomial operators $U _ {n} ^ {+}$ cannot be higher than $O ( n ^ {-2} )$; for Fejér sums the saturation order is $O ( n ^ {-1} )$; and for Bernstein–Rogosinski sums it is $O ( n ^ {-2} )$. Interpolation splines with a suitable choice of the knots (nodes, joints) ensure the best order of approximation not only for the function itself, but also for some of its derivatives, depending on the degree of the polynomials out of which the spline is composed (see [7], [8]).
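The saturation of Fejér sums is visible already on $f (t) = \sin t$: since the Fejér mean multiplies the $k$-th harmonic by $1 - k / ( n + 1 )$, one gets $\sigma _ {n} ( f , t ) = ( 1 - 1 / ( n + 1 ) ) \sin t$, so the uniform error is exactly $1 / ( n + 1 )$ although $f$ is analytic. A minimal numerical check (names and discretization are assumptions):

```python
import numpy as np

# Sketch: saturation of the Fejer method. For f(t) = sin t the Fejer mean
# damps the single harmonic k = 1 by the factor 1 - 1/(n + 1), hence
#     max |f - sigma_n(f)| = 1/(n + 1)
# for every n, no matter that f is infinitely differentiable.

t = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
f = np.sin(t)

def fejer_mean_of_sin(n):
    # only the k = 1 harmonic of sin t is present
    return (1.0 - 1.0 / (n + 1.0)) * np.sin(t)

for n in (10, 100, 1000):
    err = np.max(np.abs(f - fejer_mean_of_sin(n)))
    print(n, err, 1.0 / (n + 1.0))   # the two values agree
```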

In some particular cases, exact or asymptotically-exact estimates for the error of approximation on certain classes of functions have been obtained for specific linear approximation methods. Essentially the first non-trivial result of this kind was established by A.N. Kolmogorov, who proved in 1935 that

$$\sup _ {f \in W ^ { r } K } \ \| f - S _ {n} (f) \| _ {\widetilde{C} } = \ \frac{4 K }{\pi ^ {2} } \frac{ \mathop{\rm ln} n }{n ^ {r} } + O ( n ^ {-r} ) ,$$

where $W ^ { r } K$, $r = 1 , 2 ,\dots,$ is the class of functions $f \in \widetilde{C}$ for which $f ^ { ( r - 1 ) } (t)$ is absolutely continuous on $[ 0 , 2 \pi ]$ and for which $| f ^ { (r) } (t) | \leq K$ almost everywhere. Subsequently, results of a similar character were obtained for Fourier sums (and some of their averages) with respect to other important classes of functions, for instance classes defined by an upper bound on the modulus of continuity of the $r$-th derivative (see [2], [6] and [9]). Special interest is given to linear approximation methods (1) that yield, on some class of functions, exactly the least upper bound of the best approximation by the subspace $\mathfrak N _ {N}$. For the classes $W ^ { r } K$, $r = 1 , 2 ,\dots,$ this property is possessed by sums of the form (3) with a specific choice of the $\lambda _ {k} ^ {(n)}$. For instance, for $r = 1$ one should take

$$\lambda _ {k} ^ {(n)} = \ \frac{k \pi }{2 n + 2 } \ \cot \ \frac{k \pi }{2 n + 2 } ;$$

the property also applies to interpolation splines of order $r - 1$ and defect 1 with knots $k \pi / n$, $k = 0 , 1 ,\dots$ (see [4], and also Approximation of functions, extremal problems in function classes; Best linear method).
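The multipliers above can be examined directly. A small check (function names are assumptions) confirms that they lie in $( 0 , 1 )$ and that each fixed column satisfies $\lambda _ {k} ^ {(n)} \rightarrow 1$ as $n \rightarrow \infty$ (condition 2) above), since $x \cot x \rightarrow 1$ as $x \rightarrow 0$:

```python
import numpy as np

# Sketch: the multipliers lambda_k^{(n)} = (k pi/(2n + 2)) cot(k pi/(2n + 2))
# of the sums (3) that are optimal on W^1 K. The argument k pi/(2n + 2) lies
# in (0, pi/2), where x cot x = x/tan(x) decreases from 1 to 0, so the
# multipliers are positive, less than 1 and decreasing in k; for fixed k
# they tend to 1 as n grows.

def lam(n, k):
    x = k * np.pi / (2.0 * n + 2.0)
    return x / np.tan(x)

n = 10
print([round(lam(n, k), 4) for k in range(1, n + 1)])  # decreasing, in (0, 1)
print(lam(10 ** 6, 1))                                 # close to 1
```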

References

[1] P.P. Korovkin, "Linear operators and approximation theory", Hindustan Publ. Comp. (1960) (Translated from Russian)
[2] V.K. Dzyadyk, "Introduction to the theory of uniform approximation of functions by polynomials", Moscow (1977) (In Russian)
[3] V.M. Tikhomirov, "Some problems in approximation theory", Moscow (1976) (In Russian)
[4] N.P. Korneichuk, "Extremal problems in approximation theory", Moscow (1976) (In Russian)
[5] V.L. Goncharov, "The theory of interpolation and approximation of functions", Moscow (1954) (In Russian)
[6] A.F. Timan, "Theory of approximation of functions of a real variable", Pergamon (1963) (Translated from Russian)
[7] J.H. Ahlberg, E.N. Nilson, J.L. Walsh, "The theory of splines and their applications", Acad. Press (1967)
[8] S.B. Stechkin, Yu.N. Subbotin, "Splines in numerical mathematics", Moscow (1976) (In Russian)
[9] A.I. Stepanets, "Uniform approximation by trigonometric polynomials. Linear methods", Kiev (1981) (In Russian)
[10] N.P. Korneichuk, "Splines in approximation theory", Moscow (1976) (In Russian)
[11] J.R. Rice, "The approximation of functions", 1. Linear theory, Addison-Wesley (1964)
[12] L.L. Schumaker, "Spline functions, basic theory", Wiley (1981)
[13] P.J. Davis, "Interpolation and approximation", Dover, reprint (1963)

The fact that three test functions are enough for uniform convergence of a sequence of positive linear operators from $C [ a , b ]$ into $A _ {n}$ (or from $\widetilde{C}$ into $T _ {n}$) can be found in, e.g., [a1], Chapt. 3, Sect. 3. The theorem is a generalization of Bernstein's theorem, and was obtained by H. Bohman [a2]. For integral operators it was obtained by P.P. Korovkin (see also [1]).