Approximation of functions, linear methods

From Encyclopedia of Mathematics
  
{{TEX|done}}
  
Approximation methods defined by linear operators. If in a linear normed function space $X$ a linear manifold (linear subspace) $\mathfrak N$ is chosen as approximating set, then any linear operator $U$ which transforms a function $f \in X$ into a function $U(f, t) = (Uf)(t) \in \mathfrak N$ so that
  
$$
U(\alpha_1 f_1 + \alpha_2 f_2, t) = \alpha_1 U(f_1, t) + \alpha_2 U(f_2, t)
$$
  
(where $\alpha_1$ and $\alpha_2$ are arbitrary numbers), defines a linear approximation method for functions in $X$ by means of functions in $\mathfrak N$. A linear approximation method is called projective if $U(f, t) = f(t)$ for all $f$ in $\mathfrak N$; it is called positive if $U(f, t) \geq 0$ for non-negative functions $f$.
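The defining identity and the positivity property can be checked numerically for a concrete operator. A minimal sketch (the choice of piecewise-linear interpolation, the nodes and the test functions are illustrative assumptions, not taken from the article): sampling $f$ at fixed nodes and connecting the samples by straight lines is a linear method, and it is also positive.

```python
import numpy as np

# Illustrative linear method U: piecewise-linear interpolation on fixed nodes.
# U maps f into the subspace of broken lines with breakpoints at the nodes.
nodes = np.linspace(0.0, 1.0, 6)   # fixed nodes t_k (an arbitrary choice)
t = np.linspace(0.0, 1.0, 101)     # evaluation grid

def U(f):
    """Sample f at the nodes and interpolate linearly: a linear operator in f."""
    return np.interp(t, nodes, f(nodes))

f1, f2 = np.cos, np.square
a1, a2 = 2.0, -3.0

# Linearity: U(a1 f1 + a2 f2, t) = a1 U(f1, t) + a2 U(f2, t).
lhs = U(lambda x: a1 * f1(x) + a2 * f2(x))
rhs = a1 * U(f1) + a2 * U(f2)
assert np.allclose(lhs, rhs)

# Positivity: a non-negative f has non-negative samples, hence U(f, t) >= 0.
assert np.all(U(np.square) >= 0.0)
print("linearity and positivity verified on the grid")
```

This operator is also projective onto its subspace: a function that is already a broken line with these breakpoints is reproduced exactly.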
  
The finite-dimensional case is the most interesting one. Here $\mathfrak N = \mathfrak N_N$ is an $N$-dimensional subspace, and then
  
$$ \tag{1}
U(f, t) = U_N(f, t) = \sum_{k=1}^{N} c_k(f) \phi_k(t),
$$
  
where $\{\phi_k(t)\}_1^N$ is a basis of $\mathfrak N_N$ and the $c_k$ are certain linear functionals defined on $X$. The choice of the linearly independent system $\{\phi_k(t)\}_1^N$ and of the set of functionals $\{c_k\}_1^N$ is determined by the information about the function that one intends to use while constructing the linear method. If $c_k(f) = f(t_k)$, where $\{t_k\}_1^N$ is a fixed system of points in the domain of definition of $f$, and if $\phi_k(t_i) = 0$ when $i \neq k$, while $\phi_k(t_k) = 1$, then $U_N(f, t_k) = f(t_k)$, $k = 1, \dots, N$, and one has an interpolation method (for instance, Lagrange's interpolation polynomial, or an [[Interpolation spline|interpolation spline]]). If $X = H$ is a Hilbert space of functions and the $c_k(f)$ are the Fourier coefficients of a function $f$ with respect to an orthonormal system $\{\phi_k(t)\}$, then the sum on the right-hand side of (1) yields a linear method of orthogonal projection of $X$ onto $\mathfrak N_N$; in this case
  
$$
\| f - U_N(f) \|_H = \inf_{\alpha_k} \left\| f - \sum_{k=1}^{N} \alpha_k \phi_k \right\|_H .
$$
  
Thus, this method yields the best approximation of $f$ by linear combinations of the functions $\phi_k$.
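This minimality can be illustrated numerically. A sketch (the test function $f(t) = |\sin t|$, the truncation order and the random trials are illustrative assumptions): the projection with coefficients $c_k = \langle f, \phi_k \rangle$ is compared with randomly chosen coefficient vectors in the same trigonometric subspace, and no trial does better.

```python
import numpy as np

# Discretized L2[0, 2pi]: inner products via Riemann sums on a fine grid.
t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
dt = t[1] - t[0]

def inner(g, h):
    return float(np.sum(g * h) * dt)

def l2(g):
    return inner(g, g) ** 0.5

f = np.abs(np.sin(t))   # test function (illustrative choice)

# Orthonormal trigonometric system on [0, 2pi].
N = 3
basis = [np.ones_like(t) / np.sqrt(2.0 * np.pi)]
for k in range(1, N + 1):
    basis.append(np.cos(k * t) / np.sqrt(np.pi))
    basis.append(np.sin(k * t) / np.sqrt(np.pi))

# Orthogonal projection: coefficients c_k = <f, phi_k> as in (1).
proj = sum(inner(f, phi) * phi for phi in basis)
best = l2(f - proj)

# No other coefficient choice in the same subspace gives a smaller error.
rng = np.random.default_rng(0)
for _ in range(200):
    trial = sum(c * phi for c, phi in zip(rng.normal(size=len(basis)), basis))
    assert l2(f - trial) >= best - 1e-9
print("projection error is minimal among random trials:", round(best, 4))
```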
  
In the theory of linear approximation methods, special attention is given to problems of convergence. Let $X$ be a Banach space, let $\{\phi_1(t), \phi_2(t), \dots\}$ be a linearly independent system of functions in $X$, let $\mathfrak N_N$ be the subspace generated by the first $N$ functions of this system, $N = 1, 2, \dots$, and let $U_N$ be bounded linear operators from $X$ into $\mathfrak N_N$, $N = 1, 2, \dots$. The convergence $U_N(f, t) \rightarrow f(t)$ (in the sense that $\| U_N(f) - f \|_X \rightarrow 0$ as $N \rightarrow \infty$) for any function $f$ in $X$ holds if and only if: 1) the sequence of norms $\| U_N \|$ of the operators $U_N$ is bounded (see [[Banach–Steinhaus theorem|Banach–Steinhaus theorem]]); and 2) $U_N(f, t) \rightarrow f(t)$ for all functions $f$ in a set $A$ that is everywhere dense in $X$. These conditions are satisfied, in particular, in the space of $2\pi$-periodic functions $\widetilde{L}_p = \widetilde{L}_p[0, 2\pi]$, with $1 < p < \infty$, and the operators $S_n$ defined by the trigonometric Fourier sums
  
$$ \tag{2}
S_n(f, t) = \frac{a_0}{2} + \sum_{k=1}^{n} (a_k \cos kt + b_k \sin kt),
$$

$$
n = 1, 2, \dots
$$
  
of $f$. As the set $A$ on which 2) is satisfied one can take the set of all trigonometric polynomials. If $X$ is the space $\widetilde{C} = \widetilde{C}[0, 2\pi]$ (of continuous functions on the whole real axis with period $2\pi$) or if $X$ equals $\widetilde{L}_1$, then $\| S_n \| \sim \ln n$ as $n \rightarrow \infty$ (see [[Lebesgue constants|Lebesgue constants]]). Consequently, there are in $\widetilde{C}$ and in $\widetilde{L}_1$ functions $f$ to which the sequence $\{ S_n(f, t) \}$ does not converge in the appropriate metric. Since in a Banach space of functions $X$ with a norm that is invariant with respect to translations any linear operator $U_n^\perp$ projecting $X$ onto the subspace $T_n$ of trigonometric polynomials of degree at most $n$ satisfies the inequality $\| U_n^\perp \| \geq \| S_n \|$ (see [[#References|[3]]]), the divergence in $\widetilde{C}$ and $\widetilde{L}_1$ also holds for the sequence $\{ U_n^\perp \}$. In particular, this is true in $\widetilde{C}$ for the sequence of Lagrange interpolation operators with any triangular matrix of interpolation nodes. Similar facts of a negative character also hold in the non-periodic case for the linear projection operators of the spaces $C[a, b]$ and $L_1[a, b]$ onto the subspaces $A_n$, $n = 1, 2, \dots$, of algebraic polynomials of degree at most $n$.

The divergence of the sequence (2) for some functions in $\widetilde{C}$ made it necessary to introduce and examine various averages of Fourier sums that do not have this disadvantage. Such are, for instance, the [[Fejér summation method|Fejér summation method]] and the [[Bernstein–Rogosinski summation method|Bernstein–Rogosinski summation method]], which are particular cases (with a specific choice of the numerical coefficients $\lambda_k^{(n)}$) of polynomials of the form
$$ \tag{3}
U_n^\lambda(f, t) = \frac{a_0}{2} + \sum_{k=1}^{n} \lambda_k^{(n)} (a_k \cos kt + b_k \sin kt),
$$

$$
n = 1, 2, \dots
$$
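As a concrete instance, the Fejér method corresponds to the multipliers $\lambda_k^{(n)} = 1 - k/(n+1)$ in (3). A numerical sketch (the test function, grid and order $n$ are illustrative assumptions) builds both the raw sum (2) and the averaged sum (3) from the same Fourier coefficients; since the Fejér kernel is non-negative, the method is positive, so a non-negative $f$ yields a non-negative approximation.

```python
import numpy as np

# Fourier coefficients a_k, b_k of a 2pi-periodic f, via Riemann sums.
t = np.linspace(0.0, 2.0 * np.pi, 4000, endpoint=False)
dt = t[1] - t[0]
f = np.abs(np.sin(t))   # continuous, non-negative test function

def averaged_sum(n, lam):
    """Polynomial (3) with multipliers lam(k, n); lam = 1 gives the raw sum (2)."""
    a0 = np.sum(f) * dt / np.pi
    s = a0 / 2.0 * np.ones_like(t)
    for k in range(1, n + 1):
        ak = np.sum(f * np.cos(k * t)) * dt / np.pi
        bk = np.sum(f * np.sin(k * t)) * dt / np.pi
        s += lam(k, n) * (ak * np.cos(k * t) + bk * np.sin(k * t))
    return s

n = 8
S_n = averaged_sum(n, lambda k, n: 1.0)                    # partial sum (2)
fejer = averaged_sum(n, lambda k, n: 1.0 - k / (n + 1))    # Fejér averages

print("sup|f - S_n|     =", round(float(np.max(np.abs(f - S_n))), 4))
print("sup|f - sigma_n| =", round(float(np.max(np.abs(f - fejer))), 4))

# Positivity of the Fejér method: f >= 0 implies the average stays >= 0.
assert float(np.min(fejer)) >= -1e-9
```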
  
 
Since
  
$$ \tag{4}
U_n^\lambda(f, t) = \frac{1}{\pi} \int\limits_0^{2\pi} \left[ \frac{1}{2} + \sum_{k=1}^{n} \lambda_k^{(n)} \cos k(t - u) \right] f(u) \, du,
$$

the averages (3) belong to the very wide class of linear approximation methods which can be represented in the form of a convolution of $f$ with some (singular) kernel whose properties (in this case, the properties of the triangular matrix of numbers $\{ \lambda_k^{(n)} \}$) determine the answer to the problem of convergence. If

$$
\lim_{n \rightarrow \infty} \lambda_k^{(n)} = 1, \quad k = 1, 2, \dots,
$$
  
then the sums (3) converge uniformly when $f$ is a trigonometric polynomial, and so 2) is satisfied. For the boundedness of the norm of $U_n^\lambda$ as an operator from $\widetilde{C}$ into $\widetilde{C}$, it is necessary that
  
$$
\frac{\lambda_1^{(n)}}{n} + \dots + \frac{\lambda_n^{(n)}}{1} = O(1)
$$
  
and $\lambda_k^{(n)} = O(1)$, uniformly in $n$ and $k = 1, \dots, n$. These conditions are also sufficient for the boundedness of the sequence $\{ \| U_n^\lambda \| \}$ if the matrix $\{ \lambda_k^{(n)} \}$ is subject to some additional restrictions (for instance, convexity or concavity of the columns). Similarly to (3), using the matrix of coefficients $\{ \lambda_k^{(n)} \}$ one can also construct averages by means of Fourier–Chebyshev sums (for $f \in C[-1, 1]$) and also by means of the Lagrange interpolation polynomials with nodes $2k\pi/(2n+1)$ (in the periodic case) or with nodes $\cos[(2k-1)\pi/(2n+2)]$ on $[-1, 1]$ (see, for instance, [[#References|[6]]]).
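The necessary condition above can be evaluated directly for two multiplier matrices (the comparison itself is an illustrative sketch): for $\lambda_k^{(n)} \equiv 1$, i.e. the raw sums (2), the quantity is the harmonic number $H_n \sim \ln n$ and is unbounded, while for the Fejér multipliers $\lambda_k^{(n)} = 1 - k/(n+1)$ it equals $n/(n+1) < 1$.

```python
from math import log

# The quantity lambda_1^{(n)}/n + ... + lambda_n^{(n)}/1 from the
# necessary condition, for a given triangular matrix of multipliers.
def condition_sum(lam, n):
    return sum(lam(k, n) / (n - k + 1) for k in range(1, n + 1))

for n in (10, 100, 1000):
    raw = condition_sum(lambda k, n: 1.0, n)                  # partial sums (2)
    fejer = condition_sum(lambda k, n: 1.0 - k / (n + 1), n)  # Fejér
    print(n, round(raw, 3), round(fejer, 3))

# raw is the harmonic number H_n, which grows like ln n (cf. the Lebesgue
# constants ||S_n|| ~ ln n); fejer equals n/(n+1), bounded by 1.
assert condition_sum(lambda k, n: 1.0, 1000) > log(1000)
assert abs(condition_sum(lambda k, n: 1.0 - k / (n + 1), 1000) - 1000 / 1001) < 1e-9
```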
  
The question of convergence of positive linear operators $U_n^+$ acting from $C[a, b]$ into $A_n$ or from $\widetilde{C}$ into $T_n$ (in particular, of operators of the form (4) with a positive kernel) is solved in terms of three test functions (see [[#References|[1]]]): for the uniform convergence of the sequence $U_n^+(f, t)$ to $f(t) \in C[a, b]$, or to $f(t) \in \widetilde{C}$, it is necessary and sufficient that this occurs for the functions $1$, $t$ and $t^2$, or, respectively, for the functions $1$, $\sin t$, $\cos t$.
  
The investigation of the error of approximation by a linear approximation method leads, mostly, to studies on the rate of convergence of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980134.png" /> to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980135.png" />, to estimates of the error in terms of the difference-differential properties of the approximated function, and to answering the question of how the linear approximation method behaves when the smoothness properties (of the approximated function) are improved.
+
The investigation of the error of approximation by a linear approximation method leads, mostly, to studies on the rate of convergence of $  U _ {N} ( f , t ) $
 +
to $  f (t) $,  
 +
to estimates of the error in terms of the difference-differential properties of the approximated function, and to answering the question of how the linear approximation method behaves when the smoothness properties (of the approximated function) are improved.
  
In the examination of approximating properties of the linear approximation method (1), a natural role is played by the best approximation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980136.png" /> of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980137.png" /> by elements of the subspace <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980138.png" />. The [[Lebesgue inequality|Lebesgue inequality]]
+
In the examination of approximating properties of the linear approximation method (1), a natural role is played by the best approximation $  E ( f, \mathfrak N _ {N} ) $
 +
of $  f $
 +
by elements of the subspace $  \mathfrak N _ {N} $.  
 +
The [[Lebesgue inequality|Lebesgue inequality]]
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980139.png" /></td> <td valign="top" style="width:5%;text-align:right;">(5)</td></tr></table>
+
$$ \tag{5 }
 +
\| f - S _ {n} (f) \| _ {\widetilde{C}  }  \leq  \
 +
E ( f , T _ {n} ) _ {\widetilde{C}  }
 +
( \mathop{\rm ln}  n + 3 )
 +
$$
  
 
shows, in connection with the [[Jackson inequality|Jackson inequality]]
 
shows, in connection with the [[Jackson inequality|Jackson inequality]]
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980140.png" /></td> </tr></table>
+
$$
 +
E ( f , T _ {n} ) _ {\widetilde{C}  }  \leq  \
 +
M n \omega \left ( f ^ { (r) } , {
 +
\frac{1}{n}
 +
} \right )
 +
$$
  
(where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980141.png" /> is the modulus of continuity of a function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980142.png" />), that, although the order of approximation by Fourier sums is slightly worse than the best approximation (<img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980143.png" /> in (5) cannot be replaced by a constant), these sums are affected by any increase of the order of differentiability of the function to be approximated. For some linear approximation methods the order of approximation cannot be higher than a certain quantity, no matter how many derivatives the function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980144.png" /> may have (the phenomenon of saturation). Thus, the order of approximation by positive linear polynomial operators <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980145.png" /> cannot be higher then <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980146.png" />; for Fejér sums the saturation order is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980147.png" />; and for Bernstein–Rogosinski sums it is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980148.png" />. Interpolation splines with a specific choice of the knots (nodes, joints) ensure the best order of approximation not only for the function itself, but also for some of its derivatives, depending on the degree of the polynomials out of which the spline is composed (see [[#References|[7]]], [[#References|[8]]]).
+
(where $  \omega ( g , \delta ) $
 +
is the modulus of continuity of a function $  g \in \widetilde{C}  $),  
 +
that, although the order of approximation by Fourier sums is slightly worse than the best approximation ( $  \mathop{\rm ln}  n $
 +
in (5) cannot be replaced by a constant), these sums are affected by any increase of the order of differentiability of the function to be approximated. For some linear approximation methods the order of approximation cannot be higher than a certain quantity, no matter how many derivatives the function $  f $
 +
may have (the phenomenon of saturation). Thus, the order of approximation by positive linear polynomial operators $  U _ {n}  ^ {+} $
 +
cannot be higher then $  O ( n  ^ {-2} ) $;  
 +
for Fejér sums the saturation order is $  O ( n  ^ {-1} ) $;  
 +
and for Bernstein–Rogosinski sums it is $  O ( n  ^ {-2} ) $.  
 +
Interpolation splines with a specific choice of the knots (nodes, joints) ensure the best order of approximation not only for the function itself, but also for some of its derivatives, depending on the degree of the polynomials out of which the spline is composed (see [[#References|[7]]], [[#References|[8]]]).
  
 
In some particular cases, for specific linear approximation methods exact or asymptotically-exact estimates for the errors of approximation were obtained for some classes of functions. Strictly speaking, the first non-trivial result of this kind was established by A.N. Kolmogorov, who proved in 1935 that
 
In some particular cases, for specific linear approximation methods exact or asymptotically-exact estimates for the errors of approximation were obtained for some classes of functions. Strictly speaking, the first non-trivial result of this kind was established by A.N. Kolmogorov, who proved in 1935 that
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980149.png" /></td> </tr></table>
+
$$
 +
\sup _
 +
{f \in W ^ { r } K } \
 +
\| f - S _ {n} (f) \| _ {\widetilde{C}  }  = \
 +
 
 +
\frac{4 K }{\pi  ^ {3} }
 +
 
 +
\frac{ \mathop{\rm ln}  n }{n  ^ {r} }
 +
 
 +
+ O ( n  ^ {-r} ) ,
 +
$$
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980150.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980151.png" /> is the class of functions <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980152.png" /> for which <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980153.png" /> is absolutely continuous on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980154.png" />, and for which almost everywhere <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980155.png" />. Subsequently, results of similar character were obtained for Fourier sums (and some of their averages) with respect to other important classes of functions, for instance, in terms of upper bounds on the modulus of continuity of the <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980156.png" />-th derivative (see [[#References|[2]]]), [[#References|[6]]] and [[#References|[9]]]). Special interest is given to linear approximation methods (1) yielding, on some class of functions, the least upper bound of the best approximation by the subspace <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980157.png" />. 
Such a property for the classes <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980158.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980159.png" /> with a specific choice of the <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980160.png" />, applies to sums of the form (3). For instance, for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980161.png" /> one should take
+
where $  W ^ { r } K $,
 +
$  r = 1 , 2 \dots $
 +
is the class of functions $  f \in \widetilde{C}  $
 +
for which $  f ^ { ( r - 1 ) } (t) $
 +
is absolutely continuous on $  [ 0 , 2 \pi ] $,
 +
and for which almost everywhere $  | f ^ { (r) } (t) | \leq  K $.  
 +
Subsequently, results of similar character were obtained for Fourier sums (and some of their averages) with respect to other important classes of functions, for instance, in terms of upper bounds on the modulus of continuity of the $  r $-
 +
th derivative (see [[#References|[2]]]), [[#References|[6]]] and [[#References|[9]]]). Special interest is given to linear approximation methods (1) yielding, on some class of functions, the least upper bound of the best approximation by the subspace $  \mathfrak N _ {N} $.  
 +
Such a property for the classes $  W ^ { r } K $,
 +
$  r = 1 , 2 \dots $
 +
with a specific choice of the $  \lambda _ {k}  ^ {(n)} $,  
 +
applies to sums of the form (3). For instance, for $  r = 1 $
 +
one should take
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980162.png" /></td> </tr></table>
+
$$
 +
\lambda _ {k}  ^ {(n)}  = \
  
the property also applies to interpolation splines of order <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980163.png" /> and defect 1 with knots <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980164.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980165.png" /> (see [[#References|[4]]] and also [[Approximation of functions, extremal problems in function classes|Approximation of functions, extremal problems in function classes]]; [[Best linear method|Best linear method]]).
+
\frac{k \pi }{2 n + 2 }
 +
\
 +
\mathop{\rm ctg} \
 +
 
 +
\frac{k \pi }{2 n + 2 }
 +
;
 +
$$
 +
 
 +
the property also applies to interpolation splines of order $  r - 1 $
 +
and defect 1 with knots $  k \pi / n $,  
 +
$  k = 0 , 1 ,\dots $(
 +
see [[#References|[4]]] and also [[Approximation of functions, extremal problems in function classes|Approximation of functions, extremal problems in function classes]]; [[Best linear method|Best linear method]]).
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  P.P. Korovkin,  "Linear operators and approximation theory" , Hindushtan Publ. Comp.  (1960)  (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  V.K. Dzyadyk,  "Introduction to the theory of uniform approximation of functions by polynomials" , Moscow  (1977)  (In Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  V.M. Tikhomirov,  "Some problems in approximation theory" , Moscow  (1976)  (In Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  N.P. Korneichuk,  "Extremal problems in approximation theory" , Moscow  (1976)  (In Russian)</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top">  V.L. Goncharov,  "The theory of interpolation and approximation of functions" , Moscow  (1954)  (In Russian)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top">  A.F. Timan,  "Theory of approximation of functions of a real variable" , Pergamon  (1963)  (Translated from Russian)</TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top">  J.H. Ahlberg,  E.N. Nilson,  J.F. Walsh,  "The theory of splines and their applications" , Acad. Press  (1967)</TD></TR><TR><TD valign="top">[8]</TD> <TD valign="top">  S.B. Stechkin,  Yu.N. Subbotin,  "Splines in numerical mathematics" , Moscow  (1976)  (In Russian)</TD></TR><TR><TD valign="top">[9]</TD> <TD valign="top">  A.I. Stepanets,  "Uniform approximation by trigonometric polynomials. Linear methods" , Kiev  (1981)  (In Russian)</TD></TR><TR><TD valign="top">[10]</TD> <TD valign="top">  N.P. Korneichuk,  "Splines in approximation theory" , Moscow  (1976)  (In Russian)</TD></TR><TR><TD valign="top">[11]</TD> <TD valign="top">  J.R. Rice,  "The approximation of functions" , '''1. Linear theory''' , Addison-Wesley  (1964)</TD></TR><TR><TD valign="top">[12]</TD> <TD valign="top">  L.L. Schumaker,  "Spline functions, basic theory" , Wiley  (1981)</TD></TR><TR><TD valign="top">[13]</TD> <TD valign="top">  P.J. 
Davis,  "Interpolation and approximation" , Dover, reprint  (1963)</TD></TR></table>
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  P.P. Korovkin,  "Linear operators and approximation theory" , Hindushtan Publ. Comp.  (1960)  (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  V.K. Dzyadyk,  "Introduction to the theory of uniform approximation of functions by polynomials" , Moscow  (1977)  (In Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  V.M. Tikhomirov,  "Some problems in approximation theory" , Moscow  (1976)  (In Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  N.P. Korneichuk,  "Extremal problems in approximation theory" , Moscow  (1976)  (In Russian)</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top">  V.L. Goncharov,  "The theory of interpolation and approximation of functions" , Moscow  (1954)  (In Russian)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top">  A.F. Timan,  "Theory of approximation of functions of a real variable" , Pergamon  (1963)  (Translated from Russian)</TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top">  J.H. Ahlberg,  E.N. Nilson,  J.F. Walsh,  "The theory of splines and their applications" , Acad. Press  (1967)</TD></TR><TR><TD valign="top">[8]</TD> <TD valign="top">  S.B. Stechkin,  Yu.N. Subbotin,  "Splines in numerical mathematics" , Moscow  (1976)  (In Russian)</TD></TR><TR><TD valign="top">[9]</TD> <TD valign="top">  A.I. Stepanets,  "Uniform approximation by trigonometric polynomials. Linear methods" , Kiev  (1981)  (In Russian)</TD></TR><TR><TD valign="top">[10]</TD> <TD valign="top">  N.P. Korneichuk,  "Splines in approximation theory" , Moscow  (1976)  (In Russian)</TD></TR><TR><TD valign="top">[11]</TD> <TD valign="top">  J.R. Rice,  "The approximation of functions" , '''1. Linear theory''' , Addison-Wesley  (1964)</TD></TR><TR><TD valign="top">[12]</TD> <TD valign="top">  L.L. Schumaker,  "Spline functions, basic theory" , Wiley  (1981)</TD></TR><TR><TD valign="top">[13]</TD> <TD valign="top">  P.J. 
Davis,  "Interpolation and approximation" , Dover, reprint  (1963)</TD></TR></table>
 
 
  
 
====Comments====
 
====Comments====
The fact that three test functions are enough for uniform convergence of a sequence of positive linear operators from <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980166.png" /> into <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980167.png" /> (or from <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980168.png" /> into <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/a/a012/a012980/a012980169.png" />) can be found in, e.g., [[#References|[a1]]], Chapt. 3, Sect. 3. The theorem is a generalization of the [[Bernstein theorem|Bernstein theorem]], and was obtained by H. Bohman [[#References|[a2]]]. For integral operators it was obtained by P.P. Korovkin (see also [[#References|[1]]]).
+
The fact that three test functions are enough for uniform convergence of a sequence of positive linear operators from $  C [ a , b ] $
 +
into $  A _ {n} $(
 +
or from $  \widetilde{C}  $
 +
into $  T _ {n} $)  
 +
can be found in, e.g., [[#References|[a1]]], Chapt. 3, Sect. 3. The theorem is a generalization of the [[Bernstein theorem|Bernstein theorem]], and was obtained by H. Bohman [[#References|[a2]]]. For integral operators it was obtained by P.P. Korovkin (see also [[#References|[1]]]).
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  E.W. Cheney,  "Introduction to approximation theory" , Chelsea, reprint  (1982)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  H. Bohman,  "On approximation of continuous and of analytic functions"  ''Arkiv Mat.'' , '''2'''  (1952)  pp. 43–56</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top">  H. Kiesewetter,  "Vorlesungen über lineare Approximation" , Deutsch. Verlag Wissenschaft.  (1973)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top">  R.A. DeVore,  "The approximation of continuous functions by positive linear operators" , Springer  (1972)</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top">  E.W. Cheney,  G.R. Hobby,  P.D. Morris,  F. Schurer,  D.E. Wubert,  "On the minimal property of the Fourier projection"  ''Trans. Amer. Math. Soc.'' , '''143'''  (1969)  pp. 249–258</TD></TR></table>
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  E.W. Cheney,  "Introduction to approximation theory" , Chelsea, reprint  (1982)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  H. Bohman,  "On approximation of continuous and of analytic functions"  ''Arkiv Mat.'' , '''2'''  (1952)  pp. 43–56</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top">  H. Kiesewetter,  "Vorlesungen über lineare Approximation" , Deutsch. Verlag Wissenschaft.  (1973)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top">  R.A. DeVore,  "The approximation of continuous functions by positive linear operators" , Springer  (1972)</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top">  E.W. Cheney,  G.R. Hobby,  P.D. Morris,  F. Schurer,  D.E. Wubert,  "On the minimal property of the Fourier projection"  ''Trans. Amer. Math. Soc.'' , '''143'''  (1969)  pp. 249–258</TD></TR></table>

Latest revision as of 18:47, 5 April 2020


Approximation methods defined by linear operators. If in a linear normed function space $ X $ a linear manifold (linear subspace) $ \mathfrak N $ is chosen as approximating set, then any linear operator $ U $ which transforms a function $ f \in X $ into a function $ U ( f , t ) = ( U f ) (t) \in \mathfrak N $ so that

$$ U ( \alpha _ {1} f _ {1} + \alpha _ {2} f _ {2} , t ) = \ \alpha _ {1} U ( f _ {1} , t ) + \alpha _ {2} U ( f _ {2} , t ) $$

(where $ \alpha _ {1} $ and $ \alpha _ {2} $ are arbitrary numbers), defines a linear approximation method for functions in $ X $ by means of functions in $ \mathfrak N $. A linear approximation method is called projective if $ U ( f , t ) = f (t) $ for all $ f $ in $ \mathfrak N $; it is called positive if $ U ( f , t ) \geq 0 $ for non-negative functions $ f $.

The finite-dimensional case is the most interesting one. Here $ \mathfrak N = \mathfrak N _ {N} $ is an $ N $-dimensional subspace, and then

$$ \tag{1 } U ( f , t ) = \ U _ {N} ( f , t ) = \ \sum _ { k=1 } ^ { N } c _ {k} (f) \phi _ {k} (t) , $$

where $ \{ \phi _ {k} (t) \} _ {1} ^ {N} $ is a basis of $ \mathfrak N _ {N} $ and $ c _ {k} $ are certain linear functionals defined on $ X $. The choice of the linearly independent system $ \{ \phi _ {k} (t) \} _ {1} ^ {N} $ and the set of functionals $ \{ c _ {k} \} _ {1} ^ {N} $ is determined by the information about the function that one intends to use while constructing the linear method. If $ c _ {k} (f) = f ( t _ {k} ) $, where $ \{ t _ {k} \} _ {1} ^ {N} $ is a fixed system of points in the domain of definition of $ f $, and if $ \phi _ {k} ( t _ {i} ) = 0 $ when $ i \neq k $, while $ \phi _ {k} ( t _ {k} ) = 1 $, then $ U _ {N} ( f , t _ {k} ) = f ( t _ {k} ) $, $ k = 1, \dots, N $, and one has an interpolation method (for instance, Lagrange's interpolation polynomial, or an interpolation spline). If $ X = H $ is a Hilbert space of functions and $ c _ {k} (f) $ are the Fourier coefficients of a function $ f $ with respect to an orthonormal system $ \{ \phi _ {k} (t) \} $, then the sum at the right-hand side of (1) yields a linear method of orthogonal projection of $ X $ onto $ \mathfrak N _ {N} $; in this case

$$ \| f - U _ {N} (f) \| _ {H} = \ \inf _ {\alpha _ {k} } \ \left \| f - \sum _ { k=1 } ^ { N } \alpha _ {k} \phi _ {k} \ \right \| _ {H} . $$

Thus, this method yields the best approximation of $ f $ by linear combinations of the functions $ \phi _ {k} $.
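The optimality of the orthogonal-projection method can be checked numerically. The sketch below is illustrative only (the uniform grid, rectangle-rule quadrature and the trigonometric system are assumptions, not part of the article): the discrete residual of the Fourier projection is orthogonal to the subspace, so perturbing any coefficient can only increase the $ L _ {2} $ error.

```python
import numpy as np

# Illustrative sketch (not from the article): the orthogonal projection onto
# T_3 = span{1, cos t, sin t, ..., cos 3t, sin 3t} realizes the best
# L2-approximation, so perturbing any coefficient increases the error.
m = 4096
dt = 2.0 * np.pi / m
t = np.arange(m) * dt                      # uniform grid on [0, 2*pi)

def l2_norm(g):
    return np.sqrt((g * g).sum() * dt)     # rectangle-rule L2[0, 2*pi] norm

def fourier_sum(f_vals, n):
    # Orthogonal projection of f onto T_n via its Fourier coefficients
    s = np.full_like(f_vals, f_vals.sum() * dt / (2.0 * np.pi))
    for k in range(1, n + 1):
        a_k = (f_vals * np.cos(k * t)).sum() * dt / np.pi
        b_k = (f_vals * np.sin(k * t)).sum() * dt / np.pi
        s += a_k * np.cos(k * t) + b_k * np.sin(k * t)
    return s

f = np.exp(np.sin(t))                      # a smooth 2*pi-periodic function
best = l2_norm(f - fourier_sum(f, 3))
rival = l2_norm(f - (fourier_sum(f, 3) + 0.05 * np.cos(2 * t)))
assert best < rival                        # the projection is optimal in L2
```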

In the theory of linear approximation methods, special attention is given to problems of convergence. Let $ X $ be a Banach space, let $ \{ \phi _ {1} (t) , \phi _ {2} (t) ,\dots \} $ be a linearly independent system of functions in $ X $, let $ \mathfrak N _ {N} $ be the subspace generated by the first $ N $ functions of this system, $ N = 1, 2, \dots $, and let $ U _ {N} $ be bounded linear operators from $ X $ into $ \mathfrak N _ {N} $, $ N = 1, 2, \dots $. The convergence $ U _ {N} ( f , t ) \rightarrow f (t) $ (in the sense that $ \| U _ {N} (f) - f \| _ {X} \rightarrow 0 $ as $ N \rightarrow \infty $) for any function $ f $ in $ X $ holds if and only if: 1) the sequence of norms $ \| U _ {N} \| $ of the operators $ U _ {N} $ is bounded (see Banach–Steinhaus theorem); and 2) $ U _ {N} ( f , t ) \rightarrow f(t) $ for all functions $ f $ in a set $ A $ that is everywhere dense in $ X $. These conditions are satisfied, in particular, in the space of $ 2 \pi $-periodic functions $ \widetilde{L} _ {p} = \widetilde{L} _ {p} [ 0 , 2 \pi ] $, with $ 1 < p < \infty $, for the operators $ S _ {n} $ defined by the trigonometric Fourier sums

$$ \tag{2 } S _ {n} ( f , t ) = \ \frac{a _ {0} }{2} + \sum _ { k=1 } ^ { n } ( a _ {k} \cos k t + b _ {k} \sin k t ) , $$

$$ n = 1, 2, \dots $$

of $ f $. As the set $ A $ on which 2) is satisfied one can take the set of all trigonometric polynomials. If $ X $ is the space $ \widetilde{C} = \widetilde{C} [ 0 , 2 \pi ] $ (of continuous functions on the whole real axis with period $ 2 \pi $) or if $ X $ equals $ \widetilde{L} _ {1} $, then $ \| S _ {n} \| \sim \mathop{\rm ln} n $ as $ n \rightarrow \infty $ (see Lebesgue constants). Consequently, there are in $ \widetilde{C} $ and in $ \widetilde{L} _ {1} $ functions $ f $ to which the sequence $ \{ S _ {n} ( f , t ) \} $ does not converge in the appropriate metric. Since in a Banach space of functions $ X $ with a norm that is invariant with respect to translations the linear operator $ U _ {n} ^ \perp $ projecting $ X $ onto the subspace $ T _ {n} $ of trigonometric polynomials of degree at most $ n $ satisfies the inequality $ \| U _ {n} ^ \perp \| \geq \| S _ {n} \| $ (see [3]), the divergence in $ \widetilde{C} $ and $ \widetilde{L} _ {1} $ also holds for the sequence $ \{ U _ {n} ^ \perp \} $. In particular, this is true in $ \widetilde{C} $ for the sequence of Lagrange interpolation operators with any triangular matrix of interpolation nodes. Similar facts of a negative character also hold in the non-periodic case for the linear projection operators of the spaces $ C [ a , b ] $ and $ L _ {1} [ a , b ] $ onto the subspaces $ A _ {n} $, $ n = 1, 2, \dots $, of algebraic polynomials of degree at most $ n $.
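The growth $ \| S _ {n} \| \sim \mathop{\rm ln} n $ can be observed directly, since on $ \widetilde{C} $ the norm of $ S _ {n} $ equals the Lebesgue constant, the mean of the absolute value of the Dirichlet kernel. The following check is a numerical sketch only (midpoint-rule quadrature is an assumption made here):

```python
import numpy as np

# Sketch only: ||S_n|| on the space of continuous 2*pi-periodic functions is
# the Lebesgue constant L_n = (1/(2*pi)) * int_0^{2*pi} |D_n(t)| dt, where
# D_n(t) = sin((n + 1/2) t) / sin(t/2) is the Dirichlet kernel.
def lebesgue_constant(n, m=200000):
    dt = 2.0 * np.pi / m
    t = (np.arange(m) + 0.5) * dt          # midpoint rule avoids t = 0, 2*pi
    dirichlet = np.sin((n + 0.5) * t) / np.sin(0.5 * t)
    return np.abs(dirichlet).sum() * dt / (2.0 * np.pi)

consts = [lebesgue_constant(n) for n in (1, 10, 100)]
assert consts[0] < consts[1] < consts[2]   # unbounded, logarithmic growth
assert 3.0 < consts[2] < 3.3               # L_100 near (4/pi^2) ln 100 + 1.27
```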

The divergence of the sequence (2) for some functions in $ \widetilde{C} $ made it necessary to introduce and examine various averages of Fourier sums that do not have this disadvantage. Such are, for instance, the Fejér summation method and the Bernstein–Rogosinski summation method, which are particular cases (with a specific choice of the numerical coefficients $ \lambda _ {k} ^ {(n)} $) of polynomials of the form

$$ \tag{3 } U _ {n} ^ \lambda ( f , t ) = \ \frac{a _ {0} }{2} + \sum _ { k=1 } ^ { n } \lambda _ {k} ^ {(n)} ( a _ {k} \cos kt + b _ {k} \sin kt ) , $$

$$ n = 1, 2, \dots . $$

Since

$$ \tag{4 } U _ {n} ^ \lambda ( f , t ) = \ \frac{1}{\pi} \int\limits _ { 0 } ^ { 2 \pi } \left [ \frac{1}{2} + \sum _ { k=1 } ^ { n } \lambda _ {k} ^ {(n)} \cos k ( t-u ) \right ] f (u) du , $$

the averages (3) belong to the very wide class of linear approximation methods which can be represented in the form of a convolution of $ f $ with some (singular) kernel whose properties (in this case, the properties of the triangular matrix of numbers $ \{ \lambda _ {k} ^ {(n)} \} $) determine the answer to the problem of convergence. If

$$ \lim\limits _ {n\rightarrow \infty } \lambda _ {k} ^ {(n)} = 1 ,\ \ k = 1, 2, \dots, $$

then the sums (3) converge uniformly when $ f $ is a trigonometric polynomial, and so 2) is satisfied. For the boundedness of the norm of $ U _ {n} ^ \lambda $ as an operator from $ \widetilde{C} $ into $ \widetilde{C} $, it is necessary that

$$ \frac{\lambda _ {1} ^ {(n)} }{n} + \dots + \frac{\lambda _ {n} ^ {(n)} }{1} = \ O (1) $$

and $ \lambda _ {k} ^ {(n)} = O (1) $, uniformly in $ n $ and $ k = 1, \dots, n $. These conditions are sufficient as well for the boundedness of the sequence $ \{ \| U _ {n} ^ \lambda \| \} $ if the matrix $ \{ \lambda _ {k} ^ {(n)} \} $ is subject to some additional restrictions (for instance, convexity or concavity of the columns). Similar to (3), using the matrix of coefficients $ \{ \lambda _ {k} ^ {(n)} \} $ one can also construct averages by means of Fourier–Chebyshev sums (for $ f \in C [ -1, 1 ] $) and also by means of the Lagrange interpolation polynomials with nodes $ 2k \pi / ( 2n + 1 ) $ (in the periodic case) or with nodes $ \cos [ ( 2k -1 ) \pi / ( 2n + 2 ) ] $ on $ [ -1 , 1 ] $ (see, for instance, [6]).
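As an illustrative sketch (the quadrature grid and the test function are assumptions, not from the article), the Fejér method is the average (3) with $ \lambda _ {k} ^ {(n)} = 1 - k / ( n + 1 ) $, and it converges uniformly even for a merely continuous $ f $:

```python
import numpy as np

# Sketch: U_n^lambda(f, t) from formula (3), with Fourier coefficients
# approximated by the rectangle rule on a uniform grid (an assumption here).
m = 8192
dt = 2.0 * np.pi / m
t = np.arange(m) * dt

def averaged_sum(f_vals, n, lam):
    a0 = f_vals.sum() * dt / np.pi
    s = np.full_like(f_vals, a0 / 2.0)
    for k in range(1, n + 1):
        a_k = (f_vals * np.cos(k * t)).sum() * dt / np.pi
        b_k = (f_vals * np.sin(k * t)).sum() * dt / np.pi
        s += lam(k, n) * (a_k * np.cos(k * t) + b_k * np.sin(k * t))
    return s

fejer = lambda k, n: 1.0 - k / (n + 1.0)       # Fejér coefficients
f = np.abs(np.sin(t)) ** 0.5                   # continuous, not differentiable
errs = [np.max(np.abs(f - averaged_sum(f, n, fejer))) for n in (4, 16, 64)]
assert errs[2] < 0.6 * errs[0]                 # uniform error shrinks with n
```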

The question of convergence of positive linear operators $ U _ {n} ^ {+} $ acting from $ C [ a , b ] $ into $ A _ {n} $ or from $ \widetilde{C} $ into $ T _ {n} $ (in particular, of operators of the form (4) with a positive kernel) is solved in terms of three test functions (see [1]): for the uniform convergence of the sequence $ U _ {n} ^ {+} ( f , t ) $ to $ f (t) \in C [ a , b ] $ or to $ f (t) \in \widetilde{C} $ it is necessary and sufficient that this occurs for the functions $ 1 $, $ t $ and $ t ^ {2} $, or, respectively, for the functions $ 1 $, $ \sin t $, $ \cos t $.

The investigation of the error of approximation by a linear approximation method leads, mostly, to studies on the rate of convergence of $ U _ {N} ( f , t ) $ to $ f (t) $, to estimates of the error in terms of the difference-differential properties of the approximated function, and to answering the question of how the linear approximation method behaves when the smoothness properties (of the approximated function) are improved.

In the examination of approximating properties of the linear approximation method (1), a natural role is played by the best approximation $ E ( f, \mathfrak N _ {N} ) $ of $ f $ by elements of the subspace $ \mathfrak N _ {N} $. The Lebesgue inequality

$$ \tag{5 } \| f - S _ {n} (f) \| _ {\widetilde{C} } \leq \ E ( f , T _ {n} ) _ {\widetilde{C} } ( \mathop{\rm ln} n + 3 ) $$

shows, in connection with the Jackson inequality

$$ E ( f , T _ {n} ) _ {\widetilde{C} } \leq \ \frac{M}{n ^ {r} } \omega \left ( f ^ { (r) } , { \frac{1}{n} } \right ) $$

(where $ \omega ( g , \delta ) $ is the modulus of continuity of a function $ g \in \widetilde{C} $), that, although the order of approximation by Fourier sums is slightly worse than the best approximation ($ \mathop{\rm ln} n $ in (5) cannot be replaced by a constant), these sums are affected by any increase of the order of differentiability of the function to be approximated. For some linear approximation methods the order of approximation cannot be higher than a certain quantity, no matter how many derivatives the function $ f $ may have (the phenomenon of saturation). Thus, the order of approximation by positive linear polynomial operators $ U _ {n} ^ {+} $ cannot be higher than $ O ( n ^ {-2} ) $; for Fejér sums the saturation order is $ O ( n ^ {-1} ) $; and for Bernstein–Rogosinski sums it is $ O ( n ^ {-2} ) $. Interpolation splines with a specific choice of the knots (nodes, joints) ensure the best order of approximation not only for the function itself, but also for some of its derivatives, depending on the degree of the polynomials out of which the spline is composed (see [7], [8]).
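Saturation of the Fejér method can be seen in closed form (a small numerical sketch; the choice $ f (t) = \cos t $ is an example assumed here): for this analytic function the Fejér sum is $ ( 1 - 1 / ( n + 1 ) ) \cos t $, so the uniform error equals $ 1 / ( n + 1 ) $, order $ O ( n ^ {-1} ) $ no matter how smooth $ f $ is.

```python
import numpy as np

# Saturation sketch: for f(t) = cos t, S_0 = 0 and S_m = cos t for m >= 1,
# so the Fejér mean sigma_n = (1/(n+1)) * (S_0 + ... + S_n) = (n/(n+1)) cos t.
t = np.linspace(0.0, 2.0 * np.pi, 2001)
for n in (10, 100, 1000):
    fejer_sum = (1.0 - 1.0 / (n + 1.0)) * np.cos(t)   # sigma_n(cos, t)
    err = np.max(np.abs(np.cos(t) - fejer_sum))
    assert abs((n + 1) * err - 1.0) < 1e-9            # error is exactly 1/(n+1)
```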

In some particular cases, for specific linear approximation methods, exact or asymptotically-exact estimates for the errors of approximation have been obtained on some classes of functions. Essentially, the first non-trivial result of this kind was established by A.N. Kolmogorov, who proved in 1935 that

$$ \sup _ {f \in W ^ { r } K } \ \| f - S _ {n} (f) \| _ {\widetilde{C} } = \ \frac{4 K }{\pi ^ {2} } \frac{ \mathop{\rm ln} n }{n ^ {r} } + O ( n ^ {-r} ) , $$

where $ W ^ { r } K $, $ r = 1, 2, \dots $, is the class of functions $ f \in \widetilde{C} $ for which $ f ^ { ( r - 1 ) } (t) $ is absolutely continuous on $ [ 0 , 2 \pi ] $ and for which almost everywhere $ | f ^ { (r) } (t) | \leq K $. Subsequently, results of a similar character were obtained for Fourier sums (and some of their averages) with respect to other important classes of functions, for instance, in terms of upper bounds on the modulus of continuity of the $ r $-th derivative (see [2], [6] and [9]). Special interest is given to linear approximation methods (1) yielding, on some class of functions, the least upper bound of the best approximation by the subspace $ \mathfrak N _ {N} $. Sums of the form (3) have this property for the classes $ W ^ { r } K $, $ r = 1, 2, \dots $, with a specific choice of the $ \lambda _ {k} ^ {(n)} $. For instance, for $ r = 1 $ one should take

$$ \lambda _ {k} ^ {(n)} = \ \frac{k \pi }{2 n + 2 } \ \mathop{\rm ctg} \ \frac{k \pi }{2 n + 2 } ; $$

the property also applies to interpolation splines of order $ r - 1 $ and defect 1 with knots $ k \pi / n $, $ k = 0, 1, \dots $ (see [4] and also Approximation of functions, extremal problems in function classes; Best linear method).
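A small check (illustrative only) of the coefficients $ \lambda _ {k} ^ {(n)} = \frac{k \pi }{2n+2} \mathop{\rm ctg} \frac{k \pi }{2n+2} $ given above for $ r = 1 $: they lie in $ ( 0 , 1 ) $, and for fixed $ k $ they tend to $ 1 $ as $ n \rightarrow \infty $, so the convergence condition $ \lim _ {n} \lambda _ {k} ^ {(n)} = 1 $ stated earlier is satisfied.

```python
import numpy as np

# lambda_k^{(n)} = x * ctg(x) with x = k*pi/(2n+2); since 0 < x < pi/2 and
# tan(x) > x there, all coefficients lie strictly between 0 and 1.
def lam(k, n):
    x = k * np.pi / (2.0 * n + 2.0)
    return x / np.tan(x)                 # x * ctg(x)

for n in (5, 50, 500):
    vals = np.array([lam(k, n) for k in range(1, n + 1)])
    assert np.all((vals > 0.0) & (vals < 1.0))
# For fixed k the coefficient tends to 1 as n grows (x ctg x -> 1 as x -> 0):
assert abs(lam(1, 10**6) - 1.0) < 1e-10
```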

References

[1] P.P. Korovkin, "Linear operators and approximation theory" , Hindustan Publ. Corp. (1960) (Translated from Russian)
[2] V.K. Dzyadyk, "Introduction to the theory of uniform approximation of functions by polynomials" , Moscow (1977) (In Russian)
[3] V.M. Tikhomirov, "Some problems in approximation theory" , Moscow (1976) (In Russian)
[4] N.P. Korneichuk, "Extremal problems in approximation theory" , Moscow (1976) (In Russian)
[5] V.L. Goncharov, "The theory of interpolation and approximation of functions" , Moscow (1954) (In Russian)
[6] A.F. Timan, "Theory of approximation of functions of a real variable" , Pergamon (1963) (Translated from Russian)
[7] J.H. Ahlberg, E.N. Nilson, J.F. Walsh, "The theory of splines and their applications" , Acad. Press (1967)
[8] S.B. Stechkin, Yu.N. Subbotin, "Splines in numerical mathematics" , Moscow (1976) (In Russian)
[9] A.I. Stepanets, "Uniform approximation by trigonometric polynomials. Linear methods" , Kiev (1981) (In Russian)
[10] N.P. Korneichuk, "Splines in approximation theory" , Moscow (1976) (In Russian)
[11] J.R. Rice, "The approximation of functions" , 1. Linear theory , Addison-Wesley (1964)
[12] L.L. Schumaker, "Spline functions, basic theory" , Wiley (1981)
[13] P.J. Davis, "Interpolation and approximation" , Dover, reprint (1963)

Comments

The fact that three test functions are enough for uniform convergence of a sequence of positive linear operators from $ C [ a , b ] $ into $ A _ {n} $ (or from $ \widetilde{C} $ into $ T _ {n} $) can be found in, e.g., [a1], Chapt. 3, Sect. 3. The theorem is a generalization of the Bernstein theorem, and was obtained by H. Bohman [a2]. For integral operators it was obtained by P.P. Korovkin (see also [1]).

References

[a1] E.W. Cheney, "Introduction to approximation theory" , Chelsea, reprint (1982)
[a2] H. Bohman, "On approximation of continuous and of analytic functions" Arkiv Mat. , 2 (1952) pp. 43–56
[a3] H. Kiesewetter, "Vorlesungen über lineare Approximation" , Deutsch. Verlag Wissenschaft. (1973)
[a4] R.A. DeVore, "The approximation of continuous functions by positive linear operators" , Springer (1972)
[a5] E.W. Cheney, G.R. Hobby, P.D. Morris, F. Schurer, D.E. Wulbert, "On the minimal property of the Fourier projection" Trans. Amer. Math. Soc. , 143 (1969) pp. 249–258
How to Cite This Entry:
Approximation of functions, linear methods. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Approximation_of_functions,_linear_methods&oldid=18726
This article was adapted from an original article by N.P. Korneichuk (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article