Mean-square approximation of a function
Revision as of 08:00, 6 June 2020


An approximation of a function $ f(t) $ by a function $ \phi(t) $, where the error measure $ \mu_\sigma(f; \phi) $ is defined by the formula

$$ \mu_\sigma(f; \phi) = \int\limits_a^b [f(t) - \phi(t)]^2 \, d\sigma(t), $$

where $ \sigma(t) $ is a non-decreasing function on $ [a, b] $ different from a constant.
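As a concrete illustration (not part of the original article), take the Lebesgue case $ d\sigma(t) = dt $, so the error measure is an ordinary integral; a minimal numerical sketch, with the function names and the sample pair $ f(t) = |t| $, $ \phi(t) = t^2 $ chosen here for illustration:

```python
import numpy as np

def mean_square_error(f, phi, a, b, n=10_001):
    """Trapezoidal-rule approximation of the error measure
    mu_sigma(f; phi) = integral over [a, b] of [f(t) - phi(t)]^2 dsigma(t)
    in the Lebesgue case dsigma(t) = dt."""
    t = np.linspace(a, b, n)
    r = (f(t) - phi(t)) ** 2
    dt = (b - a) / (n - 1)
    # trapezoidal rule: full weight on interior nodes, half weight at the ends
    return dt * (r.sum() - 0.5 * (r[0] + r[-1]))

# Example: distance of phi(t) = t^2 from f(t) = |t| on [-1, 1];
# the exact value of the integral is 1/15.
err = mean_square_error(np.abs, lambda t: t ** 2, -1.0, 1.0)
```

A general $ \sigma $ would instead weight the integrand by the density $ \sigma'(t) $ where it exists.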

Let

$$ u_1(t),\ u_2(t),\ \dots \tag{*} $$

be an orthonormal system of functions on $ [a, b] $ relative to the distribution $ d\sigma(t) $. In the case of a mean-square approximation of the function $ f(t) $ by linear combinations $ \sum_{k=1}^{n} \lambda_k u_k(t) $, the minimal error for every $ n = 1, 2, \dots $ is attained by the sums

$$ \sum_{k=1}^{n} c_k(f) \, u_k(t), $$

where $ c_k(f) $ are the Fourier coefficients of the function $ f(t) $ with respect to the system (*); hence, the best method of approximation is linear.
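This follows from a standard expansion, sketched here for completeness (it is not spelled out in the article): by the orthonormality of the system (*),

$$ \mu_\sigma \Big( f;\ \sum_{k=1}^{n} \lambda_k u_k \Big) = \int\limits_a^b f(t)^2 \, d\sigma(t) - \sum_{k=1}^{n} c_k(f)^2 + \sum_{k=1}^{n} [\lambda_k - c_k(f)]^2, $$

so the left-hand side is minimized precisely when $ \lambda_k = c_k(f) $ for every $ k $.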

References

[1] V.L. Goncharov, "The theory of interpolation and approximation of functions", Moscow (1954) (In Russian)
[2] G. Szegö, "Orthogonal polynomials", Amer. Math. Soc. (1975)

Comments

Cf. also Approximation in the mean; Approximation of functions; Approximation of functions, linear methods; Best approximation; Best approximation in the mean; Best linear method.

References

[a1] E.W. Cheney, "Introduction to approximation theory", McGraw-Hill (1966) Chapts. 4&6
[a2] I.P. Natanson, "Constructive theory of functions", 1–2, F. Ungar (1964–1965) (Translated from Russian)
How to Cite This Entry:
Mean-square approximation of a function. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Mean-square_approximation_of_a_function&oldid=17600
This article was adapted from an original article by N.P. Korneichuk, V.P. Motornyi (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.