Linear estimator

From Encyclopedia of Mathematics
{{TEX|done}}

A linear function of observable random variables, used (when the actual values of the observed variables are substituted into it) as an approximate value (estimate) of an unknown parameter of the stochastic model under analysis (see [[Statistical estimator|Statistical estimator]]). The special selection of the class of linear estimators is justified for the following reasons. Linear estimators lend themselves more easily to statistical analysis, in particular to the investigation of consistency, unbiasedness, efficiency, the construction of corresponding confidence intervals, etc. At the same time, in a fairly wide range of cases the search for "optimal" (in a well-defined sense) estimators does not lead beyond the limits of the class of linear estimators. For example, the statistical analysis of a linear regression model (see [[Linear regression|Linear regression]]) of the form
$$
\mathbf Y = \mathbf X \pmb\theta + \epsilon
$$
  
gives as best linear unbiased estimator of the parameter $ \pmb\theta $ the least-squares estimator
$$
\widehat{\pmb\theta} = ( \mathbf X ^ \prime \mathbf X )^{-1} \mathbf X ^ \prime \mathbf Y
$$
  
(linear with respect to the observed values of the random variable $ \mathbf Y $ under investigation). Here $ \mathbf Y $ is the $ n $-dimensional column vector of observed values $ y _ {i} $, $ i = 1, \dots, n $, of the resulting test (random variable) under investigation, $ \mathbf X $ is the $ n \times p $ matrix (of rank $ p $) of observed values $ x _ {i}^{(k)} $, $ i = 1, \dots, n $, $ k = 1, \dots, p $, of the $ p $ non-random factor arguments on which the resulting test $ \mathbf Y $ depends, $ \pmb\theta $ is the $ p $-dimensional column vector of the unknown parameters $ \theta _ {k} $, $ k = 1, \dots, p $, and $ \epsilon $ is the $ n $-dimensional random column vector of residual components, which satisfies the conditions $ {\mathsf E} \epsilon = 0 $ and $ {\mathsf E} ( \epsilon \epsilon^\prime ) = \sigma^{2} I $ ($ I $ being the unit matrix).
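As a concrete numerical illustration (not part of the original article), the least-squares estimator $ \widehat{\pmb\theta} = (\mathbf X^\prime \mathbf X)^{-1} \mathbf X^\prime \mathbf Y $ can be computed with NumPy on simulated data; all variable names and parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 100, 3                          # n observations, p factor arguments
X = rng.normal(size=(n, p))            # n x p design matrix (full column rank p, a.s.)
theta = np.array([2.0, -1.0, 0.5])     # true (normally unknown) parameter vector
eps = rng.normal(scale=0.1, size=n)    # residuals: E eps = 0, E(eps eps') = sigma^2 I
Y = X @ theta + eps                    # linear regression model Y = X theta + eps

# least-squares estimator (X'X)^{-1} X'Y, computed by solving the
# normal equations (X'X) theta_hat = X'Y rather than inverting X'X
theta_hat = np.linalg.solve(X.T @ X, X.T @ Y)
print(theta_hat)  # close to the true theta
```

Note that `theta_hat` is linear in the observed vector `Y`, as the article requires of a linear estimator.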
  
 
====References====
<table>
<TR><TD valign="top">[1]</TD> <TD valign="top"> H. Cramér, "Mathematical methods of statistics" , Princeton Univ. Press (1946)</TD></TR>
<TR><TD valign="top">[2]</TD> <TD valign="top"> C.R. Rao, "Linear statistical inference and its applications" , Wiley (1965)</TD></TR>
<TR><TD valign="top">[3]</TD> <TD valign="top"> S. Zacks, "The theory of statistical inference" , Wiley (1971)</TD></TR>
<TR><TD valign="top">[4]</TD> <TD valign="top"> H. Scheffé, "Analysis of variance" , Wiley (1959)</TD></TR>
</table>
  
 
====Comments====
When $ \epsilon $ is multivariate normally distributed, the least-squares estimator is actually optimal in a wider class of estimators: it is the minimum-variance [[Unbiased estimator|unbiased estimator]]. Even without the normality assumption it is asymptotically efficient ($ n \rightarrow \infty $) (cf. [[Efficiency, asymptotic|Efficiency, asymptotic]]) among all estimators.
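A short derivation (added for clarity, using the notation of the article) of why the least-squares estimator is unbiased: substituting the model $ \mathbf Y = \mathbf X \pmb\theta + \epsilon $ into the estimator and using $ {\mathsf E}\epsilon = 0 $ gives

```latex
\widehat{\pmb\theta}
  = (\mathbf X^\prime \mathbf X)^{-1}\mathbf X^\prime \mathbf Y
  = (\mathbf X^\prime \mathbf X)^{-1}\mathbf X^\prime
    (\mathbf X \pmb\theta + \epsilon)
  = \pmb\theta + (\mathbf X^\prime \mathbf X)^{-1}\mathbf X^\prime \epsilon ,
\qquad
{\mathsf E}\,\widehat{\pmb\theta} = \pmb\theta .
```

The same decomposition, together with $ {\mathsf E}(\epsilon\epsilon^\prime) = \sigma^{2} I $, yields the covariance matrix $ \sigma^{2} (\mathbf X^\prime \mathbf X)^{-1} $ of $ \widehat{\pmb\theta} $.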

Latest revision as of 15:40, 7 July 2024
How to Cite This Entry:
Linear estimator. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Linear_estimator&oldid=11992
This article was adapted from an original article by S.A. Aivazyan (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article