Best linear unbiased estimator (BLUE)

Let

$$ \tag{a1} Y = X \beta + \epsilon $$

be a linear regression model, where $ Y $ is a random column vector of $ n $ "measurements", $ X \in \mathbf R ^ {n \times p} $ is a known non-random "plan" matrix, $ \beta \in \mathbf R ^ {p \times 1} $ is an unknown vector of parameters, and $ \epsilon $ is a random "error", or "noise", vector with mean $ {\mathsf E} \epsilon = 0 $ and a possibly unknown non-singular covariance matrix $ V = { \mathop{\rm Var} } ( \epsilon ) $. A model with linear restrictions on $ \beta $ can obviously be reduced to (a1). Without loss of generality, $ { \mathop{\rm rank} } ( X ) = p $.
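
For concreteness, here is a minimal NumPy sketch of the model (a1); the sizes $ n = 50 $, $ p = 3 $, the AR(1)-type covariance $ V $, and the particular $ \beta $ are illustrative choices, not part of the article.

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 50, 3                       # illustrative sizes (not from the article)
X = rng.normal(size=(n, p))        # "plan" matrix, full rank p with probability 1
beta = np.array([1.0, -2.0, 0.5])  # "unknown" parameter vector, fixed here for the demo

# A non-singular covariance matrix V for the noise (AR(1)-style correlation).
rho = 0.6
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# Draw eps with E[eps] = 0 and Var(eps) = V, then form Y = X beta + eps.
L = np.linalg.cholesky(V)
eps = L @ rng.normal(size=n)
Y = X @ beta + eps
```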

Let $ K \in \mathbf R ^ {k \times p} $; a linear unbiased estimator (LUE) of $ K \beta $ is a statistical estimator of the form $ MY $ for some non-random matrix $ M \in \mathbf R ^ {k \times n} $ such that $ {\mathsf E} MY = K \beta $ for all $ \beta \in \mathbf R ^ {p \times 1} $, i.e., $ MX = K $. A linear unbiased estimator $ M _ {*} Y $ of $ K \beta $ is called a best linear unbiased estimator (BLUE) of $ K \beta $ if $ { \mathop{\rm Var} } ( M _ {*} Y ) \leq { \mathop{\rm Var} } ( MY ) $ for all linear unbiased estimators $ MY $ of $ K \beta $, i.e., if $ { \mathop{\rm Var} } ( a M _ {*} Y ) \leq { \mathop{\rm Var} } ( a MY ) $ for all linear unbiased estimators $ MY $ of $ K \beta $ and all $ a \in \mathbf R ^ {1 \times k} $.
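
Continuing the sketch above, one can check the unbiasedness condition $ MX = K $ numerically; the particular $ K $ and $ M $ below are arbitrary illustrations.

```python
# Take K to pick out the first two coordinates of beta; one LUE of K beta
# is M = K (X^T X)^{-1} X^T, which satisfies M X = K.
K = np.eye(p)[:2]                      # K in R^{2 x p}
M = K @ np.linalg.solve(X.T @ X, X.T)  # M in R^{2 x n}
print(np.allclose(M @ X, K))           # True: the unbiasedness condition holds

# E[M Y] = M X beta = K beta, checked by averaging over many noise draws.
draws = np.stack([M @ (X @ beta + L @ rng.normal(size=n)) for _ in range(20000)])
print(draws.mean(axis=0), K @ beta)    # the empirical mean is close to K beta
```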

Since it is assumed that $ { \mathop{\rm rank} } ( X ) = p $, there exists a unique best linear unbiased estimator of $ K \beta $ for any $ K $. It is then given by the formula $ K {\widehat \beta } $, where $ {\widehat \beta } = {\widehat \beta } _ {V} = ( X ^ {T} V ^ {-1} X ) ^ {-1} X ^ {T} V ^ {-1} Y $, which by the Gauss–Markov theorem (cf. Least squares, method of) coincides with the least squares estimator of $ \beta $, defined as $ \mathop{\rm arg\,min} _ \beta ( Y - X \beta ) ^ {T} V ^ {-1} ( Y - X \beta ) $; as usual, $ {} ^ {T} $ stands for transposition.
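
Continuing the simulation, here is a sketch of the BLUE $ {\widehat \beta } _ {V} $ together with a numerical check of the Gauss–Markov comparison against the ordinary least squares estimator, itself a LUE of $ \beta $ with $ M = ( X ^ {T} X ) ^ {-1} X ^ {T} $:

```python
# BLUE (generalized least squares): beta_hat = (X^T V^{-1} X)^{-1} X^T V^{-1} Y.
Vinv = np.linalg.inv(V)
beta_blue = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ Y)

# Ordinary least squares is a LUE with M = (X^T X)^{-1} X^T (note M X = I).
M_ols = np.linalg.solve(X.T @ X, X.T)
beta_ols = M_ols @ Y

# Exact covariance matrices of the two estimators under Var(eps) = V.
cov_blue = np.linalg.inv(X.T @ Vinv @ X)
cov_ols = M_ols @ V @ M_ols.T

# Gauss–Markov: cov_ols - cov_blue is non-negative definite, i.e. its
# smallest eigenvalue is >= 0 (up to floating-point rounding).
print(np.linalg.eigvalsh(cov_ols - cov_blue).min() >= -1e-10)
```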

Because $ V = { \mathop{\rm Var} } ( \epsilon ) $ is normally not known, Yu.A. Rozanov [a2] suggested using a "pseudo-best" estimator $ {\widehat \beta } _ {W} $ in place of $ {\widehat \beta } _ {V} $, with an appropriately chosen $ W $. This idea was further developed by A.M. Samarov [a3] and I.F. Pinelis [a4]. In particular, Pinelis obtained duality theorems for the minimax risk and equations for the minimax solutions, with $ V $ assumed to belong to an arbitrary known convex set $ {\mathcal V} $ of positive-definite $ ( n \times n ) $-matrices, with respect to the general quadratic risk function of the form

$$ R ( V,W ) = {\mathsf E} _ {V} ( {\widehat \beta } _ {W} - \beta ) ^ {T} S ( {\widehat \beta } _ {W} - \beta ) , $$

$$ V \in {\mathcal V}, \quad W \in {\mathcal V}, $$

where $ S $ is any non-negative-definite $ ( p \times p ) $-matrix and $ {\mathsf E} _ {V} $ stands for the expectation assuming $ { \mathop{\rm Var} } ( \epsilon ) = V $. Asymptotic versions of these results have also been given by Pinelis, for the case when the "noise" is a second-order stationary stochastic process with an unknown spectral density belonging to an arbitrary, but known, convex class of spectral densities, and by Samarov, for the case of contamination classes.
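
Since $ {\widehat \beta } _ {W} = A _ {W} Y $ with $ A _ {W} = ( X ^ {T} W ^ {-1} X ) ^ {-1} X ^ {T} W ^ {-1} $ satisfies $ A _ {W} X = I $, the estimator is unbiased for every $ W $, and the risk reduces to $ R ( V, W ) = { \mathop{\rm tr} } ( S A _ {W} V A _ {W} ^ {T} ) $, independently of $ \beta $. Here is a sketch of this computation, continuing the simulation above; the choice $ S = I $ (giving the mean squared error) is illustrative:

```python
def quadratic_risk(V, W, X, S):
    """R(V, W) = E_V[(beta_W - beta)^T S (beta_W - beta)] for the
    pseudo-best estimator beta_W = A_W Y, A_W = (X^T W^-1 X)^-1 X^T W^-1.
    Since A_W X = I, beta_W - beta = A_W eps, so the risk equals
    tr(S A_W V A_W^T) regardless of the true beta."""
    Winv = np.linalg.inv(W)
    A = np.linalg.solve(X.T @ Winv @ X, X.T @ Winv)
    return np.trace(S @ A @ V @ A.T)

S = np.eye(p)                              # S = I gives the mean squared error
print(quadratic_risk(V, V, X, S))          # risk of the true BLUE (W = V)
print(quadratic_risk(V, np.eye(n), X, S))  # risk when W = I (OLS); never smaller
```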

References

[a1] C.R. Rao, "Linear statistical inference and its applications", Wiley (1965)
[a2] Yu.A. Rozanov, "On a new class of estimates", Multivariate Analysis, 2, Acad. Press (1969) pp. 437–441
[a3] A.M. Samarov, "Robust spectral regression", Ann. Math. Stat., 15 (1987) pp. 99–111
[a4] I.F. Pinelis, "On the minimax estimation of regression", Th. Probab. Appl., 35 (1990) pp. 500–512
How to Cite This Entry:
Best linear unbiased estimator. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Best_linear_unbiased_estimator&oldid=51758
This article was adapted from an original article by I. Pinelis (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.