
Best linear unbiased estimator



BLUE

Let

$$Y = X \beta + \epsilon \tag{a1}$$

be a linear regression model, where $Y$ is a random column vector of $n$ "measurements", $X \in \mathbf{R}^{n \times p}$ is a known non-random "plan" matrix, $\beta \in \mathbf{R}^{p \times 1}$ is an unknown vector of parameters, and $\epsilon$ is a random "error", or "noise", vector with mean $\mathsf{E}\epsilon = 0$ and a possibly unknown non-singular covariance matrix $V = \operatorname{Var}(\epsilon)$. A model with linear restrictions on $\beta$ can obviously be reduced to (a1). Without loss of generality, $\operatorname{rank}(X) = p$.
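The model (a1) is easy to simulate numerically. The following is a minimal sketch; the plan matrix, parameter vector, and AR(1)-type noise covariance are illustrative choices only, not part of the article:

```python
import numpy as np

rng = np.random.default_rng(0)

n, p = 50, 2
X = np.column_stack([np.ones(n), np.arange(n)])  # plan matrix, rank(X) = p
beta = np.array([1.0, 0.5])                      # "true" parameter vector

# Non-singular noise covariance V; here an AR(1)-type correlation matrix,
# chosen purely for illustration.
rho = 0.7
V = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))

# Draw epsilon with mean 0 and Var(epsilon) = V, and form Y as in (a1).
eps = rng.multivariate_normal(np.zeros(n), V)
Y = X @ beta + eps
```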

Let $K \in \mathbf{R}^{k \times p}$; a linear unbiased estimator (LUE) of $K\beta$ is a statistical estimator of the form $MY$ for some non-random matrix $M \in \mathbf{R}^{k \times n}$ such that $\mathsf{E}MY = K\beta$ for all $\beta \in \mathbf{R}^{p \times 1}$; since $\mathsf{E}MY = MX\beta$, this is equivalent to $MX = K$. A linear unbiased estimator $M_*Y$ of $K\beta$ is called a best linear unbiased estimator (BLUE) of $K\beta$ if $\operatorname{Var}(M_*Y) \leq \operatorname{Var}(MY)$ for all linear unbiased estimators $MY$ of $K\beta$, i.e., if $\operatorname{Var}(aM_*Y) \leq \operatorname{Var}(aMY)$ for all linear unbiased estimators $MY$ of $K\beta$ and all $a \in \mathbf{R}^{1 \times k}$.
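As a concrete check, take $K = I_p$: the ordinary least-squares matrix $M = (X^TX)^{-1}X^T$ satisfies $MX = I_p$, so $MY$ is one LUE of $\beta$. Continuing the illustrative sketch above:

```python
# The OLS matrix is one concrete LUE of beta (here K is the p x p identity).
M_ols = np.linalg.solve(X.T @ X, X.T)     # (X^T X)^{-1} X^T
assert np.allclose(M_ols @ X, np.eye(p))  # MX = K, i.e. unbiasedness
```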

Since it is assumed that $\operatorname{rank}(X) = p$, there exists a unique best linear unbiased estimator of $K\beta$ for any $K$. It is given by the formula $K\widehat\beta$, where $\widehat\beta = \widehat\beta_V = (X^T V^{-1} X)^{-1} X^T V^{-1} Y$, which, by the Gauss–Markov theorem (cf. Least squares, method of), coincides with the least-squares estimator of $\beta$, defined as $\operatorname{arg\,min}_\beta\, (Y - X\beta)^T V^{-1} (Y - X\beta)$; as usual, ${}^T$ stands for transposition.
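Continuing the sketch, $\widehat\beta_V$ can be computed directly, and the Gauss–Markov ordering $\operatorname{Var}(\widehat\beta_V) \leq \operatorname{Var}(M_{\mathrm{ols}}Y)$ verified numerically (all variable names are from the illustrative sketch above):

```python
# BLUE of beta: the generalized least-squares estimator beta_hat_V.
Vinv = np.linalg.inv(V)
XtVinv = X.T @ Vinv
beta_hat_V = np.linalg.solve(XtVinv @ X, XtVinv @ Y)

# Covariance matrices of the two linear unbiased estimators:
var_blue = np.linalg.inv(XtVinv @ X)  # Var(beta_hat_V) = (X^T V^{-1} X)^{-1}
var_ols = M_ols @ V @ M_ols.T         # Var(M_ols Y)

# Gauss-Markov: var_ols - var_blue is non-negative definite.
assert np.linalg.eigvalsh(var_ols - var_blue).min() >= -1e-10
```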

Because $V = \operatorname{Var}(\epsilon)$ is normally not known, Yu.A. Rozanov [a2] suggested using a "pseudo-best" estimator $\widehat\beta_W$ in place of $\widehat\beta_V$, with an appropriately chosen $W$. This idea has been further developed by A.M. Samarov [a3] and I.F. Pinelis [a4]. In particular, Pinelis has obtained duality theorems for the minimax risk and equations for the minimax solutions, with $V$ assumed to belong to an arbitrary known convex set $\mathcal{V}$ of positive-definite $(n \times n)$-matrices, with respect to the general quadratic risk function of the form

$$R(V, W) = \mathsf{E}_V\, (\widehat\beta_W - \beta)^T S\, (\widehat\beta_W - \beta), \qquad V \in \mathcal{V},\ W \in \mathcal{V},$$

where $S$ is any non-negative-definite $(p \times p)$-matrix and $\mathsf{E}_V$ stands for the expectation under the assumption $\operatorname{Var}(\epsilon) = V$. Asymptotic versions of these results have also been given by Pinelis for the case when the "noise" is a second-order stationary stochastic process with an unknown spectral density belonging to an arbitrary, but known, convex class of spectral densities, and by Samarov in the case of contamination classes.
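The risk $R(V, W)$ admits a closed form: since $\widehat\beta_W - \beta = A_W\epsilon$ with $A_W = (X^T W^{-1} X)^{-1} X^T W^{-1}$, one has $R(V, W) = \operatorname{tr}(S\, A_W V A_W^T)$. A sketch, again reusing the illustrative quantities defined earlier:

```python
# Risk R(V, W) = E_V (beta_hat_W - beta)^T S (beta_hat_W - beta).
# Since beta_hat_W - beta = A_W eps with A_W = (X^T W^{-1} X)^{-1} X^T W^{-1},
# the risk reduces to the trace formula tr(S A_W V A_W^T).
def quadratic_risk(V, W, X, S):
    Winv = np.linalg.inv(W)
    A = np.linalg.solve(X.T @ Winv @ X, X.T @ Winv)
    return np.trace(S @ A @ V @ A.T)

# With W = V the pseudo-best estimator is the BLUE, so any other choice of
# W can only give an equal or larger risk:
S = np.eye(p)
W = np.eye(n)  # an illustrative misspecified weight matrix
assert quadratic_risk(V, V, X, S) <= quadratic_risk(V, W, X, S) + 1e-12
```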

References

[a1] C.R. Rao, "Linear statistical inference and its applications", Wiley (1965)
[a2] Yu.A. Rozanov, "On a new class of estimates", in: Multivariate Analysis, 2, Acad. Press (1969) pp. 437–441
[a3] A.M. Samarov, "Robust spectral regression", Ann. Statist., 15 (1987) pp. 99–111
[a4] I.F. Pinelis, "On the minimax estimation of regression", Theory Probab. Appl., 35 (1990) pp. 500–512