Biased estimator

A statistical estimator whose expectation does not coincide with the value being estimated.

Let $X$ be a random variable taking values in a sampling space $(\mathfrak X, {\mathcal B}, {\mathsf P}_\theta)$, $\theta \in \Theta$, and let $T = T(X)$ be a statistical point estimator of a function $f(\theta)$ defined on the parameter set $\Theta$. It is assumed that the mathematical expectation ${\mathsf E}_\theta \{ T \}$ of $T$ exists. If the function

$$ b(\theta) = {\mathsf E}_\theta \{ T \} - f(\theta) = {\mathsf E}_\theta \{ T - f(\theta) \} $$

is not identically equal to zero, that is, $b(\theta) \not\equiv 0$, then $T$ is called a biased estimator of $f(\theta)$ and $b(\theta)$ is called the bias or systematic error of $T$.
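For the comparisons made below it is convenient to record a standard identity (valid whenever the second moment of $T$ exists): the mean-square error of $T$ is the sum of its variance and the square of its bias,

$$ {\mathsf E}_\theta \{ (T - f(\theta))^2 \} = {\mathsf D}_\theta \{ T \} + b^2(\theta). $$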

Example. Let $X_1, \dots, X_n$ be mutually-independent random variables with the same normal distribution $N_1(a, \sigma^2)$, and let

$$ \overline{X} = \frac{X_1 + \dots + X_n}{n}. $$

Then the statistic

$$ S_n^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \overline{X})^2 $$

is a biased estimator of the variance $\sigma^2$ since

$$ {\mathsf E} \{ S_n^2 \} = \frac{n-1}{n} \sigma^2 = \sigma^2 - \frac{\sigma^2}{n}, $$

that is, the estimator $S_n^2$ has bias $b(\sigma^2) = -\sigma^2/n$. The mean-square error of this biased estimator is

$$ {\mathsf E} \{ (S_n^2 - \sigma^2)^2 \} = \frac{2n-1}{n^2} \sigma^4. $$
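This agrees with the bias-variance identity above: for normal samples ${\mathsf D} \{ S_n^2 \} = 2(n-1)\sigma^4/n^2$, and adding the squared bias $\sigma^4/n^2$ gives

$$ \frac{2(n-1)}{n^2} \sigma^4 + \frac{\sigma^4}{n^2} = \frac{2n-1}{n^2} \sigma^4. $$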

The best unbiased estimator of $\sigma^2$ is the statistic

$$ s_n^2 = \frac{n}{n-1} S_n^2 = \frac{1}{n-1} \sum_{i=1}^{n} (X_i - \overline{X})^2, $$

with mean-square error

$$ {\mathsf D} \{ s_n^2 \} = {\mathsf E} \{ (s_n^2 - \sigma^2)^2 \} = \frac{2}{n-1} \sigma^4. $$
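This variance follows from the fact that, for normal samples, $(n-1) s_n^2 / \sigma^2$ has a chi-squared distribution with $n-1$ degrees of freedom, whose variance is $2(n-1)$, so that

$$ {\mathsf D} \{ s_n^2 \} = \frac{\sigma^4}{(n-1)^2} \cdot 2(n-1) = \frac{2}{n-1} \sigma^4. $$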

When $n > 2$, the mean-square error of the biased estimator $S_n^2$ is less than that of the best unbiased estimator $s_n^2$.
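Indeed, for $n > 2$

$$ \frac{2n-1}{n^2} \sigma^4 < \frac{2}{n-1} \sigma^4, $$

since $(2n-1)(n-1) = 2n^2 - 3n + 1 < 2n^2$.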

There are situations when unbiased estimators do not exist. For example, there is no unbiased estimator for the absolute value $|a|$ of the mathematical expectation $a$ of the normal law $N_1(a, \sigma^2)$, that is, it is only possible to construct biased estimators for $|a|$.
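A brief indication of why (a standard argument): for any statistic $T$ whose expectation exists for all $a$, the function $a \mapsto {\mathsf E}_a \{ T \}$ is analytic in $a$, while $|a|$ is not differentiable at $a = 0$; hence no statistic can have expectation equal to $|a|$ for every $a$.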

References

[1] H. Cramér, "Mathematical methods of statistics", Princeton Univ. Press (1946)