Consistent estimator

An abbreviated form of the term "consistent sequence of estimators", applied to a sequence of statistical estimators that converges to the quantity being estimated.

In probability theory there are several different notions of convergence, of which the most important for the theory of statistical estimation are convergence in probability and convergence with probability 1. If a sequence of statistical estimators converges in probability to the quantity being estimated, the sequence is called "weakly consistent", or simply "consistent"; the term "strongly consistent" is reserved for a sequence of estimators that converges with probability 1 to the quantity being estimated.
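
In symbols: a sequence of estimators $T_n$, $n = 1, 2, \dots$, of a parameter $\theta$ is weakly consistent if for every $\epsilon > 0$

$$ {\mathsf P} \{ | T_n - \theta | > \epsilon \} \rightarrow 0 \quad \textrm{ as } n \rightarrow \infty, $$

and strongly consistent if

$$ {\mathsf P} \left\{ \lim_{n \rightarrow \infty} T_n = \theta \right\} = 1. $$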

Example 1)

Let $X_1, \dots, X_n$ be independent random variables with the same normal distribution $N(a, \sigma^2)$. Then the statistics

$$ \overline{X}_n = \frac{1}{n} ( X_1 + \dots + X_n ) $$

and

$$ S_n^2 = \frac{1}{n} \sum_{i = 1}^{n} ( X_i - \overline{X}_n )^2 $$

are consistent estimators for the parameters $a$ and $\sigma^2$, respectively.
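
The consistency of $\overline{X}_n$ can be verified directly by Chebyshev's inequality: since ${\mathsf E} \overline{X}_n = a$ and ${\mathsf D} \overline{X}_n = \sigma^2 / n$, one has

$$ {\mathsf P} \{ | \overline{X}_n - a | > \epsilon \} \leq \frac{\sigma^2}{n \epsilon^2} \rightarrow 0 \quad \textrm{ as } n \rightarrow \infty $$

for every $\epsilon > 0$; the consistency of $S_n^2$ follows from the law of large numbers applied to the variables $X_i^2$, together with the continuity property stated below.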

Example 2) Let $X_1, \dots, X_n$ be independent random variables subject to the same probability law, whose distribution function is $F(x)$. In this case the empirical distribution function $F_n(x)$ constructed from an initial sample $X_1, \dots, X_n$ is a consistent estimator of $F(x)$.
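
In fact, a stronger statement holds: by the Glivenko–Cantelli theorem, $F_n$ converges to $F$ uniformly in $x$ with probability 1,

$$ {\mathsf P} \left\{ \sup_x | F_n (x) - F (x) | \rightarrow 0 \ \textrm{ as } \ n \rightarrow \infty \right\} = 1, $$

so $F_n$ is also a strongly consistent estimator of $F$.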

Example 3) Let $X_1, \dots, X_n$ be independent random variables subject to the same Cauchy law, whose probability density is $p(x) = 1 / \{ \pi [ 1 + ( x - \mu )^2 ] \}$. For any natural number $n$, the statistic

$$ \overline{X}_n = \frac{1}{n} ( X_1 + \dots + X_n ) $$

is subject to the initial Cauchy law, hence the sequence of estimators $\overline{X}_n$ does not converge in probability to $\mu$; that is, in this example the sequence $\overline{X}_n$ is not consistent. A consistent estimator for $\mu$ here is the sample median.
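
The first assertion is easily checked via characteristic functions: the Cauchy law with density $p$ has characteristic function $\varphi (t) = e^{i \mu t - | t |}$, whence

$$ \varphi_{\overline{X}_n} (t) = [ \varphi ( t / n ) ]^n = e^{i \mu t - | t |} = \varphi (t), $$

so $\overline{X}_n$ has the same distribution as a single observation for every $n$ and cannot concentrate around $\mu$. The sample median, in contrast, is consistent because $\mu$ is the median of this Cauchy law; it is asymptotically normal with variance $\pi^2 / ( 4 n )$.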

A consistent estimator has the following property: if $f$ is a continuous function and $T_n$ is a consistent estimator of a parameter $\theta$, then $f(T_n)$ is a consistent estimator for $f(\theta)$. The most common method for obtaining statistical point estimators is the maximum-likelihood method, which, under general regularity conditions, gives consistent estimators. It must be noted that a consistent estimator $T_n$ of a parameter $\theta$ is not unique, since any estimator of the form $T_n + \beta_n$ is also consistent, where $\beta_n$ is a sequence of random variables converging in probability to zero. This fact reduces the value of the concept of a consistent estimator.
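
For example, since $S_n^2$ of Example 1) is a consistent estimator of $\sigma^2$ and $f(t) = \sqrt{t}$ is continuous on $[ 0, \infty )$, the statistic $S_n = \sqrt{S_n^2}$ is a consistent estimator of $\sigma$.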

References

[1] H. Cramér, "Mathematical methods of statistics", Princeton Univ. Press (1946)
[2] I.A. Ibragimov, R.Z. Khas'minskii, "Statistical estimation: asymptotic theory", Springer (1981) (Translated from Russian)
How to Cite This Entry:
Consistent estimator. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Consistent_estimator&oldid=14325
This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.