Latest revision as of 17:46, 4 June 2020
An abbreviated form of the term "consistent sequence of estimators", applied to a sequence of statistical estimators converging to a value being evaluated.
In probability theory there are several different notions of convergence, of which the most important for the theory of statistical estimation are convergence in probability and convergence with probability 1. If a sequence of statistical estimators converges in probability to the value being evaluated, the sequence is called "weakly consistent", or simply "consistent"; the term "strongly consistent" is reserved for a sequence of estimators that converges with probability 1 to the value being evaluated.
Example 1)
Let $X_1, \dots, X_n$ be independent random variables with the same normal distribution $N(a, \sigma^2)$. Then the statistics
$$ \overline{X}_n = \frac{1}{n} (X_1 + \dots + X_n) $$
and
$$ S_n^2 = \frac{1}{n} \sum_{i=1}^{n} (X_i - \overline{X}_n)^2 $$
are consistent estimators for the parameters $a$ and $\sigma^2$, respectively.
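Example 1 can be checked numerically. The sketch below (a demonstration, not part of the original article) draws increasingly large normal samples with the illustrative parameter values $a = 2$, $\sigma = 3$ and prints both statistics; as $n$ grows they should settle near $a$ and $\sigma^2$.

```python
import random

# Illustrative sketch: draw samples from N(a, sigma^2) and watch the sample
# mean and the 1/n sample variance approach a and sigma^2 as n grows.
# The values a = 2.0 and sigma = 3.0 are arbitrary choices for the demo.
random.seed(0)
a, sigma = 2.0, 3.0

def mean_and_var(n):
    """Sample mean X_bar_n and the biased (1/n) sample variance S_n^2, as in the text."""
    xs = [random.gauss(a, sigma) for _ in range(n)]
    xbar = sum(xs) / n
    s2 = sum((x - xbar) ** 2 for x in xs) / n
    return xbar, s2

for n in (100, 10_000, 1_000_000):
    xbar, s2 = mean_and_var(n)
    print(f"n={n:>9,}: mean = {xbar:.3f}, variance = {s2:.3f}")
```

Here consistency shows up as the printed values stabilising near $a = 2$ and $\sigma^2 = 9$.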
Example 2) Let $X_1, \dots, X_n$ be independent random variables subject to the same probability law, the distribution function of which is $F(x)$. In this case, the empirical distribution function $F_n(x)$ constructed from an initial sample $X_1, \dots, X_n$ is a consistent estimator of $F(x)$.
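The convergence $F_n(x) \to F(x)$ in Example 2 can likewise be illustrated. The sketch below assumes an exponential law with $F(x) = 1 - e^{-x}$ (an arbitrary choice for the demonstration) and compares the empirical distribution function to $F$ at a few points.

```python
import math
import random

# Illustrative sketch: X_i ~ Exp(1), so F(x) = 1 - exp(-x).  The empirical
# distribution function F_n(x) = #{i : X_i <= x} / n should be close to F(x).
random.seed(1)

def empirical_cdf(sample, x):
    """Fraction of the sample lying at or below x."""
    return sum(1 for s in sample if s <= x) / len(sample)

sample = [random.expovariate(1.0) for _ in range(200_000)]
for x in (0.5, 1.0, 2.0):
    print(f"F_n({x}) = {empirical_cdf(sample, x):.4f}, F({x}) = {1 - math.exp(-x):.4f}")
```

With $n = 200{,}000$ draws the two columns should agree to roughly two or three decimal places at each point.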
Example 3) Let $X_1, \dots, X_n$ be independent random variables subject to the same Cauchy law, whose probability density is $p(x) = \frac{1}{\pi [1 + (x - \mu)^2]}$. For any natural number $n$, the statistic
$$ \overline{X}_n = \frac{1}{n} (X_1 + \dots + X_n) $$
is subject to the initial Cauchy law, hence the sequence of estimators $\overline{X}_n$ does not converge in probability to $\mu$; that is, in this example the sequence $\overline{X}_n$ is not consistent. A consistent estimator for $\mu$ here is the sample median.
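Example 3 is easy to see in simulation. The sketch below (with the illustrative choice $\mu = 0$) generates Cauchy variates via the inverse-distribution-function method; the sample mean keeps fluctuating wildly even for large $n$, while the sample median concentrates near $\mu$.

```python
import math
import random
import statistics

# Illustrative sketch with mu = 0: the sample mean of Cauchy data is itself
# Cauchy-distributed for every n, so it never concentrates, whereas the
# sample median converges in probability to mu.
random.seed(2)
mu = 0.0

def cauchy(mu):
    # Standard Cauchy variate via the inverse-CDF method, shifted by mu.
    return mu + math.tan(math.pi * (random.random() - 0.5))

for n in (1_000, 100_000):
    xs = [cauchy(mu) for _ in range(n)]
    print(f"n={n:>7,}: mean = {sum(xs) / n:+.3f}, median = {statistics.median(xs):+.4f}")
```

No assertion is made about the mean, since it is Cauchy-distributed at every sample size; only the median is expected to land near $\mu$.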
A consistent estimator has the following property: if $f$ is a continuous function and $T_n$ is a consistent estimator of a parameter $\theta$, then $f(T_n)$ is a consistent estimator for $f(\theta)$. The most common method for obtaining statistical point estimators is the maximum-likelihood method, which gives a consistent estimator. It must be noted that a consistent estimator $T_n$ of a parameter $\theta$ is not unique, since any estimator of the form $T_n + \beta_n$, where $\beta_n$ is a sequence of random variables converging in probability to zero, is also consistent. This fact reduces the value of the concept of a consistent estimator.
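The continuous-mapping property can be illustrated with $f(t) = \sqrt{t}$: since $S_n^2$ is consistent for $\sigma^2$, the statistic $\sqrt{S_n^2}$ is consistent for $\sigma$. A minimal sketch, with parameter values that are illustrative assumptions:

```python
import math
import random

# Sketch: S_n^2 is consistent for sigma^2, so sqrt(S_n^2) is consistent for
# sigma by the continuous-mapping property.  a = 0, sigma = 2 are demo values.
random.seed(3)
a, sigma, n = 0.0, 2.0, 500_000

xs = [random.gauss(a, sigma) for _ in range(n)]
xbar = sum(xs) / n
s2 = sum((x - xbar) ** 2 for x in xs) / n
print(f"sqrt(S_n^2) = {math.sqrt(s2):.4f}")  # expected to lie close to sigma = 2
```

The same mechanism explains the non-uniqueness remark above: adding to $\sqrt{S_n^2}$ any perturbation $\beta_n$ that shrinks to zero in probability yields another consistent estimator of $\sigma$.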
References
[1] | H. Cramér, "Mathematical methods of statistics", Princeton Univ. Press (1946) |
[2] | I.A. Ibragimov, R.Z. Khas'minskii, "Statistical estimation: asymptotic theory", Springer (1981) (Translated from Russian) |
Consistent estimator. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Consistent_estimator&oldid=14325