# Consistent estimator

An abbreviated form of the term "consistent sequence of estimators", applied to a sequence of statistical estimators that converges to the value being estimated.

In probability theory, there are several different notions of the concept of convergence, of which the most important for the theory of statistical estimation are convergence in probability and convergence with probability 1. If a sequence of statistical estimators converges in probability to the value being estimated, then one says that this sequence is "weakly consistent", or simply "consistent", and one reserves the term "strongly consistent" for a sequence of estimators that converges with probability 1 to the value being estimated.

Example 1) Let $X _ {1} \dots X _ {n}$ be independent random variables with the same normal distribution $N ( a, \sigma ^ {2} )$. Then the statistics

$$\overline{X} _ {n} = { \frac{1}{n} } ( X _ {1} + \dots + X _ {n} )$$

and

$$S _ {n} ^ {2} = { \frac{1}{n} } \sum _ {i = 1 } ^ { n } ( X _ {i} - \overline{X} _ {n} ) ^ {2}$$

are consistent estimators for the parameters $a$ and $\sigma ^ {2}$, respectively.
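A small simulation illustrates this convergence. The true parameter values below ($a = 2$, $\sigma = 3$) are hypothetical, chosen only for the sketch; as the sample size $n$ grows, both estimates settle near the true values.

```python
import numpy as np

rng = np.random.default_rng(0)
a, sigma = 2.0, 3.0  # hypothetical true parameters for the illustration

# Draw increasingly large normal samples and watch the estimates stabilize.
for n in (10, 1_000, 100_000):
    x = rng.normal(a, sigma, size=n)
    mean_n = x.mean()                   # \bar{X}_n, estimates a
    var_n = np.mean((x - mean_n) ** 2)  # S_n^2 (the 1/n version), estimates sigma^2
    print(n, round(mean_n, 3), round(var_n, 3))
```

Note that $S _ {n} ^ {2}$ here is the biased ($1/n$) sample variance; the bias vanishes as $n \rightarrow \infty$, so it is consistent even though it is not unbiased.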

Example 2) Let $X _ {1} \dots X _ {n}$ be independent random variables subject to the same probability law, the distribution function of which is $F ( x)$. In this case, the empirical distribution function $F _ {n} ( x)$ constructed from an initial sample $X _ {1} \dots X _ {n}$ is a consistent estimator of $F ( x)$.
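A minimal numerical sketch of Example 2, under the assumption that the common law is standard normal (so $F$ is the standard normal distribution function): for a large sample, $F _ {n} ( x)$ is close to $F ( x)$ at each fixed point.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
n = 50_000
x = np.sort(rng.normal(size=n))  # sample from N(0, 1), so F is the standard normal CDF

def ecdf(t, sorted_sample):
    """Empirical distribution function F_n(t) = (number of X_i <= t) / n."""
    return np.searchsorted(sorted_sample, t, side="right") / len(sorted_sample)

# Compare F_n with the true F at a few points (Phi computed via the error function).
for t in (-1.0, 0.0, 1.0):
    F_t = 0.5 * (1.0 + erf(t / sqrt(2.0)))
    print(t, round(ecdf(t, x), 4), round(F_t, 4))
```

In fact more is true: by the Glivenko–Cantelli theorem, the convergence is uniform in $x$ with probability 1.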

Example 3) Let $X _ {1} \dots X _ {n}$ be independent random variables subject to the same Cauchy law, whose probability density is $p ( x) = { \frac{1}{\pi [ 1 + ( x - \mu ) ^ {2} ] } }$. For any natural number $n$, the statistic

$$\overline{X} _ {n} = { \frac{1}{n} } ( X _ {1} + \dots + X _ {n} )$$

is subject to the initial Cauchy law, hence the sequence of estimators $\overline{X} _ {n}$ does not converge in probability to $\mu$, that is, in this example the sequence $\overline{X} _ {n}$ is not consistent. A consistent estimator for $\mu$ here is the sample median.
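The contrast in Example 3 can be seen numerically. In the sketch below, the location $\mu = 5$ is a hypothetical choice; the sample median lands close to $\mu$ for large $n$, while the sample mean keeps the heavy-tailed Cauchy law and need not be anywhere near it.

```python
import numpy as np

rng = np.random.default_rng(2)
mu = 5.0  # hypothetical location parameter

n = 200_000
x = mu + rng.standard_cauchy(size=n)  # Cauchy sample centered at mu

print("mean:  ", x.mean())     # has the same Cauchy law as a single observation
print("median:", np.median(x)) # consistent estimator of mu
```

The sample median is consistent for $\mu$ because $\mu$ is the median of the Cauchy distribution and its density is positive and continuous there.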

A consistent estimator has the following property: if $f$ is a continuous function and $T _ {n}$ is a consistent estimator of a parameter $\theta$, then $f ( T _ {n} )$ is a consistent estimator for $f ( \theta )$. The most common method for obtaining statistical point estimators is the maximum-likelihood method, which, under broad regularity conditions, yields consistent estimators. It must be noted that a consistent estimator $T _ {n}$ of a parameter $\theta$ is not unique: any estimator of the form $T _ {n} + \beta _ {n}$, where $\beta _ {n}$ is a sequence of random variables converging in probability to zero, is also consistent. This fact reduces the value of the concept of a consistent estimator.
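The continuity property above can be sketched numerically, again under the hypothetical choice $\sigma = 2$: since $S _ {n} ^ {2}$ is consistent for $\sigma ^ {2}$ (Example 1) and the square root is continuous, $\sqrt{ S _ {n} ^ {2} }$ is consistent for $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(3)
sigma = 2.0  # hypothetical true standard deviation

# S_n^2 is consistent for sigma^2, so f(S_n^2) = sqrt(S_n^2) is consistent for sigma.
n = 100_000
x = rng.normal(0.0, sigma, size=n)
s2 = np.mean((x - x.mean()) ** 2)
print(round(float(np.sqrt(s2)), 4))  # close to sigma for large n
```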

How to Cite This Entry:
Consistent estimator. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Consistent_estimator&oldid=46481
This article was adapted from an original article by M.S. Nikulin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article