Rao-Blackwell-Kolmogorov theorem
A proposition from the theory of statistical estimation on which a method for the improvement of unbiased statistical estimators is based.
Let $X$ be a random variable with values in a sample space $(\mathfrak X,\mathcal B,\mathsf P_\theta)$, $\theta\in\Theta$, such that the family of probability distributions $\{\mathsf P_\theta,\ \theta\in\Theta\}$ has a sufficient statistic $T=T(X)$, and let $\phi=\phi(X)$ be a vector statistic with finite matrix of second moments. Then the mean $\mathsf E_\theta\phi$ exists and, moreover, the conditional mean $\phi^*=\mathsf E(\phi\mid T)$ is an unbiased estimator for $\mathsf E_\theta\phi$, that is,

$$\mathsf E_\theta\phi^*=\mathsf E_\theta\{\mathsf E(\phi\mid T)\}=\mathsf E_\theta\phi.$$
The Rao–Blackwell–Kolmogorov theorem states that under these conditions the quadratic risk of $\phi^*$ does not exceed the quadratic risk of $\phi$, uniformly in $\theta\in\Theta$, i.e. for any vector $z$ of the same dimension as $\phi$, the inequality

$$\mathsf E_\theta\bigl[z^{\top}(\phi^*-\mathsf E_\theta\phi)\bigr]^2\le\mathsf E_\theta\bigl[z^{\top}(\phi-\mathsf E_\theta\phi)\bigr]^2$$

holds for any $\theta\in\Theta$. In particular, if $\phi$ is a one-dimensional statistic, then for any $\theta\in\Theta$ the variance of $\phi^*$ does not exceed the variance of $\phi$.
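The variance statement admits a short verification by the decomposition of variance; the following sketch uses only the notation introduced above, with $\mathsf D_\theta$ denoting the variance and $\mathsf D(\,\cdot\mid T)$ the conditional variance. Since $\phi^*=\mathsf E(\phi\mid T)$ and $\mathsf E_\theta\phi^*=\mathsf E_\theta\phi$,

$$\mathsf D_\theta\phi \;=\; \mathsf E_\theta\{\mathsf D(\phi\mid T)\}+\mathsf D_\theta\{\mathsf E(\phi\mid T)\} \;\ge\; \mathsf D_\theta\phi^*,$$

with equality if and only if $\phi$ coincides almost surely with a function of $T$; the vector inequality follows upon applying this to the scalar statistic $z^{\top}\phi$.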
In the most general situation the Rao–Blackwell–Kolmogorov theorem states that averaging over a sufficient statistic does not lead to an increase of the risk with respect to any convex loss function. This implies that good statistical estimators should be looked for only in terms of sufficient statistics, that is, in the class of functions of sufficient statistics.
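The convex-loss version reduces to Jensen's inequality for conditional expectations. A minimal sketch, where $g(\theta)$ denotes the quantity being estimated and $L(g,d)$ is a loss function convex in the decision argument $d$ (both symbols are introduced here only for illustration):

$$\mathsf E_\theta L\bigl(g(\theta),\phi^*\bigr)
=\mathsf E_\theta L\bigl(g(\theta),\mathsf E(\phi\mid T)\bigr)
\le\mathsf E_\theta\bigl\{\mathsf E\bigl(L(g(\theta),\phi)\mid T\bigr)\bigr\}
=\mathsf E_\theta L\bigl(g(\theta),\phi\bigr),$$

the middle step being Jensen's inequality applied conditionally on $T$.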
In case the family of distributions of the sufficient statistic $T$ is complete, that is, when the function of $T$ that is almost-everywhere equal to zero is the only unbiased estimator of zero based on $T$, the unbiased estimator with uniformly minimal risk provided by the Rao–Blackwell–Kolmogorov theorem is unique. Thus, the Rao–Blackwell–Kolmogorov theorem gives a recipe for constructing best unbiased estimators: one has to take some unbiased estimator and then average it over a sufficient statistic. That is how the best unbiased estimator for the distribution function of the normal law is constructed in the following example, which is due to A.N. Kolmogorov.
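The uniqueness assertion can be sketched as follows: if $\psi_1(T)$ and $\psi_2(T)$ are both unbiased estimators of $\mathsf E_\theta\phi$ based on the sufficient statistic, then

$$\mathsf E_\theta\{\psi_1(T)-\psi_2(T)\}=0\quad\text{for all }\theta\in\Theta,$$

so $\psi_1(T)-\psi_2(T)$ is an unbiased estimator of zero based on $T$, and completeness forces $\psi_1(T)=\psi_2(T)$ almost everywhere.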
Example. Given a realization $x=(x_1,\dots,x_n)$ of a random vector $X=(X_1,\dots,X_n)$ whose components $X_i$, $i=1,\dots,n$ ($n\ge3$), are independent random variables subject to the same normal law $N(\xi,\sigma^2)$, it is required to estimate the distribution function

$$F_\theta(u)=\mathsf P_\theta\{X_i<u\}=\Phi\Bigl(\frac{u-\xi}{\sigma}\Bigr)=\frac{1}{\sqrt{2\pi}}\int_{-\infty}^{(u-\xi)/\sigma}e^{-t^2/2}\,dt,\qquad\theta=(\xi,\sigma^2).$$
The parameters $\xi$ and $\sigma^2$ are supposed to be unknown. Since the family

$$\{N(\xi,\sigma^2)\colon\ |\xi|<\infty,\ \sigma^2>0\}$$

of normal laws has a complete sufficient statistic $T=(\bar X,S^2)$, where

$$\bar X=\frac1n\sum_{i=1}^nX_i\qquad\text{and}\qquad S^2=\sum_{i=1}^n(X_i-\bar X)^2,$$
the Rao–Blackwell–Kolmogorov theorem can be used for the construction of the best unbiased estimator for the distribution function $\Phi\bigl((u-\xi)/\sigma\bigr)$. As an initial statistic one may use, e.g., the empirical distribution function constructed from an arbitrary single component of $X$, say $X_1$:

$$F_{X_1}(u)=\begin{cases}1,&X_1<u,\\ 0,&X_1\ge u.\end{cases}$$
This is a trivial unbiased estimator for $\Phi\bigl((u-\xi)/\sigma\bigr)$, since

$$\mathsf E_\theta F_{X_1}(u)=\mathsf P_\theta\{X_1<u\}=\Phi\Bigl(\frac{u-\xi}{\sigma}\Bigr).$$
Averaging of $F_{X_1}(u)$ over the sufficient statistic $T=(\bar X,S^2)$ gives the estimator

$$\hat F(u)=\mathsf E\{F_{X_1}(u)\mid\bar X,S^2\}=\mathsf P\{X_1<u\mid\bar X,S^2\}.\tag{1}$$
Since the statistic

$$Y=\Bigl(\frac{X_1-\bar X}{S},\dots,\frac{X_n-\bar X}{S}\Bigr),$$

which is complementary to $T=(\bar X,S^2)$, has a uniform distribution on the $(n-2)$-dimensional sphere of radius $1$ (the unit sphere of the hyperplane $y_1+\dots+y_n=0$) and, therefore, depends neither on the unknown parameters $\xi$ and $\sigma^2$ nor on $T$, the same is true for its first coordinate $Y_1=(X_1-\bar X)/S$, and

$$\mathsf P\{X_1<u\mid\bar X,S^2\}=\mathsf P\Bigl\{Y_1<\frac{u-\bar X}{S}\Bigr\}=T_{n-2}\Bigl(\frac{u-\bar X}{S}\Bigr),\tag{2}$$
where

$$T_{n-2}(g)=\sqrt{\frac{n}{n-1}}\,\frac{\Gamma\bigl(\frac{n-1}{2}\bigr)}{\sqrt\pi\,\Gamma\bigl(\frac{n-2}{2}\bigr)}\int_{-\sqrt{(n-1)/n}}^{g}\Bigl(1-\frac{n}{n-1}v^2\Bigr)^{(n-4)/2}dv,\qquad|g|\le\sqrt{\frac{n-1}{n}},\tag{3}$$

is the Thompson distribution with $n-2$ degrees of freedom. Thus, (1)–(3) imply that the best unbiased estimator for $\Phi\bigl((u-\xi)/\sigma\bigr)$ obtained from $n$ independent observations $X_1,\dots,X_n$ is

$$\hat F(u)=T_{n-2}\Bigl(\frac{u-\bar X}{S}\Bigr)=S_{n-2}\Bigl(\sqrt{n-2}\,\frac{w}{\sqrt{1-w^2}}\Bigr),\qquad w=\sqrt{\frac{n}{n-1}}\,\frac{u-\bar X}{S},\ |w|<1,$$

where $S_{n-2}$ is the Student distribution with $n-2$ degrees of freedom (for $w\le-1$ the estimator equals $0$, and for $w\ge1$ it equals $1$).
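The transition from the Thompson form to the Student form in the last formula can be checked by a monotone change of variable; the sketch below uses the auxiliary symbols $Z$ and $t$, which are not part of the original exposition, and the normalization $S^2=\sum_{i=1}^n(X_i-\bar X)^2$ adopted above. The variable $Z=\sqrt{n/(n-1)}\,Y_1$ is distributed as a single coordinate of a point uniform on the unit $(n-2)$-dimensional sphere, and for such a coordinate

$$t=\sqrt{n-2}\,\frac{Z}{\sqrt{1-Z^2}}$$

has the Student distribution with $n-2$ degrees of freedom. Since $z\mapsto\sqrt{n-2}\,z/\sqrt{1-z^2}$ is strictly increasing on $(-1,1)$,

$$T_{n-2}(g)=\mathsf P\{Y_1<g\}=\mathsf P\Bigl\{t<\sqrt{n-2}\,\frac{w}{\sqrt{1-w^2}}\Bigr\}=S_{n-2}\Bigl(\sqrt{n-2}\,\frac{w}{\sqrt{1-w^2}}\Bigr),\qquad w=\sqrt{\frac{n}{n-1}}\,g,\ |w|<1,$$

which gives the Student form of the estimator with $g=(u-\bar X)/S$.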
References
[1] A.N. Kolmogorov, "Unbiased estimates", Izv. Akad. Nauk SSSR Ser. Mat. 14 : 4 (1950) pp. 303–326 (in Russian)
[2] C.R. Rao, "Linear statistical inference and its applications", Wiley (1965)
[3] B.L. van der Waerden, "Mathematische Statistik", Springer (1957)
[4] D. Blackwell, "Conditional expectation and unbiased sequential estimation", Ann. Math. Stat. 18 (1947) pp. 105–110
Comments
In the Western literature this theorem is mostly referred to as the Rao–Blackwell theorem.
Rao-Blackwell-Kolmogorov theorem. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Rao-Blackwell-Kolmogorov_theorem&oldid=22963