Inefficient statistic

''inefficient estimator''
 
A [[Statistical estimator|statistical estimator]] whose variance is greater than that of an [[Efficient estimator|efficient estimator]]. In other words, for an inefficient estimator equality in the [[Rao–Cramér inequality|Rao–Cramér inequality]] is not attained for at least one value of the parameter to be estimated. A quantitative measure of inefficiency of an inefficient estimator is the number $e$, the so-called efficiency, which is the ratio of the variance of an efficient estimator to that of the statistic in question. The efficiency $e$ is non-negative and does not exceed 1. The quantity $1/e$ indicates by how much one has to increase the number of observations in using an inefficient estimator as compared with an efficient estimator so as to achieve equivalent results in the application of the two statistics. For example, the median $\mu_{n}$ of an empirical distribution constructed from $n$ independent normally $N(\theta, \sigma^{2})$-distributed random variables $X_{1}, \dots, X_{n}$ is asymptotically normally distributed with parameters $\theta$ and $\sigma^{2} \pi / 2n$ and is an inefficient [[Order statistic|order statistic]] estimating the expectation $\theta$. In this case an efficient estimator is given by $\overline{X} = (X_{1} + \dots + X_{n})/n$, which is distributed according to the normal law $N(\theta, \sigma^{2}/n)$. The efficiency $e$ of the statistic $\mu_{n}$ is

$$
e = \frac{ {\mathsf D} ( \overline{X} ) }{ {\mathsf D} ( \mu_{n} ) } = \frac{2}{\pi} .
$$
  
Consequently, in using the statistic $\mu_{n}$ one has to make on the average $\pi/2 \approx 1.57$ times as many observations as with $\overline{X}$ in order to obtain the same accuracy in the estimation of the unknown mathematical expectation $\theta$.
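This asymptotic comparison can be checked numerically. The following sketch is not part of the original article; the sample size, number of replications and parameter values are arbitrary illustrative choices. It simulates normal samples and compares the empirical variances of the sample mean and the sample median.

<pre>
# Monte Carlo check (illustrative sketch) of the asymptotic efficiency
# e = D(X-bar)/D(mu_n) ~ 2/pi of the sample median for normal samples.
import numpy as np

rng = np.random.default_rng(seed=0)
n, reps = 501, 20_000                # odd n, so the sample median is a single order statistic
theta, sigma = 0.0, 1.0              # parameters of the N(theta, sigma^2) law

samples = rng.normal(theta, sigma, size=(reps, n))
x_bar = samples.mean(axis=1)         # efficient estimator: the sample mean
mu_n = np.median(samples, axis=1)    # inefficient estimator: the sample median

e_hat = x_bar.var() / mu_n.var()     # empirical efficiency, close to 2/pi ~ 0.637
print(f"estimated e = {e_hat:.3f},  2/pi = {2 / np.pi:.3f}")
print(f"observation factor 1/e = {1 / e_hat:.2f},  pi/2 = {np.pi / 2:.2f}")
</pre>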
  
 
====References====
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  H. Cramér,  "Mathematical methods of statistics" , Princeton Univ. Press  (1946)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  L.N. Bol'shev,  N.V. Smirnov,  "Tables of mathematical statistics" , ''Libr. math. tables'' , '''46''' , Nauka  (1983)  (In Russian)  (Processed by L.S. Bark and E.S. Kedrova)</TD></TR></table>
 
====Comments====
 
There are also cases for which the minimal attainable variance of an estimator is larger than the Cramér–Rao bound, [a1].

====References====

<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  C.R. Rao,  "Linear statistical inference and its applications" , Wiley  (1965)  pp. 283</TD></TR></table>