Unbiased estimator
A [[Statistical estimator|statistical estimator]] whose expectation is that of the quantity to be estimated. Suppose that in the realization of a random variable $X$ taking values in a probability space $(\mathfrak{X}, \mathfrak{B}, \mathsf{P}_\theta)$, $\theta \in \Theta$, a function $f : \Theta \to \Omega$ has to be estimated, mapping the parameter set $\Theta$ into a certain set $\Omega$, and that as an estimator of $f(\theta)$ a statistic $T = T(X)$ is chosen. If $T$ is such that

$$\mathsf{E}_\theta\{T\} = \int_{\mathfrak{X}} T(x)\,\mathsf{P}_\theta(\mathrm{d}x) = f(\theta)$$

holds for every $\theta \in \Theta$, then $T$ is called an unbiased estimator of $f(\theta)$. An unbiased estimator is frequently called free of systematic errors.
===Example 1.===
Let $X_1, \dots, X_n$ be random variables having the same expectation $\theta$, that is,

$$\mathsf{E} X_1 = \dots = \mathsf{E} X_n = \theta.$$
In that case the statistic
$$T = c_1 X_1 + \dots + c_n X_n, \qquad c_1 + \dots + c_n = 1,$$
is an unbiased estimator of $\theta$. In particular, the arithmetic mean of the observations, $\bar{X} = (X_1 + \dots + X_n)/n$, is an unbiased estimator of $\theta$. In this example $f(\theta) = \theta$.
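Example 1 can be checked by brute-force enumeration. The following sketch (the three-point distribution and the weight vectors are illustrative choices, not part of the article) computes $\mathsf{E}\{c_1X_1 + \dots + c_nX_n\}$ exactly and confirms that any weights summing to 1 give an unbiased estimator of $\theta$.

```python
from itertools import product

# Illustrative 3-point law; each X_i has this distribution, so E X_i = theta.
support = [(1.0, 0.25), (2.0, 0.5), (3.0, 0.25)]   # (value, probability)
theta = sum(v * p for v, p in support)             # common expectation, here 2.0

def exact_expectation(weights):
    """Exact E[sum_i c_i X_i] for independent X_i drawn from `support`."""
    total = 0.0
    for outcome in product(support, repeat=len(weights)):
        prob = 1.0
        est = 0.0
        for c, (v, p) in zip(weights, outcome):
            prob *= p
            est += c * v
        total += prob * est
    return total

# Any weights summing to 1 yield an unbiased estimator of theta.
for c in [(1/3, 1/3, 1/3), (0.5, 0.3, 0.2), (0.9, 0.05, 0.05)]:
    assert abs(exact_expectation(c) - theta) < 1e-12
```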
===Example 2.===
Let $X_1, \dots, X_n$ be independent random variables having the same probability law with distribution function $F(x)$, that is,
$$\mathsf{P}\{X_i \le x\} = F(x), \qquad i = 1, \dots, n.$$
In this case the empirical distribution function $F_n(x)$ constructed from the observations $X_1, \dots, X_n$ is an unbiased estimator of $F(x)$, that is, $\mathsf{E}\,F_n(x) = F(x)$ for every $x \in \mathbb{R}$.
===Example 3.===
Let $T$ be an unbiased estimator of a parameter $\theta$, that is, $\mathsf{E}\,T = \theta$, and assume that $f(\theta) = a\theta + b$ is a linear function. In that case the statistic $T^* = aT + b$ is an unbiased estimator of $f(\theta)$.
The next example shows that there are cases in which unbiased estimators exist and are even unique, but they may turn out to be useless.
===Example 4.===
Let $X$ be a random variable subject to the geometric distribution with parameter of success $\theta$, that is, for any natural number $k$,
$$\mathsf{P}\{X = k\} = \theta(1-\theta)^{k-1}, \qquad k = 1, 2, \dots$$
If $T = T(X)$ is an unbiased estimator of the parameter $\theta$, it must satisfy the unbiasedness equation $\mathsf{E}\,T = \theta$, that is,
$$\sum_{k=1}^\infty T(k)\,\theta(1-\theta)^{k-1} = \theta.$$
The unique solution of this equation is
$$T(X) = \begin{cases} 1, & X = 1, \\ 0, & X > 1. \end{cases}$$
Evidently, $T$ is good only when $\theta$ is very close to 1 or 0; otherwise $T$ carries no useful information on $\theta$.
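A quick numerical check of Example 4 (the success probabilities tried below are arbitrary): summing the unbiasedness equation term by term for the indicator of $\{X = 1\}$ recovers $\theta$ exactly, even though an estimator taking only the values 0 and 1 is uninformative for intermediate $\theta$.

```python
def expected_T(theta, terms=10_000):
    """E[T(X)] for T(X) = 1 if X = 1 else 0, X geometric with success theta."""
    # Only the k = 1 term of sum_k T(k) * theta * (1-theta)**(k-1) survives,
    # but we sum the (truncated) series anyway to mirror the equation.
    return sum((1 if k == 1 else 0) * theta * (1 - theta) ** (k - 1)
               for k in range(1, terms + 1))

for theta in [0.1, 0.5, 0.9]:
    assert abs(expected_T(theta) - theta) < 1e-12
```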
===Example 5.===
Suppose that a random variable $X$ has the binomial law with parameters $n$ and $\theta$, that is, for any $k = 0, 1, \dots, n$,
$$\mathsf{P}\{X = k\} = \binom{n}{k}\theta^k(1-\theta)^{n-k}, \qquad 0 < \theta < 1.$$
It is known that the best unbiased estimator of the parameter $\theta$ (in the sense of minimum quadratic risk) is the statistic $T = X/n$. Nevertheless, if $\theta$ is irrational, then $\mathsf{P}\{T = \theta\} = 0$. This example reflects a general property of random variables: generally speaking, a random variable need not take values that agree with its expectation. And finally, cases are possible when unbiased estimators do not exist at all. Thus, if under the conditions of Example 5 one takes as the function to be estimated $f(\theta) = 1/\theta$, then (see Example 7) there is no unbiased estimator $T = T(X)$ for $1/\theta$.
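Both assertions of Example 5 are easy to verify exactly from the binomial probabilities (the values of $n$ and $\theta$ below are illustrative):

```python
from math import comb

def binom_pmf(n, theta, k):
    return comb(n, k) * theta**k * (1 - theta) ** (n - k)

def expectation_T(n, theta):
    """Exact E[X/n] under the binomial law with parameters n and theta."""
    return sum((k / n) * binom_pmf(n, theta, k) for k in range(n + 1))

n, theta = 10, 0.3
assert abs(expectation_T(n, theta) - theta) < 1e-12   # T = X/n is unbiased

# T only takes the n + 1 values 0, 1/n, ..., 1, so for an irrational theta
# the event {T = theta} has probability zero.
values = {k / n for k in range(n + 1)}
assert (2 ** 0.5 - 1) not in values   # e.g. theta = sqrt(2) - 1 is never attained
```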
The preceding examples demonstrate that the concept of an unbiased estimator in its very nature does not necessarily help an experimenter to avoid all the complications that arise in the construction of statistical estimators, since an unbiased estimator may turn out to be very good or even totally useless; it may not be unique or may not exist at all. Moreover, an unbiased estimator, like every point estimator, also has the following deficiency: it only gives an approximate value for the true value of the quantity to be estimated; this quantity was not known before the experiment and remains unknown after it has been performed. So, in the problem of constructing statistical point estimators there is no serious justification for demanding in all cases that the resulting estimator be unbiased, except when the study of unbiased estimators leads to a simple and tractable theory.

For example, the [[Rao–Cramér inequality|Rao–Cramér inequality]] has a simple form for unbiased estimators. Namely, if $T = T(X)$ is an unbiased estimator for a function $f(\theta)$, then under fairly broad conditions of regularity on the family $\{\mathsf{P}_\theta\}$ and the function $f(\theta)$, the Rao–Cramér inequality implies that
$$\mathsf{D}\,T \ge \frac{[f'(\theta)]^2}{I(\theta)}, \tag{1}$$
where $I(\theta)$ is the [[Fisher amount of information|Fisher amount of information]] for $\theta$. Thus, there is a lower bound for the variance of an unbiased estimator of $f(\theta)$, namely $[f'(\theta)]^2 / I(\theta)$. In particular, if $f(\theta) = \theta$, then it follows from (1) that
$$\mathsf{D}\,T \ge \frac{1}{I(\theta)}.$$
A statistical estimator for which equality is attained in the Rao–Cramér inequality is called efficient (cf. [[Efficient estimator|Efficient estimator]]). Thus, the statistic $T = X/n$ in Example 5 is an efficient unbiased estimator of the parameter $\theta$ of the binomial law, since
$$\mathsf{D}\,T = \frac{\theta(1-\theta)}{n}$$
and
$$I(\theta) = \frac{n}{\theta(1-\theta)},$$
that is, $T = X/n$ is the best point estimator of $\theta$ in the sense of minimum quadratic risk in the class of all unbiased estimators.
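One can confirm numerically that the bound (1) is attained in Example 5, i.e. that $\mathsf{D}\,T = \theta(1-\theta)/n = 1/I(\theta)$ (the parameter values below are illustrative):

```python
from math import comb

def binom_pmf(n, theta, k):
    return comb(n, k) * theta**k * (1 - theta) ** (n - k)

def var_T(n, theta):
    """Exact variance of T = X/n under the binomial law."""
    mean = sum((k / n) * binom_pmf(n, theta, k) for k in range(n + 1))
    return sum((k / n - mean) ** 2 * binom_pmf(n, theta, k)
               for k in range(n + 1))

n, theta = 12, 0.4
fisher = n / (theta * (1 - theta))   # I(theta) for the binomial law
assert abs(var_T(n, theta) - theta * (1 - theta) / n) < 1e-12
assert abs(var_T(n, theta) - 1 / fisher) < 1e-12   # the Rao-Cramér bound is attained
```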
Naturally, an experimenter is interested in the case when the class of unbiased estimators is rich enough to allow the choice of the best unbiased estimator in some sense. In this context an important role is played by the [[Rao–Blackwell–Kolmogorov theorem|Rao–Blackwell–Kolmogorov theorem]], which allows one to construct an unbiased estimator of minimal variance. This theorem asserts that if the family $\{\mathsf{P}_\theta\}$ has a [[Sufficient statistic|sufficient statistic]] $\psi = \psi(X)$ and $T = T(X)$ is an arbitrary unbiased estimator of a function $f(\theta)$, then the statistic $T^* = \mathsf{E}\{T \mid \psi\}$, obtained by averaging $T$ over the fixed sufficient statistic $\psi$, has a risk not exceeding that of $T$ relative to any convex loss function, for all $\theta \in \Theta$. If the family $\{\mathsf{P}_\theta\}$ is complete, the statistic $T^*$ is uniquely determined. That is, the Rao–Blackwell–Kolmogorov theorem implies that unbiased estimators must be looked for in terms of sufficient statistics, if they exist. The practical value of the theorem lies in the fact that it gives a recipe for constructing best unbiased estimators: one constructs an arbitrary unbiased estimator and then averages it over a sufficient statistic.
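The averaging recipe can be illustrated on a Bernoulli sample (a concrete example chosen here, not taken from the article): the crude unbiased estimator $T = X_1$ is averaged over the sufficient statistic $S = X_1 + \dots + X_n$, which by symmetry gives $T^* = \mathsf{E}\{T \mid S\} = S/n$; $T^*$ stays unbiased and its variance drops.

```python
from itertools import product

# Bernoulli sample of size n with success probability theta (illustrative values).
n, theta = 4, 0.3
outcomes = list(product([0, 1], repeat=n))

def prob(x):
    return theta ** sum(x) * (1 - theta) ** (n - sum(x))

def T(x):
    return x[0]            # crude unbiased estimator: first observation alone

def T_star(x):
    return sum(x) / n      # E[T | S = s] = s / n by symmetry

def mean_and_var(stat):
    m = sum(prob(x) * stat(x) for x in outcomes)
    v = sum(prob(x) * (stat(x) - m) ** 2 for x in outcomes)
    return m, v

m1, v1 = mean_and_var(T)
m2, v2 = mean_and_var(T_star)
assert abs(m1 - theta) < 1e-12 and abs(m2 - theta) < 1e-12  # both unbiased
assert v2 < v1   # averaging over the sufficient statistic reduced the variance
```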
===Example 6.===
Suppose that a random variable $X$ has the Pascal distribution (a negative binomial distribution) with parameters $n$ and $\theta$ ($n \ge 2$, $0 < \theta < 1$); that is,
$$\mathsf{P}\{X = k\} = \binom{k-1}{n-1}\theta^n(1-\theta)^{k-n}, \qquad k = n, n+1, \dots$$
In this case the statistic $T = (n-1)/(X-1)$ is an unbiased estimator of $\theta$. Since $T$ is expressed in terms of the sufficient statistic $X$ and the system of functions $\theta^n(1-\theta)^{k-n}$, $k = n, n+1, \dots$, is complete on $[0, 1]$, $T$ is the only unbiased estimator and, consequently, the best estimator of $\theta$.
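Unbiasedness of $T = (n-1)/(X-1)$ can be checked by summing the series numerically (the truncation length and the parameter values below are arbitrary choices; the neglected tail is geometrically small):

```python
from math import comb

def pascal_pmf(n, theta, k):
    """P{X = k} = C(k-1, n-1) theta^n (1-theta)^(k-n), k = n, n+1, ..."""
    return comb(k - 1, n - 1) * theta**n * (1 - theta) ** (k - n)

def expectation_T(n, theta, terms=5000):
    """Truncated series for E[(n-1)/(X-1)]; the tail is negligible here."""
    return sum((n - 1) / (k - 1) * pascal_pmf(n, theta, k)
               for k in range(n, n + terms))

for theta in [0.3, 0.5, 0.8]:
    assert abs(expectation_T(3, theta) - theta) < 1e-9
```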
===Example 7.===
Let $X$ be a random variable having the binomial law with parameters $n$ and $\theta$. The generating function $Q(z)$ of this law can be expressed by the formula
$$Q(z) = \mathsf{E}\{z^X\} = (q + \theta z)^n, \qquad q = 1 - \theta,$$
which implies that for any integer $k = 1, \dots, n$, the $k$-th derivative satisfies
$$Q^{(k)}(z) = n(n-1)\cdots(n-k+1)\,\theta^k (q + \theta z)^{n-k},$$

and hence

$$Q^{(k)}(1) = n(n-1)\cdots(n-k+1)\,\theta^k = n^{[k]}\theta^k,$$

where $n^{[k]} = n(n-1)\cdots(n-k+1)$ denotes the falling factorial.
On the other hand,
$$Q^{(k)}(1) = \mathsf{E}\{X(X-1)\cdots(X-k+1)\} = \mathsf{E}\,X^{[k]}.$$
Hence,
$$\mathsf{E}\left\{\frac{X^{[k]}}{n^{[k]}}\right\} = \theta^k,$$
that is, the statistic
$$T = T(X) = \frac{X^{[k]}}{n^{[k]}} = \frac{X(X-1)\cdots(X-k+1)}{n(n-1)\cdots(n-k+1)} \tag{2}$$
is an unbiased estimator of $\theta^k$, and since $T$ is expressed in terms of the sufficient statistic $X$ and the system of functions $1, \theta, \dots, \theta^n$ is complete on $[0, 1]$, it follows that $T$ is the only, hence the best, unbiased estimator of $\theta^k$.
In connection with this example the following question arises: Which functions $f(\theta)$ of the parameter $\theta$ admit an unbiased estimator? A.N. Kolmogorov [[#References|[1]]] has shown that this only happens for polynomials of degree $m \le n$. Thus, if
$$f(\theta) = a_0 + a_1\theta + \dots + a_m\theta^m, \qquad m \le n,$$
then it follows from (2) that the statistic
$$T = T(X) = a_0 + \sum_{k=1}^m a_k \frac{X^{[k]}}{n^{[k]}}$$
is the only unbiased estimator of $f(\theta)$. This result implies, in particular, that there is no unbiased estimator of $f(\theta) = 1/\theta$.
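For a concrete polynomial, the statistic built from (2) can be checked directly (the polynomial $f(\theta) = 2 - 3\theta + 5\theta^2$ and the parameter values below are illustrative assumptions):

```python
from math import comb

def binom_pmf(n, theta, x):
    return comb(n, x) * theta**x * (1 - theta) ** (n - x)

def falling(x, k):
    """Falling factorial x(x-1)...(x-k+1); falling(x, 0) = 1."""
    out = 1
    for i in range(k):
        out *= x - i
    return out

# Illustrative polynomial f(theta) = 2 - 3*theta + 5*theta^2 (degree m = 2 <= n).
coeffs = [2.0, -3.0, 5.0]
n, theta = 6, 0.45

def T(x):
    """The statistic a_0 + sum_k a_k X^[k] / n^[k] built from formula (2)."""
    return sum(a * falling(x, k) / falling(n, k) for k, a in enumerate(coeffs))

expected = sum(T(x) * binom_pmf(n, theta, x) for x in range(n + 1))
f_theta = sum(a * theta**k for k, a in enumerate(coeffs))
assert abs(expected - f_theta) < 1e-12   # T is unbiased for f(theta)
```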
===Example 8.=== | ===Example 8.=== | ||
− | Let | + | Let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070129.png" /> be a random variable subject to the Poisson law with parameter <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070130.png" />; that is, for any integer <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070131.png" /> |
− | be a random variable subject to the Poisson law with parameter | ||
− | that is, for any integer | ||
− | |||
− | |||
− | |||
− | |||
− | |||
− | + | <table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070132.png" /></td> </tr></table> | |
− | |||
− | |||
− | Since | + | Since <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070133.png" />, the observation of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070134.png" /> by itself is an unbiased estimator of its mathematical expectation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070135.png" />. In turn, an unbiased estimator of, say, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070136.png" /> is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070137.png" />. More generally, the statistic |
− | the observation of | ||
− | by itself is an unbiased estimator of its mathematical expectation | ||
− | In turn, an unbiased estimator of, say, | ||
− | is | ||
− | More generally, the statistic | ||
− | + | <table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070138.png" /></td> </tr></table> | |
− | |||
− | |||
− | is an unbiased estimator of | + | is an unbiased estimator of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070139.png" />. This fact implies, in particular, that the statistic |
− | This fact implies, in particular, that the statistic | ||
− | + | <table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070140.png" /></td> </tr></table> | |
− | |||
− | |||
− | |||
− | |||
− | is an unbiased estimator of the function | + | is an unbiased estimator of the function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070141.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070142.png" />. Quite generally, if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070143.png" /> admits an unbiased estimator, then the unbiasedness equation <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070144.png" /> must hold for it, which is equivalent to |
− | |||
− | Quite generally, if | ||
− | admits an unbiased estimator, then the unbiasedness equation | ||
− | must hold for it, which is equivalent to | ||
− | + | <table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070145.png" /></td> </tr></table> | |
− | |||
− | |||
− | |||
− | |||
− | |||
− | |||
− | From this one deduces that an unbiased estimator exists for any function | + | From this one deduces that an unbiased estimator exists for any function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070146.png" /> that admits a power series expansion in its domain of definition <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070147.png" />. |
− | that admits a power series expansion in its domain of definition | ||
===Example 9.=== | ===Example 9.=== | ||
− | Suppose that the independent random variables | + | Suppose that the independent random variables <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070148.png" /> have the same Poisson law with parameter <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070149.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070150.png" />. The generating function of this law, which can be expressed by the formula |
− | have the same Poisson law with parameter | ||
− | |||
− | The generating function of this law, which can be expressed by the formula | ||
− | + | <table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070151.png" /></td> </tr></table> | |
− | |||
− | |||
− | is an entire analytic function and hence has a unique unbiased estimator. In this case a sufficient statistic is | + | is an entire analytic function and hence has a unique unbiased estimator. In this case a sufficient statistic is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070152.png" />, which has the Poisson law with parameter <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070153.png" />. If <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070154.png" /> is an unbiased estimator of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070155.png" />, then it must satisfy the unbiasedness equation |
− | which has the Poisson law with parameter | ||
− | If | ||
− | is an unbiased estimator of | ||
− | then it must satisfy the unbiasedness equation | ||
− | + | <table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070156.png" /></td> </tr></table> | |
− | |||
− | |||
− | |||
which implies that | which implies that | ||
− | + | <table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070157.png" /></td> </tr></table> | |
− | |||
− | |||
− | that is, an unbiased estimator of the generating function of the Poisson law is the generating function of the binomial law with parameters | + | that is, an unbiased estimator of the generating function of the Poisson law is the generating function of the binomial law with parameters <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070158.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070159.png" />. |
− | and | ||
− | Examples 6–9 demonstrate that in certain cases, which occur quite frequently in practice, the problem of constructing best estimators is easily solvable, provided that one restricts attention to the class of unbiased estimators. Kolmogorov [[#References|[1]]] has considered the problem of constructing unbiased estimators, in particular, for the distribution function of a normal law with unknown parameters. A more general definition of an unbiased estimator is due to E. Lehmann [[#References|[2]]], according to whom a statistical estimator | + | Examples 6–9 demonstrate that in certain cases, which occur quite frequently in practice, the problem of constructing best estimators is easily solvable, provided that one restricts attention to the class of unbiased estimators. Kolmogorov [[#References|[1]]] has considered the problem of constructing unbiased estimators, in particular, for the distribution function of a normal law with unknown parameters. A more general definition of an unbiased estimator is due to E. Lehmann [[#References|[2]]], according to whom a statistical estimator <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070160.png" /> of a parameter <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070161.png" /> is called unbiased relative to a loss function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070162.png" /> if |
− | of a parameter | ||
− | is called unbiased relative to a loss function | ||
− | if | ||
− | + | <table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/u/u095/u095070/u095070163.png" /></td> </tr></table> | |
− | |||
− | |||
− | |||
− | |||
− | |||
There is also a modification of this definition (see [[#References|[3]]]). Yu.V. Linnik and his students (see [[#References|[4]]]) have established that under fairly wide assumptions the best unbiased estimator is independent of the loss function. | There is also a modification of this definition (see [[#References|[3]]]). Yu.V. Linnik and his students (see [[#References|[4]]]) have established that under fairly wide assumptions the best unbiased estimator is independent of the loss function. |
Revision as of 14:54, 7 June 2020
A statistical estimator whose expectation is that of the quantity to be estimated. Suppose that in the realization of a random variable $X$ taking values in a probability space $(\mathfrak X, \mathfrak B, \mathsf P_\theta)$, $\theta \in \Theta$, a function $f(\theta)$ has to be estimated, mapping the parameter set $\Theta$ into a certain set $\Omega$, and that as an estimator of $f(\theta)$ a statistic $T = T(X)$ is chosen. If $T$ is such that

$$\mathsf E_\theta\{T\} = \int_{\mathfrak X} T(x)\,\mathrm d\mathsf P_\theta(x) = f(\theta)$$

holds for every $\theta \in \Theta$, then $T$ is called an unbiased estimator of $f(\theta)$. An unbiased estimator is frequently called free of systematic errors.
Example 1.
Let $X_1, \dots, X_n$ be random variables having the same expectation $\theta$, that is,

$$\mathsf E X_1 = \dots = \mathsf E X_n = \theta.$$

In that case the statistic

$$T = c_1 X_1 + \dots + c_n X_n, \qquad c_1 + \dots + c_n = 1,$$

is an unbiased estimator of $\theta$. In particular, the arithmetic mean of the observations, $\bar X = (X_1 + \dots + X_n)/n$, is an unbiased estimator of $\theta$. In this example $f(\theta) = \theta$.
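The linearity argument behind Example 1 can be checked exactly by enumeration. In the following sketch the two discrete laws, the weights and the value $\theta = 1/2$ are arbitrary illustrative choices, not taken from the article:

```python
from fractions import Fraction as F
from itertools import product

theta = F(1, 2)

# Two independent discrete random variables, both with expectation theta
# (supports and probabilities are arbitrary illustrative choices).
X1 = {F(0): F(1, 2), F(1): F(1, 2)}
X2 = {F(1, 4): F(1, 2), F(3, 4): F(1, 2)}

def expectation(statistic):
    """Exact E[statistic(x1, x2)] over the joint law of (X1, X2)."""
    return sum(p1 * p2 * statistic(x1, x2)
               for (x1, p1), (x2, p2) in product(X1.items(), X2.items()))

# Any weights with c1 + c2 = 1 give an unbiased estimator of theta.
c1, c2 = F(1, 3), F(2, 3)
T = expectation(lambda x1, x2: c1 * x1 + c2 * x2)
mean = expectation(lambda x1, x2: (x1 + x2) / 2)  # the arithmetic mean

assert T == theta and mean == theta
```

Because the arithmetic is done with exact rationals, the equalities hold exactly, not merely to rounding error.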
Example 2.
Let $X_1, \dots, X_n$ be independent random variables having the same probability law with distribution function $F(x)$, that is,

$$\mathsf P\{X_i < x\} = F(x), \qquad i = 1, \dots, n.$$

In this case the empirical distribution function $F_n(x)$ constructed from the observations $X_1, \dots, X_n$ is an unbiased estimator of $F(x)$, that is, $\mathsf E F_n(x) = F(x)$, $-\infty < x < +\infty$.
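The unbiasedness of the empirical distribution function can also be verified by exact enumeration: averaging $F_n(x)$ over all possible samples from a small discrete law recovers $F(x)$. The three-point law, the sample size $n = 3$ and the evaluation point below are arbitrary illustrative choices:

```python
from fractions import Fraction as F
from itertools import product

# A three-point law (arbitrary illustrative choice) and sample size n = 3.
law = {1: F(1, 6), 2: F(1, 2), 3: F(1, 3)}
n = 3
x = F(5, 2)                                         # evaluation point

F_true = sum(p for v, p in law.items() if v < x)    # F(x) = P{X < x}

# E F_n(x): average the empirical d.f. over all n-tuples of observations.
E_Fn = sum(
    p1 * p2 * p3 * F(sum(1 for v in (v1, v2, v3) if v < x), n)
    for (v1, p1), (v2, p2), (v3, p3) in product(law.items(), repeat=3)
)

assert E_Fn == F_true                               # E F_n(x) = F(x), exactly
```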
Example 3.
Let $T = T(X)$ be an unbiased estimator of a parameter $\theta$, that is, $\mathsf E T = \theta$, and assume that $f(\theta) = a\theta + b$ is a linear function. In that case the statistic $T' = aT + b$ is an unbiased estimator of $f(\theta)$.
The next example shows that there are cases in which unbiased estimators exist and are even unique, but they may turn out to be useless.
Example 4.
Let $X$ be a random variable subject to the geometric distribution with parameter of success $\theta$, that is, for any natural number $k$,

$$\mathsf P\{X = k\} = \theta(1-\theta)^{k-1}, \qquad k = 1, 2, \dots$$

If $T = T(X)$ is an unbiased estimator of the parameter $\theta$, it must satisfy the unbiasedness equation $\mathsf E T = \theta$, that is,

$$\mathsf E T = \sum_{k=1}^{\infty} T(k)\,\theta(1-\theta)^{k-1} = \theta.$$

The unique solution of this equation is

$$T(X) = \begin{cases} 1, & X = 1, \\ 0, & X \geq 2. \end{cases}$$

Evidently, such a $T$ is good only when $\theta$ is very close to 1 or 0; otherwise $T$ carries no useful information on $\theta$.
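As a sanity check of the solution $T(1) = 1$, $T(k) = 0$ for $k \geq 2$, one can evaluate the truncated unbiasedness series numerically; the values of $\theta$ below are arbitrary:

```python
# Unique unbiased estimator of theta for the geometric law:
# T(1) = 1 and T(k) = 0 for k >= 2.
def T(k):
    return 1 if k == 1 else 0

def expected_T(theta, K=2000):
    """Truncated series  sum_k T(k) * theta * (1 - theta)^(k - 1)."""
    return sum(T(k) * theta * (1 - theta) ** (k - 1) for k in range(1, K + 1))

for theta in (0.05, 0.3, 0.5, 0.9):
    assert abs(expected_T(theta) - theta) < 1e-12
```

Only the $k = 1$ term contributes, so the series collapses to $\theta$, as the unbiasedness equation requires.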
Example 5.
Suppose that a random variable $X$ has the binomial law with parameters $n$ and $\theta$, that is, for any $k = 0, 1, \dots, n$,

$$\mathsf P\{X = k\} = \binom{n}{k}\theta^k(1-\theta)^{n-k}, \qquad 0 < \theta < 1.$$

It is known that the best unbiased estimator of the parameter $\theta$ (in the sense of minimum quadratic risk) is the statistic $T = X/n$. Nevertheless, if $\theta$ is irrational, $\mathsf P\{T = \theta\} = 0$. This example reflects a general property of random variables: generally speaking, a random variable need not take values that agree with its expectation. And finally, cases are possible when unbiased estimators do not exist at all. Thus, if under the conditions of Example 5 one takes as the function to be estimated $f(\theta) = 1/\theta$, then (see Example 7) there is no unbiased estimator for $1/\theta$.
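Both claims — the exact unbiasedness of $X/n$ and the fact that $X/n$ can never equal an unattainable $\theta$ — can be verified with exact rational arithmetic. The choices $n = 7$ and $\theta = 2/5$ are illustrative; a rational $\theta$ whose value is not a multiple of $1/n$ plays the role of the irrational case:

```python
from fractions import Fraction as F
from math import comb

n, theta = 7, F(2, 5)          # theta is not a multiple of 1/n

def pmf(k):
    """Binomial(n, theta) probability of k successes."""
    return comb(n, k) * theta**k * (1 - theta)**(n - k)

# T = X/n is exactly unbiased ...
E_T = sum(F(k, n) * pmf(k) for k in range(n + 1))
assert E_T == theta

# ... yet T never takes the value theta itself.
assert all(F(k, n) != theta for k in range(n + 1))
```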
The preceding examples demonstrate that the concept of an unbiased estimator in its very nature does not necessarily help an experimenter to avoid all the complications that arise in the construction of statistical estimators, since an unbiased estimator may turn out to be very good or totally useless; it may not be unique or may not exist at all. Moreover, an unbiased estimator, like every point estimator, also has the following deficiency. It only gives an approximate value for the true value of the quantity to be estimated; this quantity was not known before the experiment and remains unknown after it has been performed. So, in the problem of constructing statistical point estimators there is no serious justification for insisting in every case on an unbiased estimator, except that the study of unbiased estimators leads to a simple and well-developed theory. For example, the Rao–Cramér inequality has a simple form for unbiased estimators. Namely, if $T = T(X)$ is an unbiased estimator for a function $f(\theta)$, then under fairly broad conditions of regularity on the family $\{\mathsf P_\theta\}$ and the function $f(\theta)$, the Rao–Cramér inequality implies that

$$\mathsf D_\theta T \geq \frac{[f'(\theta)]^2}{I(\theta)}, \tag{1}$$

where $I(\theta)$ is the Fisher amount of information for $\theta$. Thus, there is a lower bound for the variance of an unbiased estimator of $f(\theta)$, namely, $[f'(\theta)]^2/I(\theta)$. In particular, if $f(\theta) = \theta$, then it follows from (1) that

$$\mathsf D_\theta T \geq \frac{1}{I(\theta)}.$$
A statistical estimator for which equality is attained in the Rao–Cramér inequality is called efficient (cf. Efficient estimator). Thus, the statistic $T = X/n$ in Example 5 is an efficient unbiased estimator of the parameter $\theta$ of the binomial law, since

$$\mathsf D_\theta T = \frac{\theta(1-\theta)}{n}$$

and

$$I(\theta) = \frac{n}{\theta(1-\theta)},$$

that is, $T = X/n$ is the best point estimator of $\theta$ in the sense of minimum quadratic risk in the class of all unbiased estimators.
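The efficiency claim can be confirmed exactly: the variance of $T = X/n$, computed directly from the binomial law, equals the Rao–Cramér bound $1/I(\theta)$. The parameter values below are illustrative:

```python
from fractions import Fraction as F
from math import comb

n, theta = 6, F(1, 3)
pmf = [comb(n, k) * theta**k * (1 - theta)**(n - k) for k in range(n + 1)]

# Variance of T = X/n, computed exactly from the binomial law.
E_T  = sum(F(k, n) * pmf[k] for k in range(n + 1))
E_T2 = sum(F(k, n)**2 * pmf[k] for k in range(n + 1))
var_T = E_T2 - E_T**2

fisher = n / (theta * (1 - theta))       # I(theta) = n / (theta (1 - theta))
assert var_T == theta * (1 - theta) / n  # D T = theta (1 - theta) / n
assert var_T == 1 / fisher               # equality in the Rao-Cramer bound
```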
Naturally, an experimenter is interested in the case when the class of unbiased estimators is rich enough to allow the choice of the best unbiased estimator in some sense. In this context an important role is played by the Rao–Blackwell–Kolmogorov theorem, which allows one to construct an unbiased estimator of minimal variance. This theorem asserts that if the family $\{\mathsf P_\theta\}$ has a sufficient statistic $\psi = \psi(X)$ and $T = T(X)$ is an arbitrary unbiased estimator of a function $f(\theta)$, then the statistic $T^* = \mathsf E\{T \mid \psi\}$, obtained by averaging $T$ over the fixed sufficient statistic $\psi$, has a risk not exceeding that of $T$ relative to any convex loss function for all $\theta \in \Theta$. If the family $\{\mathsf P_\theta\}$ is complete, the statistic $T^*$ is uniquely determined. That is, the Rao–Blackwell–Kolmogorov theorem implies that unbiased estimators must be looked for in terms of sufficient statistics, if they exist. The practical value of the theorem lies in the fact that it gives a recipe for constructing best unbiased estimators, namely: one has to construct an arbitrary unbiased estimator and then average it over a sufficient statistic.
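A concrete instance of this recipe (an illustration, not taken from the article): for a sample of $n$ independent Poisson variables with parameter $\theta$, the crude unbiased estimator $1\{X_1 = 0\}$ of $f(\theta) = e^{-\theta}$, averaged over the sufficient statistic $S = X_1 + \dots + X_n$, becomes $T^* = (1 - 1/n)^S$, since given $S = s$ the variable $X_1$ is binomial with parameters $s$ and $1/n$. The sketch below checks numerically, via the law of $S$, that the averaged estimator remains unbiased:

```python
from math import exp

def poisson_probs(lam, K):
    """Poisson(lam) probabilities p_0, ..., p_K, computed recursively."""
    p, out = exp(-lam), []
    for k in range(K + 1):
        out.append(p)
        p *= lam / (k + 1)
    return out

n, theta = 5, 0.7

# Crude unbiased estimator of f(theta) = exp(-theta):  T = 1{X_1 = 0}.
# Averaging over the sufficient statistic S = X_1 + ... + X_n
# (given S = s, X_1 is binomial(s, 1/n)) gives  T* = (1 - 1/n)^S.
probs = poisson_probs(n * theta, 80)       # law of S: Poisson(n * theta)
E_Tstar = sum((1 - 1 / n)**s * p for s, p in enumerate(probs))

assert abs(E_Tstar - exp(-theta)) < 1e-12  # T* is still unbiased
```

The truncation at 80 terms is far beyond the effective support of a Poisson law with mean $3.5$, so the residual error is negligible.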
Example 6.
Suppose that a random variable $X$ has the Pascal distribution (a negative binomial distribution) with parameters $r$ and $\theta$ ($r \geq 2$, $0 < \theta < 1$); that is,

$$\mathsf P\{X = k\} = \binom{k-1}{r-1}\theta^r(1-\theta)^{k-r}, \qquad k = r, r+1, \dots$$

In this case the statistic $T = (r-1)/(X-1)$ is an unbiased estimator of $\theta$. Since $T$ is expressed in terms of the sufficient statistic $X$ and the system of functions $(1-\theta)^k$, $k = 0, 1, \dots$, is complete on $(0, 1)$, $T$ is the only unbiased estimator and, consequently, the best estimator of $\theta$.
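The unbiasedness of $T = (r-1)/(X-1)$ can be checked numerically by truncating the rapidly converging series for $\mathsf E T$; the values $r = 3$, $\theta = 0.4$ are illustrative:

```python
from math import comb

r, theta = 3, 0.4

def pascal_pmf(k):
    """P{X = k}: the k-th trial yields the r-th success."""
    return comb(k - 1, r - 1) * theta**r * (1 - theta)**(k - r)

# Truncate the series for E[(r - 1)/(X - 1)]; the tail decays geometrically.
E_T = sum((r - 1) / (k - 1) * pascal_pmf(k) for k in range(r, 800))
assert abs(E_T - theta) < 1e-12
```

The identity behind the check is $\frac{r-1}{k-1}\binom{k-1}{r-1} = \binom{k-2}{r-2}$, which turns the series into $\theta$ times the total mass of a Pascal law with parameter $r-1$.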
Example 7.
Let $X$ be a random variable having the binomial law with parameters $n$ and $\theta$. The generating function $Q(z)$ of this law can be expressed by the formula

$$Q(z) = \mathsf E z^X = (\theta z + 1 - \theta)^n,$$

which implies that for any integer $k = 1, \dots, n$, the $k$-th derivative at $z = 1$ is

$$Q^{(k)}(1) = n(n-1)\cdots(n-k+1)\,\theta^k = n^{[k]}\theta^k.$$

On the other hand,

$$Q^{(k)}(1) = \mathsf E\{X(X-1)\cdots(X-k+1)\} = \mathsf E X^{[k]}.$$

Hence,

$$\mathsf E\left\{\frac{X^{[k]}}{n^{[k]}}\right\} = \theta^k,$$

that is, the statistic

$$T_k = \frac{X^{[k]}}{n^{[k]}} \tag{2}$$

is an unbiased estimator of $\theta^k$, and since $T_k$ is expressed in terms of the sufficient statistic $X$ and the system of functions $\theta^k(1-\theta)^{n-k}$, $k = 0, 1, \dots, n$, is complete on $(0, 1)$, it follows that $T_k$ is the only, hence the best, unbiased estimator of $\theta^k$.
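The identity $\mathsf E\{X^{[k]}/n^{[k]}\} = \theta^k$ can be verified with exact rational arithmetic for every admissible $k$; the values $n = 8$, $\theta = 3/7$ are illustrative:

```python
from fractions import Fraction as F
from math import comb

def falling(x, k):
    """Falling factorial x^{[k]} = x (x - 1) ... (x - k + 1)."""
    out = 1
    for j in range(k):
        out *= x - j
    return out

n, theta = 8, F(3, 7)
pmf = [comb(n, x) * theta**x * (1 - theta)**(n - x) for x in range(n + 1)]

for k in range(1, n + 1):
    E_Tk = sum(F(falling(x, k), falling(n, k)) * pmf[x] for x in range(n + 1))
    assert E_Tk == theta**k      # E{X^{[k]} / n^{[k]}} = theta^k, exactly
```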
In connection with this example the following question arises: What functions $f(\theta)$ of the parameter $\theta$ admit an unbiased estimator? A.N. Kolmogorov [1] has shown that this only happens for polynomials of degree $m \leq n$. Thus, if

$$f(\theta) = a_0 + a_1\theta + \dots + a_m\theta^m, \qquad m \leq n,$$

then it follows from (2) that the statistic

$$T = T(X) = a_0 + \sum_{k=1}^{m} a_k\frac{X^{[k]}}{n^{[k]}}$$

is the only unbiased estimator of $f(\theta)$. This result implies, in particular, that there is no unbiased estimator of $f(\theta) = 1/\theta$.
Example 8.
Let $X$ be a random variable subject to the Poisson law with parameter $\theta > 0$; that is, for any integer $k = 0, 1, \dots$

$$\mathsf P\{X = k\} = \frac{\theta^k}{k!}e^{-\theta}, \qquad \theta > 0.$$
Since $\mathsf E X = \theta$, the observation of $X$ by itself is an unbiased estimator of its mathematical expectation $\theta$. In turn, an unbiased estimator of, say, $\theta^2$ is $X(X-1)$. More generally, the statistic

$$T_k(X) = X^{[k]} = X(X-1)\cdots(X-k+1)$$

is an unbiased estimator of $\theta^k$, $k = 1, 2, \dots$. This fact implies, in particular, that the statistic

$$T(X) = \sum_{k=0}^{X}\frac{\lambda^k}{k!}X^{[k]} = (1+\lambda)^X$$

is an unbiased estimator of the function $f(\theta) = e^{\lambda\theta}$, $\lambda > -1$. Quite generally, if a function $f(\theta)$ admits an unbiased estimator $T = T(X)$, then the unbiasedness equation $\mathsf E T(X) = f(\theta)$ must hold for it, which is equivalent to

$$\sum_{k=0}^{\infty} T(k)\frac{\theta^k}{k!} = e^{\theta}f(\theta).$$

From this one deduces that an unbiased estimator exists for any function $f(\theta)$ that admits a power series expansion in its domain of definition $\Theta \subset (0, +\infty)$.
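Both facts — $\mathsf E X^{[k]} = \theta^k$ for the Poisson law and the unbiasedness of $(1+\lambda)^X$ for $e^{\lambda\theta}$ — can be checked by truncating the Poisson series; the parameter values are illustrative:

```python
from math import exp

theta, lam = 1.3, 0.5

def poisson_probs(t, K):
    """Poisson(t) probabilities p_0, ..., p_K, computed recursively."""
    p, out = exp(-t), []
    for k in range(K + 1):
        out.append(p)
        p *= t / (k + 1)
    return out

probs = poisson_probs(theta, 120)

def falling(x, k):
    """Falling factorial x^{[k]} = x (x - 1) ... (x - k + 1)."""
    out = 1
    for j in range(k):
        out *= x - j
    return out

# E X^{[k]} = theta^k for the Poisson law ...
for k in range(1, 6):
    E_k = sum(falling(x, k) * probs[x] for x in range(121))
    assert abs(E_k - theta**k) < 1e-9

# ... hence (1 + lam)^X is unbiased for exp(lam * theta).
E_gen = sum((1 + lam)**x * probs[x] for x in range(121))
assert abs(E_gen - exp(lam * theta)) < 1e-9
```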
Example 9.
Suppose that the independent random variables $X_1, \dots, X_n$ have the same Poisson law with parameter $\theta$, $\theta > 0$. The generating function of this law, which can be expressed by the formula

$$Q(z) = e^{\theta(z-1)},$$

is an entire analytic function and hence has a unique unbiased estimator. In this case a sufficient statistic is $X = X_1 + \dots + X_n$, which has the Poisson law with parameter $n\theta$. If $T = T(X)$ is an unbiased estimator of $Q(z)$, then it must satisfy the unbiasedness equation

$$\mathsf E T(X) = \sum_{k=0}^{\infty} T(k)\frac{(n\theta)^k}{k!}e^{-n\theta} = e^{\theta(z-1)},$$

which implies that

$$T(X) = \left(1 + \frac{z-1}{n}\right)^X = \left(\frac{n-1+z}{n}\right)^X;$$

that is, an unbiased estimator of the generating function of the Poisson law is the generating function of the binomial law with parameters $X$ and $p = 1/n$.
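The resulting estimator can be verified numerically: averaging $T(S) = ((n-1+z)/n)^S$ over the Poisson law of $S = X_1 + \dots + X_n$ recovers $e^{\theta(z-1)}$. The values of $n$, $\theta$ and $z$ below are illustrative:

```python
from math import exp

def poisson_probs(lam, K):
    """Poisson(lam) probabilities p_0, ..., p_K, computed recursively."""
    p, out = exp(-lam), []
    for k in range(K + 1):
        out.append(p)
        p *= lam / (k + 1)
    return out

n, theta, z = 4, 0.9, 0.5
probs = poisson_probs(n * theta, 120)   # S = X_1 + ... + X_n ~ Poisson(n*theta)

# T(S) = ((n - 1 + z)/n)^S, the generating function of binomial(S, 1/n) at z.
E_T = sum(((n - 1 + z) / n)**s * p for s, p in enumerate(probs))

assert abs(E_T - exp(theta * (z - 1))) < 1e-12   # matches Q(z) = e^{theta(z-1)}
```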
Examples 6–9 demonstrate that in certain cases, which occur quite frequently in practice, the problem of constructing best estimators is easily solvable, provided that one restricts attention to the class of unbiased estimators. Kolmogorov [1] has considered the problem of constructing unbiased estimators, in particular, for the distribution function of a normal law with unknown parameters. A more general definition of an unbiased estimator is due to E. Lehmann [2], according to whom a statistical estimator $T = T(X)$ of a parameter $\theta$ is called unbiased relative to a loss function $L(\theta, T)$ if

$$\mathsf E_\theta\{L(\theta', T)\} \geq \mathsf E_\theta\{L(\theta, T)\} \quad \text{for all}\quad \theta, \theta' \in \Theta.$$
There is also a modification of this definition (see [3]). Yu.V. Linnik and his students (see [4]) have established that under fairly wide assumptions the best unbiased estimator is independent of the loss function.
References
[1] | A.N. Kolmogorov, "Unbiased estimates" Izv. Akad. Nauk SSSR Ser. Mat. , 14 : 4 (1950) pp. 303–326 (In Russian) |
[2] | E.L. Lehmann, "Testing statistical hypotheses" , Wiley (1959) |
[3] | L.B. Klebanov, "A general definition of unbiasedness" Theor. Probab. Appl. , 21 : 3 (1976) pp. 571–585 (translated from Teor. Veroyatnost. i Primenen. , 21 : 3 (1976) pp. 584–598) |
[4] | L.B. Klebanov, Yu.V. Linnik, A.L. Rukhin, "Unbiased estimation and matrix loss functions" Soviet Math. Dokl. , 12 : 5 (1971) pp. 1526–1528 (translated from Dokl. Akad. Nauk SSSR , 200 : 5 (1971) pp. 1024–1025) |
[5] | S. Zacks, "The theory of statistical inference" , Wiley (1971) |
Unbiased estimator. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Unbiased_estimator&oldid=49486