Bayesian estimator
An estimator of an unknown parameter from the results of observations using the Bayesian approach. In such an approach to the problem of statistical estimation it is usually assumed that the unknown parameter $\theta \in \Theta \subseteq \mathbf{R}^{k}$ is a random variable with a given a priori distribution $\pi = \pi(d\theta)$, that the space of decisions $D$ is identical to the set $\Theta$, and that the loss $L(\theta, d)$ expresses the deviation between the variable $\theta$ and its estimator $d$. It is therefore supposed, as a rule, that the function $L(\theta, d)$ has the form $L(\theta, d) = a(\theta) \lambda(\theta - d)$, where $\lambda$ is some non-negative function of the error vector $\theta - d$. If $k = 1$, it is often assumed that $\lambda(\theta - d) = |\theta - d|^{\alpha}$, $\alpha > 0$; the most useful and mathematically most convenient is the quadratic loss function $L(\theta, d) = |\theta - d|^{2}$.
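For intuition, the Bayes estimate under a loss from this family can be found by direct numerical minimization of the posterior expected loss. The following sketch is an illustration of my own, not part of the original article: the grid and the bimodal example posterior are arbitrary choices. It recovers the well-known facts that quadratic loss ($\alpha = 2$) leads to the posterior mean and absolute loss ($\alpha = 1$) to a posterior median.

```python
import numpy as np

# Hedged numerical sketch (not from the article): for a posterior given on a
# grid, the Bayes estimate under the loss |theta - d|^alpha is found by
# minimizing the posterior expected loss directly.  The bimodal example
# posterior below is an arbitrary illustration.
grid = np.linspace(-2.0, 4.0, 1201)
posterior = (0.7 * np.exp(-0.5 * (grid - 0.0) ** 2 / 0.1)
             + 0.3 * np.exp(-0.5 * (grid - 2.0) ** 2 / 0.1))
posterior /= posterior.sum()          # normalize on the (uniform) grid

def bayes_estimate(alpha):
    # posterior expected loss E{|theta - d|^alpha | x} for every candidate d
    expected_loss = (np.abs(grid[None, :] - grid[:, None]) ** alpha) @ posterior
    return grid[np.argmin(expected_loss)]

print(bayes_estimate(2.0), grid @ posterior)  # quadratic loss -> posterior mean (~0.6)
print(bayes_estimate(1.0))                    # absolute loss -> posterior median (~0.18)
```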
For the quadratic loss function the Bayesian estimator (Bayesian decision function) $\delta^{*} = \delta^{*}(x)$ is defined as the function for which the minimum total loss

$$ \inf_{\delta} \rho(\pi, \delta) = \inf_{\delta} \int\limits_{\Theta} \int\limits_{X} |\theta - \delta(x)|^{2} \, {\mathsf P}_{\theta}(dx) \, \pi(d\theta) $$

is attained, or, equivalently, for which the minimum conditional loss

$$ \inf_{\delta} {\mathsf E} \{ [\theta - \delta(x)]^{2} \mid x \} $$

is attained. It follows that in the case of a quadratic loss function the Bayesian estimator $\delta^{*}(x)$ coincides with the a posteriori average $\delta^{*}(x) = {\mathsf E}(\theta \mid x)$, and the Bayes risk is

$$ \rho(\pi, \delta^{*}) = {\mathsf E}[{\mathsf D}(\theta \mid x)], $$

where ${\mathsf D}(\theta \mid x)$ is the variance of the a posteriori distribution:

$$ {\mathsf D}(\theta \mid x) = {\mathsf E} \{ [\theta - {\mathsf E}(\theta \mid x)]^{2} \mid x \}. $$
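The relation $\rho(\pi, \delta^{*}) = {\mathsf E}[{\mathsf D}(\theta \mid x)]$ is easy to check by simulation. The sketch below is an illustration of my own choosing, not from the article: it uses a Beta prior with binomial observations, a conjugate pair whose posterior mean and variance are available in closed form; the concrete values of $a$, $b$ and $n$ are arbitrary.

```python
import numpy as np

# Hedged Monte Carlo check (Beta-Binomial model, chosen for convenience):
# the Bayes risk of the posterior mean under quadratic loss should equal
# the expected posterior variance E[D(theta | x)].
rng = np.random.default_rng(1)
a, b, n, trials = 2.0, 3.0, 20, 200_000

theta = rng.beta(a, b, size=trials)      # theta ~ prior Beta(a, b)
x = rng.binomial(n, theta)               # x | theta ~ Binomial(n, theta)

a_post, b_post = a + x, b + n - x        # posterior is Beta(a + x, b + n - x)
post_mean = a_post / (a_post + b_post)   # delta*(x) = E(theta | x)
post_var = (a_post * b_post
            / ((a_post + b_post) ** 2 * (a_post + b_post + 1)))  # D(theta | x)

print(np.mean((theta - post_mean) ** 2))  # Monte Carlo estimate of the Bayes risk
print(np.mean(post_var))                  # E[D(theta | x)]; should agree closely
```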
Example. Let $x = (x_{1}, \dots, x_{n})$, where $x_{1}, \dots, x_{n}$ are independent identically-distributed random variables with normal distributions $N(\theta, \sigma^{2})$, where $\sigma^{2}$ is known, while the unknown parameter $\theta$ has the normal distribution $N(\mu, \tau^{2})$. Since the a posteriori distribution of $\theta$ (given $x$) is normal $N(\mu_{n}, \tau_{n}^{2})$ with

$$ \mu_{n} = \frac{n \overline{x} \sigma^{-2} + \mu \tau^{-2}}{n \sigma^{-2} + \tau^{-2}}, \qquad \tau_{n}^{-2} = n \sigma^{-2} + \tau^{-2}, $$

where $\overline{x} = (x_{1} + \dots + x_{n})/n$, it follows that for the quadratic loss function $|\theta - d|^{2}$ the Bayesian estimator is $\delta^{*}(x) = \mu_{n}$, while the Bayes risk is $\tau_{n}^{2} = \sigma^{2} \tau^{2} / (n \tau^{2} + \sigma^{2})$.
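The example translates directly into a short computation. The sketch below follows the formulas above; the concrete values of $\mu$, $\tau^{2}$, $\sigma^{2}$ and $n$ are arbitrary illustration choices.

```python
import numpy as np

# Numeric sketch of the normal-normal example above; parameter values are
# arbitrary.  Draws theta from the prior, data from N(theta, sigma^2), and
# evaluates the posterior mean (the Bayes estimate) and posterior variance.
rng = np.random.default_rng(2)
mu, tau2, sigma2, n = 1.0, 4.0, 2.0, 25

theta = rng.normal(mu, np.sqrt(tau2))            # theta ~ N(mu, tau^2)
x = rng.normal(theta, np.sqrt(sigma2), size=n)   # x_i | theta ~ N(theta, sigma^2)
xbar = x.mean()

mu_n = (n * xbar / sigma2 + mu / tau2) / (n / sigma2 + 1 / tau2)  # delta*(x)
tau2_n = 1.0 / (n / sigma2 + 1 / tau2)                            # posterior variance

print(mu_n)                                               # Bayes estimate of theta
print(tau2_n, sigma2 * tau2 / (n * tau2 + sigma2))        # Bayes risk, two equal forms
```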