A posteriori distribution
A conditional probability distribution of a random variable, to be contrasted with its unconditional or a priori distribution.
Let $\Theta$ be a random parameter with an a priori density $p(\theta)$, let $X$ be a random result of observations and let $p(x|\theta)$ be the conditional density of $X$ when $\Theta=\theta$; then the a posteriori distribution of $\Theta$ for a given $X=x$, according to the Bayes formula, has the density
$$p(\theta|x)=\frac{p(\theta)p(x|\theta)}{\int\limits_{-\infty}^\infty p(\theta)p(x|\theta)d\theta}.$$
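A minimal numerical sketch of this formula (not part of the original article): the standard normal prior, the unit-variance normal observation model and the observed value $x=1.2$ below are illustrative assumptions, and the integral in the denominator is approximated by a sum on a grid.

```python
# Sketch of the Bayes formula for the a posteriori density p(theta|x).
# The prior, the likelihood and the observed x are illustrative choices.
import numpy as np

def posterior_density(theta_grid, prior, likelihood, x):
    """Evaluate p(theta|x) on a grid by normalizing prior(theta) * p(x|theta)."""
    unnormalized = prior(theta_grid) * likelihood(x, theta_grid)
    # Approximate the integral in the denominator by a Riemann sum on the grid.
    normalizer = np.sum(unnormalized) * (theta_grid[1] - theta_grid[0])
    return unnormalized / normalizer

prior = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)                # p(theta): N(0, 1)
likelihood = lambda x, t: np.exp(-(x - t)**2 / 2) / np.sqrt(2 * np.pi)  # p(x|theta): N(theta, 1)

theta = np.linspace(-5.0, 5.0, 2001)
post = posterior_density(theta, prior, likelihood, x=1.2)
print(np.sum(post) * (theta[1] - theta[0]))   # ~1.0: a proper density
print(theta[np.argmax(post)])                 # posterior mode, ~0.6 for this conjugate pair
```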
If $T(x)$ is a sufficient statistic for the family of distributions with densities $p(x|\theta)$, then the a posteriori distribution depends not on $x$ itself, but on $T(x)$. The asymptotic behaviour of the a posteriori distribution $p(\theta|x_1,\dots,x_n)$ as $n\to\infty$, where $x_j$ are the results of independent observations with density $p(x|\theta_0)$, is "almost independent" of the a priori distribution of $\Theta$.
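The following sketch illustrates both remarks under assumed choices that do not appear in the article (Bernoulli observations with $\theta_0=0.3$ and conjugate Beta priors): the posterior parameters depend on the sample only through the sufficient statistic $T(x)=x_1+\dots+x_n$, and as $n$ grows the posteriors obtained from different a priori densities nearly coincide.

```python
# Bernoulli model: T(x) = x_1 + ... + x_n is sufficient, Beta(a, b) is conjugate.
# theta_0 = 0.3 and the two priors are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
theta_0 = 0.3

def posterior_params(a, b, x):
    """Beta posterior parameters: they depend on the sample only through T(x) and n."""
    T, n = x.sum(), x.size
    return a + T, b + n - T

for n in (10, 100, 10000):
    x = rng.binomial(1, theta_0, size=n)
    for a, b in ((1, 1), (5, 1)):               # two different a priori Beta densities
        a_post, b_post = posterior_params(a, b, x)
        mean = a_post / (a_post + b_post)       # posterior mean
        sd = np.sqrt(a_post * b_post / ((a_post + b_post)**2 * (a_post + b_post + 1)))
        print(n, (a, b), round(mean, 3), round(sd, 4))
# As n grows, both posteriors concentrate near theta_0 = 0.3, so the influence
# of the a priori distribution disappears.
```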
For the role played by a posteriori distributions in the theory of statistical decisions, see Bayesian approach.