# A posteriori distribution

The conditional probability distribution of a random variable given the observed data, to be contrasted with its unconditional, or a priori, distribution.

Let $\Theta$ be a random parameter with an a priori density $p(\theta)$, let $X$ be the random result of observations, and let $p(x|\theta)$ be the conditional density of $X$ given $\Theta=\theta$. Then, according to the Bayes formula, the a posteriori distribution of $\Theta$ given $X=x$ has the density

$$p(\theta|x)=\frac{p(\theta)p(x|\theta)}{\int\limits_{-\infty}^\infty p(\theta)p(x|\theta)d\theta}.$$
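The Bayes formula can be evaluated numerically by discretizing $\theta$ and approximating the normalizing integral in the denominator by a Riemann sum. The following sketch assumes, purely for illustration, a standard normal a priori density and a normal likelihood $p(x|\theta)$ with unit variance; in that conjugate case the exact a posteriori distribution is known to be $N(x/2,\,1/2)$, which serves as a check.

```python
import math

def normal_pdf(z, mu, sigma):
    """Density of N(mu, sigma^2) at z."""
    return math.exp(-((z - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Grid over theta (assumed prior N(0,1), assumed likelihood N(theta,1))
n_steps = 4000
thetas = [-10 + 20 * i / n_steps for i in range(n_steps + 1)]
dtheta = 20 / n_steps

x = 2.0  # an observed value (illustrative)

# Numerator of the Bayes formula: p(theta) * p(x | theta)
unnorm = [normal_pdf(t, 0, 1) * normal_pdf(x, t, 1) for t in thetas]

# Denominator: the integral of p(theta) * p(x | theta) d(theta)
Z = sum(unnorm) * dtheta

# A posteriori density p(theta | x) on the grid
posterior = [u / Z for u in unnorm]

# Posterior mean; conjugacy gives the exact value x/2 = 1.0
post_mean = sum(t * p for t, p in zip(thetas, posterior)) * dtheta
```

The computed posterior mean agrees with the closed-form value $x/2$ up to discretization error, confirming that the grid approximation of the denominator is adequate.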

If $T(x)$ is a sufficient statistic for the family of distributions with densities $p(x|\theta)$, then the a posteriori distribution depends on $x$ only through $T(x)$. The asymptotic behaviour of the a posteriori distribution $p(\theta|x_1,\dots,x_n)$ as $n\to\infty$, where the $x_j$ are the results of independent observations with density $p(x|\theta_0)$, is "almost independent" of the a priori distribution of $\Theta$.
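Both remarks can be illustrated with Bernoulli observations, a choice made here only for concreteness. The number of successes $T(x)=\sum_j x_j$ is a sufficient statistic, and with a Beta$(a,b)$ a priori density the a posteriori distribution is Beta$(a+s,\,b+n-s)$, which depends on the sample only through $s=T(x)$. For large $n$, posteriors arising from different priors nearly coincide:

```python
def posterior_mean(a, b, s, n):
    """Mean of the Beta(a+s, b+n-s) posterior for Bernoulli data
    with s successes in n trials under a Beta(a, b) prior.
    Note it depends on the data only through s = T(x)."""
    return (a + s) / (a + b + n)

n, s = 10000, 3000  # illustrative sample: 3000 successes in 10000 trials

m1 = posterior_mean(1, 1, s, n)  # uniform prior Beta(1,1)
m2 = posterior_mean(5, 1, s, n)  # a markedly different prior Beta(5,1)

# For large n both posterior means are close to the empirical frequency
# s/n = 0.3, so the influence of the prior has almost vanished.
```

Here `m1` and `m2` differ by less than $10^{-3}$, reflecting the asymptotic near-independence of the a posteriori distribution from the a priori one.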

For the role played by a posteriori distributions in the theory of statistical decisions, see Bayesian approach.
