A posteriori distribution
A conditional probability distribution of a random variable, to be contrasted with its unconditional or a priori distribution.
Let $\Theta$ be a random parameter with an a priori density $p(\theta)$, let $X$ be a random result of observations and let $p(x\mid\theta)$ be the conditional density of $X$ when $\Theta=\theta$; then the a posteriori distribution of $\Theta$ for a given $X=x$, according to the Bayes formula, has the density
$$p(\theta\mid x)=\frac{p(\theta)p(x\mid\theta)}{\int\limits_{-\infty}^\infty p(\theta)p(x\mid\theta)\,d\theta}.$$
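For example, if the a priori density is normal, $p(\theta)=N(\theta;\mu,\tau^2)$, and the observation is normal with mean $\theta$ and known variance, $p(x\mid\theta)=N(x;\theta,\sigma^2)$, where $N(\cdot\,;m,v)$ denotes the normal density with mean $m$ and variance $v$, then the formula gives a normal a posteriori density:
$$p(\theta\mid x)=N\!\left(\theta;\ \frac{\sigma^2\mu+\tau^2x}{\sigma^2+\tau^2},\ \frac{\sigma^2\tau^2}{\sigma^2+\tau^2}\right).$$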
If $T(x)$ is a sufficient statistic for the family of distributions with densities $p(x\mid\theta)$, then the a posteriori distribution depends on $x$ only through $T(x)$. The asymptotic behaviour of the a posteriori distribution $p(\theta\mid x_1,\dots,x_n)$ as $n\to\infty$, where the $x_j$ are the results of independent observations with density $p(x\mid\theta_0)$ for the true value $\theta_0$ of the parameter, is "almost independent" of the a priori distribution of $\Theta$.
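Both effects can be seen in a simple example. Let $x_1,\dots,x_n$ be independent observations with $p(x\mid\theta)=\theta^x(1-\theta)^{1-x}$, $x\in\{0,1\}$, and let the a priori density be the beta density $p(\theta)\propto\theta^{a-1}(1-\theta)^{b-1}$ on $(0,1)$. With $T=x_1+\dots+x_n$ the a posteriori density is
$$p(\theta\mid x_1,\dots,x_n)\propto\theta^{a+T-1}(1-\theta)^{b+n-T-1},$$
which depends on the observations only through the sufficient statistic $T$; its mean $(a+T)/(a+b+n)$ converges to $\theta_0$ as $n\to\infty$ whatever the values of $a,b>0$, and the distribution concentrates around $\theta_0$.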
For the role played by a posteriori distributions in statistical decision theory, see Bayesian approach.