
Density of a probability distribution



probability density

The derivative of the distribution function corresponding to an absolutely-continuous probability measure.

Let $X$ be a random vector taking values in the $n$-dimensional Euclidean space $\mathbf R^n$ ($n \geq 1$), let $F$ be its distribution function, and let there exist a non-negative function $f$ such that

$$F(x_1, \dots, x_n) = \int_{-\infty}^{x_1} \cdots \int_{-\infty}^{x_n} f(u_1, \dots, u_n)\, du_1 \cdots du_n$$

for any real $x_1, \dots, x_n$. Then $f$ is called the probability density of $X$, and for any Borel set $A \subset \mathbf R^n$,

$${\mathsf P}\{X \in A\} = \int \cdots \int_A f(u_1, \dots, u_n)\, du_1 \cdots du_n.$$

Any non-negative integrable function $f$ satisfying the condition

$$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1, \dots, x_n)\, dx_1 \cdots dx_n = 1$$

is the probability density of some random vector.
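
As a concrete check (an illustration, not part of the original article), the following Python sketch verifies both properties for the standard normal density in the case $n = 1$, using SciPy quadrature; the density and the Borel set $A = [0, 1.96]$ are arbitrary choices for the example.

```python
import numpy as np
from scipy import integrate

# Standard normal density on R (the case n = 1): non-negative and
# integrating to 1, hence a probability density.
def f(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Total mass should be 1 (up to quadrature error).
total, _ = integrate.quad(f, -np.inf, np.inf)
print(total)                      # ~1.0

# P{X in A} for the Borel set A = [0, 1.96].
p_A, _ = integrate.quad(f, 0.0, 1.96)
print(p_A)                        # ~0.475
```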

If two random vectors $X$ and $Y$ taking values in $\mathbf R^n$ are independent and have probability densities $f$ and $g$ respectively, then the random vector $X + Y$ has the probability density $h$ that is the convolution of $f$ and $g$:

$$h(x_1, \dots, x_n) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(x_1 - u_1, \dots, x_n - u_n)\, g(u_1, \dots, u_n)\, du_1 \cdots du_n = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f(u_1, \dots, u_n)\, g(x_1 - u_1, \dots, x_n - u_n)\, du_1 \cdots du_n.$$
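
The convolution formula can be checked numerically. The sketch below (an illustration only) assumes $X$ and $Y$ are independent standard normal variables, in which case $X + Y$ is known to be normal with mean $0$ and variance $2$; the helper names `h` and `n02` are ours, not the article's.

```python
import numpy as np
from scipy import integrate

# Common density of X and Y, assumed independent standard normal.
def f(x):
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

# Convolution: h(x) = ∫ f(x - u) f(u) du is the density of X + Y.
def h(x):
    val, _ = integrate.quad(lambda u: f(x - u) * f(u), -np.inf, np.inf)
    return val

# X + Y is normal with mean 0 and variance 2; compare densities.
def n02(x):
    return np.exp(-x**2 / 4) / np.sqrt(4 * np.pi)

for x in (0.0, 1.0, 2.0):
    print(x, h(x), n02(x))        # the two columns agree
```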

Let $X = (X_1, \dots, X_n)$ and $Y = (Y_1, \dots, Y_m)$ be random vectors taking values in $\mathbf R^n$ and $\mathbf R^m$ ($n, m \geq 1$) and having probability densities $f$ and $g$ respectively, and let $Z = (X_1, \dots, X_n, Y_1, \dots, Y_m)$ be a random vector in $\mathbf R^{n+m}$. If $X$ and $Y$ are independent, then $Z$ has the probability density $h$, which is called the joint probability density of the random vectors $X$ and $Y$, where

$$h(t_1, \dots, t_{n+m}) = f(t_1, \dots, t_n)\, g(t_{n+1}, \dots, t_{n+m}). \tag{1}$$

Conversely, if Z has a probability density that satisfies (1), then X and Y are independent.
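
A small numerical illustration of (1), with arbitrarily chosen marginals (standard normal and exponential): the probability of a product set computed from the factorized joint density coincides with the product of the marginal probabilities.

```python
import numpy as np
from scipy import integrate

# Marginal densities: X standard normal, Y exponential with rate 1
# (arbitrary choices for this illustration).
f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
g = lambda y: np.exp(-y) if y >= 0 else 0.0

# Under independence, the joint density factorizes as in (1).
# dblquad integrates func(y, x) with x in [0, 1] and y in [0, 2].
joint, _ = integrate.dblquad(lambda y, x: f(x) * g(y), 0.0, 1.0, 0.0, 2.0)

# The same probability as a product of marginal probabilities.
p_x, _ = integrate.quad(f, 0.0, 1.0)
p_y, _ = integrate.quad(g, 0.0, 2.0)
print(joint, p_x * p_y)           # equal up to quadrature error
```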

The characteristic function $\phi$ of a random vector $X$ having a probability density $f$ is expressed by

$$\phi(t_1, \dots, t_n) = \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{i(t_1 x_1 + \dots + t_n x_n)} f(x_1, \dots, x_n)\, dx_1 \cdots dx_n.$$

If $\phi$ is absolutely integrable, then $f$ is a bounded continuous function, and

$$f(x_1, \dots, x_n) = \frac{1}{(2\pi)^n} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} e^{-i(t_1 x_1 + \dots + t_n x_n)} \phi(t_1, \dots, t_n)\, dt_1 \cdots dt_n.$$
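
As an illustration of the inversion formula (again not from the original article), the sketch below starts from the known characteristic function of the standard normal law, $\phi(t) = e^{-t^2/2}$, and recovers the density numerically for $n = 1$.

```python
import numpy as np
from scipy import integrate

# Characteristic function of the standard normal law (a known closed form).
phi = lambda t: np.exp(-t**2 / 2)

# Inversion formula for n = 1; phi is real and even, so the imaginary
# part of e^{-itx} phi(t) integrates to zero and cos(tx) suffices.
def f_from_phi(x):
    val, _ = integrate.quad(lambda t: np.cos(t * x) * phi(t),
                            -np.inf, np.inf)
    return val / (2 * np.pi)

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # target density
for x in (0.0, 0.5, 1.0):
    print(x, f_from_phi(x), f(x))  # columns agree
```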

The probability density $f$ and the corresponding characteristic function $\phi$ are also related by Plancherel's identity: the function $f^2$ is integrable if and only if $|\phi|^2$ is integrable, and in that case

$$\int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} f^2(x_1, \dots, x_n)\, dx_1 \cdots dx_n = \frac{1}{(2\pi)^n} \int_{-\infty}^{\infty} \cdots \int_{-\infty}^{\infty} |\phi(t_1, \dots, t_n)|^2\, dt_1 \cdots dt_n.$$
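
A numerical check of Plancherel's identity for the standard normal density (our choice for the illustration), where both sides can also be computed in closed form as $1/(2\sqrt{\pi}) \approx 0.2821$:

```python
import numpy as np
from scipy import integrate

f = lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)  # standard normal density
phi = lambda t: np.exp(-t**2 / 2)                     # its characteristic function

lhs, _ = integrate.quad(lambda x: f(x) ** 2, -np.inf, np.inf)
rhs, _ = integrate.quad(lambda t: abs(phi(t)) ** 2, -np.inf, np.inf)
rhs /= 2 * np.pi

# Both sides equal 1 / (2 * sqrt(pi)) for this density.
print(lhs, rhs, 1 / (2 * np.sqrt(np.pi)))
```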

Let $(\Omega, \mathfrak A)$ be a measurable space, and let $\nu$ and $\mu$ be $\sigma$-finite measures on $(\Omega, \mathfrak A)$ with $\nu$ absolutely continuous with respect to $\mu$, i.e. $\mu(A) = 0$ implies $\nu(A) = 0$ for $A \in \mathfrak A$. In that case there exists a non-negative measurable function $f$ on $(\Omega, \mathfrak A)$ such that

$$\nu(A) = \int_A f\, d\mu$$

for any $A \in \mathfrak A$. The function $f$ is called the Radon–Nikodým derivative of $\nu$ with respect to $\mu$; if $\nu$ is a probability measure, $f$ is also called the probability density of $\nu$ relative to $\mu$.
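
On a finite space the Radon–Nikodým derivative reduces to a ratio of point masses, which the following sketch (an illustration with made-up masses, not part of the article) makes explicit:

```python
# A finite measurable space: every subset is measurable, and the
# Radon-Nikodym derivative is the pointwise ratio of masses wherever
# the dominating measure is positive.
omega = ["a", "b", "c"]
mu = {"a": 2.0, "b": 1.0, "c": 1.0}    # dominating (finite) measure
nu = {"a": 0.5, "b": 0.5, "c": 0.0}    # probability measure, nu << mu

f = {w: nu[w] / mu[w] for w in omega}  # dnu/dmu

# Check nu(A) = integral of f over A with respect to mu, e.g. A = {a, c}.
A = {"a", "c"}
print(sum(nu[w] for w in A), sum(f[w] * mu[w] for w in A))  # 0.5 0.5
```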

A concept closely related to the probability density is that of a dominated family of distributions. A family of probability distributions $\mathfrak P$ on a measurable space $(\Omega, \mathfrak A)$ is called dominated if there exists a $\sigma$-finite measure $\mu$ on $(\Omega, \mathfrak A)$ such that each probability measure from $\mathfrak P$ has a probability density relative to $\mu$ (or, what is the same, if each measure from $\mathfrak P$ is absolutely continuous with respect to $\mu$). For example, the family of all normal distributions on $\mathbf R$ is dominated by Lebesgue measure. The assumption of dominance is important in certain theorems of mathematical statistics, such as the Neyman factorization criterion for sufficient statistics.

This article was adapted from an original article by N.G. Ushakov (originator), which appeared in the Encyclopedia of Mathematics (ISBN 1402006098).