Random variable

2020 Mathematics Subject Classification: Primary: 60-01 [MSN][ZBL]

One of the basic concepts in probability theory. The role of random variables and their expectations was clearly pointed out by P.L. Chebyshev (1867; see [C]). The realization that the concept of a random variable is a special case of the general concept of a measurable function came much later. A full exposition, free from any superfluous restrictions, of the basics of probability theory in a measure-theoretical setting was first given by A.N. Kolmogorov (1933; see [Ko]). This made it clear that a random variable is nothing but a measurable function on a probability space. This has to be stated clearly, even in an elementary exposition of probability theory. In the academic literature this point of view was adopted by W. Feller (see the foreword to [F], where the exposition is based on the concept of a space of elementary events, and where it is stressed that only in this context does the notion of a random variable become meaningful).

Let $(\Omega,\mathcal{A},P)$ be a probability space. A single-valued real-valued function $X=X(\omega)$ defined on $\Omega$ is called a random variable if for any real $x$ the set $\{\omega\colon X(\omega)<x\}$ belongs to the class $\mathcal{A}$. Let $X$ be any random variable and $\mathcal{A}_X$ the class of subsets $C\subset\mathbf{R}^1$ for which $\{\omega\colon X(\omega)\in C\}\in\mathcal{A}$; this is a $\sigma$-algebra. The class $\mathcal{B}_1$ of all Borel subsets of $\mathbf{R}^1$ is always contained in $\mathcal{A}_X$. The measure $P_X$ defined on $\mathcal{B}_1$ by the equation $P_X(B)=P\{\omega\colon X(\omega)\in B\}$, $B\in\mathcal{B}_1$, is called the probability distribution of $X$. This measure is uniquely determined by the distribution function of $X$: \begin{equation} F_X(x)=P_X((-\infty,x))=P\{\omega\colon X(\omega)<x\}. \end{equation} The values $P\{\omega\colon X(\omega)\in C\}$ for $C\in\mathcal{A}_X$ (that is, the values of a measure extending $P_X$ to $\mathcal{A}_X$) are not, in general, uniquely determined by $F_X$ (a sufficient condition for uniqueness is so-called perfectness of the measure $P$; see Perfect measure, and also [GK]). This must constantly be borne in mind (for example, when proving that the distribution of a random variable is uniquely determined by its characteristic function).
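
For example, the indicator of an event $A\in\mathcal{A}$, that is, the random variable equal to $1$ on $A$ and to $0$ on its complement, has the distribution function \begin{equation} F_X(x)= \begin{cases} 0, & x\le 0,\\ 1-P(A), & 0<x\le 1,\\ 1, & x>1. \end{cases} \end{equation}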

If a random variable $X$ takes a finite or countable number of pairwise distinct values $x_1,\ldots,x_n,\ldots$, with probabilities $p_1,\ldots,p_n,\ldots$ ($p_n=P\{\omega\colon X(\omega)=x_n\}$), then its probability distribution (which is said to be discrete in this case) is given by \begin{equation} P_X(B)=\sum_{x_n\in B}p_n. \end{equation} The distribution of $X$ is called continuous if there is a function $p_X(x)$ (called the probability density) such that \begin{equation} P_X(B)=\int\limits_{B}p_X(x)\,dx \end{equation} for every interval $B$ (or, equivalently, for every Borel set $B$). In the usual terminology of mathematical analysis this means that $P_X$ is absolutely continuous with respect to Lebesgue measure on $\mathbf{R}^1$.
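
For example, a random variable $X$ having the Poisson distribution with parameter $\lambda>0$ takes the values $0,1,2,\ldots$ with probabilities \begin{equation} p_n=P\{\omega\colon X(\omega)=n\}=e^{-\lambda}\frac{\lambda^n}{n!}, \end{equation} while a random variable with the standard normal distribution has a continuous distribution with density \begin{equation} p_X(x)=\frac{1}{\sqrt{2\pi}}\,e^{-x^2/2}. \end{equation}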

Many general properties of the probability distribution of a random variable $X$ are sufficiently well characterized by a small number of numerical characteristics. For example, the median (in statistics) and the quantiles have the advantage of being defined for all distributions, although the most widely used characteristics are the mathematical expectation $\mathsf{E}X$ and the dispersion (or variance) $\mathsf{D}X$ of $X$. See also Probability theory.
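
For the discrete and continuous distributions described above these characteristics are given, provided the corresponding series or integral converges absolutely, by \begin{equation} \mathsf{E}X=\sum_{n}x_np_n \quad\text{or}\quad \mathsf{E}X=\int\limits_{-\infty}^{+\infty}x\,p_X(x)\,dx, \end{equation} respectively, and in both cases $\mathsf{D}X=\mathsf{E}(X-\mathsf{E}X)^2$.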

A complex random variable $X$ is determined by a pair of real random variables $X_1$ and $X_2$ by the formula \begin{equation} X(\omega)=X_1(\omega)+iX_2(\omega). \end{equation} An ordered set $(X_1,\ldots,X_s)$ of random variables can be regarded as a random vector with values in $\mathbf{R}^s$.
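
As in the one-dimensional case, the distribution of such a vector is the measure $P_X$ defined on the Borel subsets $B$ of $\mathbf{R}^s$ by \begin{equation} P_X(B)=P\{\omega\colon(X_1(\omega),\ldots,X_s(\omega))\in B\}. \end{equation}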

The notion of a random variable can be generalized to the infinite-dimensional case using the concept of a random element.

It is worth noting that in certain problems of mathematical analysis and number theory it is convenient to regard the functions involved in their formulation as random variables defined on suitable probability spaces (see [Ka] for example).

References

[C] P.L. Chebyshev, "On mean values", Complete collected works, 2, Moscow-Leningrad (1947) (In Russian)
[Ko] A.N. Kolmogorov, "Grundbegriffe der Wahrscheinlichkeitsrechnung", Springer (1973) (Translated from Russian) MR0362415
[F] W. Feller, "An introduction to probability theory and its applications", 1, Wiley (1950)
[GK] B.V. Gnedenko, A.N. Kolmogorov, "Limit distributions for sums of independent random variables", Springer (1958) (Translated from Russian) MR0233400 MR0062975 MR0041377 Zbl 0056.36001
[Ka] M. Kac, "Statistical independence in probability, analysis and number theory", Math. Assoc. Amer. (1959) MR0110114 Zbl 0088.10303

Comments

Other mathematicians adopting Kolmogorov's point of view are, e.g., J.L. Doob [D] and P. Lévy [L].

References

[D] J.L. Doob, "Stochastic processes depending on a continuous parameter", Trans. Amer. Math. Soc., 42 (1937) pp. 107–140 MR1501916 Zbl 0017.02701 Zbl 63.1075.01
[L] P. Lévy, "Le mouvement brownien plan", Amer. J. Math., 62 (1940) pp. 487–550 MR0002734 Zbl 0024.13906 Zbl 66.0619.02
This article was adapted from an original article by Yu.V. Prokhorov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.