# Random function

A function of an arbitrary argument $t$ (defined on the set $T$ of its values, and taking numerical values or, more generally, values in a vector space) whose values are defined in terms of a certain experiment and may vary with the outcome of this experiment according to a given probability distribution. In probability theory, attention centres on numerical (that is, scalar) random functions $X ( t)$; a random vector function $\mathbf X ( t)$ can be regarded as the aggregate of the scalar functions $X _ \alpha ( t)$, where $\alpha$ ranges over the finite or countable set $A$ of components of $\mathbf X$, that is, as a numerical random function on the set $T _ {1} = T \times A$ of pairs $( t , \alpha )$, $t \in T$, $\alpha \in A$.

When $T$ is finite, $X ( t)$ is a finite set of random variables, and can be regarded as a multi-dimensional (vector) random variable characterized by a multi-dimensional distribution function. When $T$ is infinite, the case mostly studied is that in which $t$ takes numerical (real) values; in this case, $t$ usually denotes time, and $X ( t)$ is called a stochastic process, or, if $t$ takes only integral values, a random sequence (or time series). If the values of $t$ are the points of a manifold (such as a $k$-dimensional Euclidean space $\mathbf R ^ {k}$), then $X ( t)$ is called a random field.
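
To make the notions of a random sequence and its realizations concrete, here is a minimal Python sketch; the Gaussian random walk, and the use of a seed to play the role of the elementary event $\omega$, are illustrative choices, not part of the article:

```python
import random

def random_walk(n_steps, omega):
    """One realization x(t), t = 0, ..., n_steps, of a random sequence:
    here a Gaussian random walk.  The elementary event omega is modeled,
    purely for illustration, as an RNG seed, so that fixing omega fixes
    the whole sample path, while varying omega varies the path."""
    rng = random.Random(omega)
    x, path = 0.0, [0.0]
    for _ in range(n_steps):
        x += rng.gauss(0.0, 1.0)  # independent Gaussian increments
        path.append(x)
    return path

path = random_walk(10, omega=42)  # a trajectory of the process
same = random_walk(10, omega=42)  # fixing omega reproduces it exactly
```

Fixing `omega` yields a single deterministic trajectory; a different `omega` yields a different realization of the same random sequence.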

The probability distribution of the values of a random function $X ( t)$ defined on an infinite set $T$ is characterized by the aggregate of finite-dimensional probability distributions of sets of random variables $X ( t _ {1} ) \dots X ( t _ {n} )$ corresponding to all finite subsets $\{ t _ {1} \dots t _ {n} \}$ of $T$, that is, the aggregate of corresponding finite-dimensional distribution functions $F _ {t _ {1} \dots t _ {n} } ( x _ {1} \dots x _ {n} )$, satisfying the consistency conditions:

$$\tag{1 } F _ {t _ {1} \dots t _ {n} , t _ {n+1} \dots t _ {n+m} } ( x _ {1} \dots x _ {n} , \infty \dots \infty ) = F _ {t _ {1} \dots t _ {n} } ( x _ {1} \dots x _ {n} ) ,$$

$$\tag{2 } F _ {t _ {i _ {1} } \dots t _ {i _ {n} } } ( x _ {i _ {1} } \dots x _ {i _ {n} } ) = F _ {t _ {1} \dots t _ {n} } ( x _ {1} \dots x _ {n} ) ,$$

where $i _ {1} \dots i _ {n}$ is an arbitrary permutation of the subscripts $1 \dots n$. This characterization of the probability distribution of $X ( t)$ is sufficient in all cases when one is only interested in events depending on the values of $X$ on countable subsets of $T$. But it does not enable one to determine the probability of properties of $X$ that depend on its values on a continuous subset of $T$, such as the probability of continuity or differentiability, or the probability that $X ( t) < a$ on a continuous subset of $T$ (see Separable process).
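
The consistency conditions (1) and (2) can be observed directly on simulated finite-dimensional distributions. The following sketch (again taking a Gaussian random walk as an illustrative process; all function names are ad hoc) builds empirical distribution functions from sample paths: condition (2) then holds exactly, and condition (1) holds once the argument sent to $\infty$ is replaced by a value exceeding every sample:

```python
import random

def sample_paths(times, n_paths, seed):
    """Draw n_paths independent realizations of a Gaussian random walk
    (an illustrative choice of process), each recorded at the finite
    set of integer times 'times' -- a finite subset of T."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_paths):
        x, values = 0.0, {}
        for t in range(1, max(times) + 1):
            x += rng.gauss(0.0, 1.0)
            if t in times:
                values[t] = x
        paths.append(values)
    return paths

def empirical_F(paths, ts, xs):
    """Empirical finite-dimensional distribution function
    F_{t_1 ... t_n}(x_1 ... x_n) estimated from the sampled paths."""
    hits = sum(all(p[t] <= x for t, x in zip(ts, xs)) for p in paths)
    return hits / len(paths)

paths = sample_paths({1, 3}, n_paths=2000, seed=0)
BIG = 1e9  # stands in for the limit x -> +infinity in condition (1)

# Condition (1): letting the extra argument tend to +infinity recovers
# the lower-dimensional distribution function.
lhs1 = empirical_F(paths, (1, 3), (0.5, BIG))
rhs1 = empirical_F(paths, (1,), (0.5,))

# Condition (2): permuting the pairs (t_i, x_i) leaves F unchanged.
lhs2 = empirical_F(paths, (3, 1), (0.2, 0.5))
rhs2 = empirical_F(paths, (1, 3), (0.5, 0.2))
```

On the empirical distributions both identities hold exactly, since the same set of sample paths underlies each side.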

Random functions can be described more generally in terms of aggregates of random variables $X = X ( \omega )$ defined on a fixed probability space $( \Omega , {\mathcal A} , {\mathsf P} )$ (where $\Omega$ is a set of points $\omega$, ${\mathcal A}$ is a $\sigma$-algebra of subsets of $\Omega$ and ${\mathsf P}$ is a given probability measure on ${\mathcal A}$), one for each point $t$ of $T$. In this approach, a random function on $T$ is regarded as a function $X ( t , \omega )$ of two variables $t \in T$ and $\omega \in \Omega$ which is ${\mathcal A}$-measurable for every $t$ (that is, for fixed $t$ it reduces to a random variable defined on the probability space $( \Omega , {\mathcal A} , {\mathsf P} )$). By taking a fixed value $\omega _ {0}$ of $\omega$, one obtains a numerical function $X ( t , \omega _ {0} ) = x ( t)$ on $T$, called a realization (or sample function or, when $t$ denotes time, a trajectory) of $X ( t)$; ${\mathcal A}$ and ${\mathsf P}$ induce a $\sigma$-algebra of subsets and a probability measure defined on it in the function space $\mathbf R ^ {T} = \{ {x ( t) } : {t \in T } \}$ of realizations $x ( t)$, and the specification of this measure can also be regarded as equivalent to that of the random function.

The specification of a random function as a probability measure on a $\sigma$-algebra of subsets of the function space $\mathbf R ^ {T}$ of all possible realizations $x ( t)$ can be regarded as a special case of its general specification as a function of two variables $X ( t , \omega )$, where $\omega$ belongs to the probability space $( \Omega , {\mathcal A} , {\mathsf P} )$ in which $\Omega = \mathbf R ^ {T}$; that is, elementary events (points $\omega$ of the given probability space) are identified at the outset with the realizations $x ( t)$ of $X ( t)$. On the other hand, it is also possible to show that any other way of specifying $X ( t)$ can be reduced to this form by a special determination of a probability measure on $\mathbf R ^ {T}$.
In particular, Kolmogorov's fundamental theorem on consistent distributions (see Probability space) shows that the specification of the aggregate of all possible finite-dimensional distribution functions $F _ {t _ {1} \dots t _ {n} } ( x _ {1} \dots x _ {n} )$ satisfying the above consistency conditions (1) and (2) defines a probability measure on the $\sigma$-algebra of subsets of the function space $\mathbf R ^ {T} = \{ {x ( t) } : {t \in T } \}$ generated by the aggregate of cylindrical sets (cf. Cylinder set) of the form $\{ {x ( t) } : {[ x ( t _ {1} ) \dots x ( t _ {n} ) ] \in B ^ {n} } \}$, where $n$ is an arbitrary positive integer and $B ^ {n}$ is an arbitrary Borel set of the $n$-dimensional space $\mathbf R ^ {n}$ of vectors $[ x ( t _ {1} ) \dots x ( t _ {n} ) ]$.
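
A standard concrete instance (added here for illustration) is the Wiener process on $T = [ 0 , \infty )$: for $0 < t _ {1} < \dots < t _ {n}$ its finite-dimensional distribution functions are

$$F _ {t _ {1} \dots t _ {n} } ( x _ {1} \dots x _ {n} ) = \int _ {- \infty } ^ { x _ {1} } \dots \int _ {- \infty } ^ { x _ {n} } \prod _ {k= 1} ^ {n} \frac{1}{\sqrt {2 \pi ( t _ {k} - t _ {k- 1} ) } } \mathop{\rm exp} \left ( - \frac{( u _ {k} - u _ {k- 1} ) ^ {2} }{2 ( t _ {k} - t _ {k- 1} ) } \right ) \, d u _ {n} \dots d u _ {1} ,$$

where $t _ {0} = 0$ and $u _ {0} = 0$. This family satisfies the consistency conditions (1) and (2), and Kolmogorov's theorem then furnishes the Wiener measure on the cylindrical $\sigma$-algebra of $\mathbf R ^ {T}$.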

For references see Stochastic process.