{{TEX|done}}
 
''random process, probability process, random function of time''
 
A process (that is, a variation with time of the state of a certain system) whose course depends on chance and for which probabilities for some courses are given. A typical example of this is [[Brownian motion|Brownian motion]]. Other examples of practical importance are: the fluctuation of current in an electrical circuit in the presence of so-called thermal noise, the random changes in the level of received radio-signals in the presence of random weakening of radio-signals (fading) created by meteorological or other disturbances, and the turbulent flow of a liquid or gas. To these can be added many industrial processes accompanied by random fluctuations, and also certain processes encountered in geophysics (e.g., variations of the Earth's magnetic field, unordered sea-waves and microseisms, that is, high-frequency irregular oscillations of the level of the surface of the Earth), biophysics (for example, variations of the bio-electric potential of the brain registered on an electro-encephalograph), and economics.
 
The mathematical theory of stochastic processes regards the instantaneous state of the system in question as a point of a certain phase space $R$ (the space of states), so that the stochastic process is a function $X(t)$ of the time $t$ with values in $R$. It is usually assumed that $R$ is a vector space, the most studied case (and the most important one for applications) being the narrower one where the points of $R$ are given by one or more numerical parameters (a generalized coordinate system). In the narrow case a stochastic process can be regarded either simply as a numerical function $X(t)$ of time taking various values depending on chance (i.e. admitting various realizations $x(t)$, a one-dimensional stochastic process), or similarly as a vector function $\mathbf X(t) = \{ X_1(t), \dots, X_k(t) \}$ (a multi-dimensional or vector stochastic process). The study of multi-dimensional stochastic processes can be reduced to that of one-dimensional stochastic processes by passing from $\mathbf X(t)$ to an auxiliary process
  
$$
X _ {\mathbf a} ( t)  = ( \mathbf X ( t) , \mathbf a )  = \sum _ {j=1} ^ { k } a _ {j} X _ {j} ( t) ,
$$
  
where $\mathbf a = (a_1, \dots, a_k)$ is an arbitrary $k$-dimensional vector. Therefore the study of one-dimensional processes occupies a central place in the theory of stochastic processes. The parameter $t$ usually takes arbitrary real values or values in an interval on the real axis $\mathbf R^1$ (when one wishes to stress this, one speaks of a stochastic process in continuous time), but it may take only integral values, in which case $X(t)$ is called a stochastic process in discrete time (or a random sequence or a time series).
  
The representation of a probability distribution in the infinite-dimensional space of all variants of the course of $X(t)$ (that is, in the space of realizations $x(t)$) does not fall within the scope of the classical methods of probability theory and requires the construction of a special mathematical apparatus. The only exceptions are special classes of stochastic processes whose probabilistic nature is completely determined by the dependence of $X(t) = X(t; \mathbf Y)$ on a certain finite-dimensional random vector $\mathbf Y = (Y_1, \dots, Y_k)$, since in this case the probability of the course followed by $X(t)$ depends only on the finite-dimensional probability distribution of $\mathbf Y$. An example of a stochastic process of this type which is of practical importance is a random harmonic oscillation of the form
  
$$
X ( t)  = A  \cos ( \omega t + \Phi ) ,
$$
  
where $\omega$ is a fixed number and $A$ and $\Phi$ are independent random variables. This process is often used in the investigation of amplitude-phase modulation in radio-technology.
  
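For illustration, a minimal numerical sketch of such an oscillation (Python with NumPy); the exponential distribution for $A$, the uniform distribution for $\Phi$ and the value of $\omega$ are assumptions made only for this sketch.

<pre>
import numpy as np

rng = np.random.default_rng(0)
omega = 2.0 * np.pi                      # fixed angular frequency (assumed value)
t = np.linspace(0.0, 3.0, 301)

# Each realization is determined by one draw of (A, Phi):
# the whole random function depends on a finite-dimensional random vector.
for _ in range(3):
    A = rng.exponential(1.0)             # assumed amplitude distribution
    Phi = rng.uniform(0.0, 2.0 * np.pi)  # assumed phase distribution
    x = A * np.cos(omega * t + Phi)      # one realization x(t)
    print(x[:5])
</pre>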
A wide class of probability distributions for stochastic processes is characterized by an infinite family of compatible finite-dimensional probability distributions of the random vectors $\{ X(t_1), \dots, X(t_n) \}$ corresponding to all finite subsets $(t_1, \dots, t_n)$ of values of $t$ (see [[Random function|Random function]]). However, knowledge of all these distributions is not sufficient to determine the probabilities of events depending on the values of $X(t)$ for an uncountable set of values of $t$, that is, it does not determine the stochastic process $X(t)$ uniquely.
  
Example. Let $X(t) = \cos(\omega t + \Phi)$, $0 \leq t \leq 1$, be a harmonic oscillation with random phase $\Phi$. Let a random variable $Z$ be uniformly distributed on the interval $[0, 1]$, and let $X_1(t)$, $0 \leq t \leq 1$, be the stochastic process given by the equations $X_1(t) = X(t)$ when $t \neq Z$, $X_1(t) = X(t) + 3$ when $t = Z$. Since ${\mathsf P} \{ Z = t_1 \textrm{ or } \dots \textrm{ or } Z = t_n \} = 0$ for any fixed finite set of points $(t_1, \dots, t_n)$, it follows that all the finite-dimensional distributions of $X(t)$ and $X_1(t)$ are identical. At the same time, $X(t)$ and $X_1(t)$ are different: in particular, all realizations of $X(t)$ are continuous (having sinusoidal form), while all realizations of $X_1(t)$ have a point of discontinuity, and all realizations of $X(t)$ do not exceed 1, but no realization of $X_1(t)$ has this property. Hence it follows that a given system of finite-dimensional probability distributions can correspond to distinct modifications of a stochastic process, and one cannot compute, purely from knowledge of this system, either the probability that a realization of the stochastic process will be continuous, or the probability that it will be bounded by some fixed constant.
  
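A minimal numerical sketch of this example (with $\omega = 2\pi$ and $\Phi$ uniform on $[0, 2\pi]$, both assumed only for the sketch): at any fixed finite set of time points the two processes agree with probability 1, yet their realizations behave differently as functions on all of $[0, 1]$.

<pre>
import numpy as np

rng = np.random.default_rng(1)
omega = 2.0 * np.pi
Phi = rng.uniform(0.0, 2.0 * np.pi)   # random phase (assumed uniform)
Z = rng.uniform(0.0, 1.0)             # uniformly distributed on [0, 1]

def X(t):
    return np.cos(omega * np.asarray(t, dtype=float) + Phi)

def X1(t):
    t = np.asarray(t, dtype=float)
    return np.where(t == Z, X(t) + 3.0, X(t))   # differs from X only at t = Z

ts = rng.uniform(0.0, 1.0, size=5)    # a fixed finite set of points
print(np.allclose(X(ts), X1(ts)))     # True with probability 1, since P{Z = t_i} = 0
print(X(Z) <= 1.0, X1(Z) > 1.0)       # realizations of X stay <= 1; X1 exceeds 1 at t = Z
</pre>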
However, from knowledge of all finite-dimensional probability distributions one can often clarify whether or not there exists a stochastic process $X(t)$ that has these finite-dimensional distributions, and is such that its realizations are continuous (or differentiable, or nowhere exceed a given constant $B$) with probability 1. A typical example of a general condition guaranteeing the existence of a stochastic process $X(t)$ with continuous realizations with probability 1 and given finite-dimensional distributions is Kolmogorov's condition: If the finite-dimensional probability distributions of a stochastic process $X(t)$, defined on the interval $[a, b]$, are such that for some $\alpha > 0$, $\delta > 0$, $C < \infty$, and all sufficiently small $h$, the following inequality holds:
  
$$ \tag{1}
{\mathsf E} | X ( t + h ) - X ( t) |  ^  \alpha  < C  | h | ^ {1 + \delta }
$$
  
(which evidently imposes restrictions only on the two-dimensional distributions of $X(t)$), then $X(t)$ has a modification with continuous realizations with probability 1 (see {{Cite|Sl}}, {{Cite|We}}, for example). In the special case of a Gaussian process $X(t)$, condition (1) can be replaced by the weaker condition
  
$$ \tag{2}
{\mathsf E} | X ( t + h ) - X ( t ) | ^ {\alpha _ {1} }  < C _ {1} | h | ^ {\delta _ {1} }
$$
  
for some $\alpha_1 > 0$, $\delta_1 > 0$, $C_1 > 0$. This holds with $\alpha_1 = 2$ and $\delta_1 = 1$ for the [[Wiener process|Wiener process]] and the [[Ornstein–Uhlenbeck process|Ornstein–Uhlenbeck process]], for example. In cases where, for given finite-dimensional probability distributions, there is a modification of $X(t)$ whose realizations are continuous (or differentiable, or bounded by a constant $B$) with probability 1, all other modifications of the same process can usually be excluded from consideration by requiring that $X(t)$ satisfies a certain very general regularity condition, which holds in almost all applications (see [[Separable process|Separable process]]).
  
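For the Wiener process these conditions can be checked directly, since $X(t+h) - X(t)$ is normal with mean $0$ and variance $|h|$: condition (2) holds with $\alpha_1 = 2$, $\delta_1 = 1$, and condition (1) holds, for instance, with $\alpha = 4$, $\delta = 1$, because ${\mathsf E} | X(t+h) - X(t) |^4 = 3 h^2$. A Monte Carlo sketch of these two moments (the simulation itself is only an illustration):

<pre>
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
for h in (0.1, 0.01):
    incr = rng.normal(0.0, np.sqrt(h), size=n)   # W(t + h) - W(t) ~ N(0, h)
    m2 = np.mean(np.abs(incr) ** 2)              # close to h, as in condition (2)
    m4 = np.mean(np.abs(incr) ** 4)              # close to 3 h^2, as in condition (1)
    print(h, m2, m4)
</pre>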
Instead of specifying the infinite system of finite-dimensional probability distributions of a stochastic process $X(t)$, the process can be defined using the values of the corresponding [[Characteristic functional|characteristic functional]]
  
$$ \tag{3}
\psi [ l ]  = {\mathsf E}  \mathop{\rm exp} \{ i l [ X ] \} ,
$$
  
where $l$ ranges over a sufficiently wide class of linear functionals depending on $X$. If $X$ is continuous in probability for $a \leq t \leq b$ (that is, ${\mathsf P} \{ | X(t+h) - X(t) | > \epsilon \} \rightarrow 0$ as $h \rightarrow 0$ for any $\epsilon > 0$) and $g$ is a function of bounded variation on $[a, b]$, then
  
$$
\int\limits _ { a } ^ { b }  X ( t)  d g ( t)  = l  ^ {(g)} [ X ]
$$
  
is a random variable. One may take $l[X] = l^{(g)}[X]$ in (3), where $\psi[l^{(g)}]$ is denoted by the symbol $\psi[g]$ for convenience. In many cases it is sufficient to consider only linear functionals $l[X]$ of the form
  
$$
\int\limits _ { a } ^ { b }  X ( t) \phi ( t)  d t  = l _  \phi  [ X] ,
$$
  
where $\phi$ is an infinitely-differentiable function of compact support in $t$ (and the interval $[a, b]$ may be taken finite). Under fairly general regularity conditions, the values $\psi[l_\phi] = \psi[\phi]$ uniquely determine all finite-dimensional probability distributions of $X(t)$, since
  
$$
\psi [ \phi ]  \rightarrow  \psi _ {t _ {1}  \dots t _ {n} } ( \theta _ {1} , \dots, \theta _ {n} ) ,
$$
  
where $\psi_{t_1 \dots t_n}(\theta_1, \dots, \theta_n)$ is the characteristic function of the random vector $\{ X(t_1), \dots, X(t_n) \}$, as
  
$$
\phi ( t)  \rightarrow  \theta _ {1} \delta ( t - t _ {1} ) + \dots + \theta _ {n} \delta ( t - t _ {n} )
$$
  
(here $\delta(t)$ is the Dirac $\delta$-function, and convergence is understood in the sense of convergence of generalized functions). If $\psi[\phi]$ does not tend to a finite limit, then $X$ has no finite values at any fixed point and only smoothed values $l_\phi[X]$ have a meaning, that is, the characteristic functional $\psi[\phi]$ does not give an ordinary ("classical") stochastic process $X(t)$, but a generalized stochastic process (cf. [[Stochastic process, generalized|Stochastic process, generalized]]) $X = X(\phi)$.
  
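As a numerical sketch of the smoothed values $l_\phi[X]$ and of the characteristic functional, one can take the Wiener process on $[0, 1]$ and the test function $\phi(t) = \sin(\pi t)$ (both choices are assumptions made only for this sketch); since $l_\phi[X]$ is then Gaussian with mean $0$ and variance $\int_0^1 \int_0^1 \min(t, s) \phi(t) \phi(s) \, dt \, ds$, the Monte Carlo estimate of $\psi[\phi]$ can be compared with the exponential of minus half this variance.

<pre>
import numpy as np

rng = np.random.default_rng(3)
n, paths = 400, 5_000
t = np.linspace(0.0, 1.0, n + 1)
dt = t[1] - t[0]
phi = np.sin(np.pi * t)                      # assumed smooth test function phi(t)

# Simulate Wiener paths W and the smoothed values l_phi[W] = integral of W(t) phi(t) dt.
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
W = np.concatenate([np.zeros((paths, 1)), np.cumsum(dW, axis=1)], axis=1)
l_phi = np.trapz(W * phi, t, axis=1)

psi_mc = np.mean(np.exp(1j * l_phi))         # Monte Carlo estimate of psi[phi]

# l_phi[W] is Gaussian with variance int int min(t, s) phi(t) phi(s) dt ds,
# so psi[phi] = exp(-variance / 2).
K = np.minimum.outer(t, t)
var = (phi @ K @ phi) * dt * dt
print(psi_mc, np.exp(-0.5 * var))
</pre>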
The problem of describing all finite-dimensional probability distributions of $X(t)$ is simplified in those cases when they are all uniquely determined by the distributions of only a few lower orders. The most important class of stochastic processes for which all multi-dimensional distributions are determined by the values of the one-dimensional distributions of $X(t)$ are sequences of independent random variables (which are special stochastic processes in discrete time). Such processes can be studied within the framework of classical probability theory, and it is essential that some important classes of stochastic processes can be effectively specified as functions of a sequence $Y(t)$, $t = 0, \pm 1, \pm 2, \dots,$ of independent random variables. For example, the following stochastic processes are of significant interest:
  
$$
X ( t)  = \sum _{j=0} ^  \infty  b _ {j} Y ( t - j )
$$
  
 
or
 
  
$$
X ( t)  = \sum _ {j = - \infty } ^  \infty  b _ {j} Y ( t - j ) ,\ \
t = 0 , \pm  1 , \dots
$$
  
(see [[Moving-average process]]), and
  
$$
X ( t)  = \sum _{j=1} ^  \infty  Y ( j) h _ {j} ( t) ,\ \
a \leq  t \leq  b ,
$$
  
where $h_j$, $j = 1, 2, \dots,$ is a prescribed system of functions on the interval $[a, b]$ (see [[Spectral decomposition of a random function]]).
  
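An illustrative simulation of the first of these processes, a one-sided moving average of independent standard normal variables $Y(t)$ (the coefficients $b_j = 0.5^j$ and the truncation at $j = 20$ are assumptions made only for this sketch):

<pre>
import numpy as np

rng = np.random.default_rng(4)
T, J = 200, 20
b = 0.5 ** np.arange(J + 1)              # assumed coefficients b_j, truncated at j = J
Y = rng.normal(size=T + J)               # independent random variables Y(t)

# X(t) = sum over j of b_j Y(t - j): a (truncated) one-sided moving average.
X = np.array([b @ Y[t + J - np.arange(J + 1)] for t in range(T)])
print(X[:5])
</pre>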
Three important classes of stochastic processes are described below, for which all finite-dimensional distributions are determined by the one-dimensional distributions of $X(t)$ and the two-dimensional distributions of $\{ X(t_1), X(t_2) \}$.
  
1) The class of stochastic processes with independent increments (cf. [[Stochastic process with independent increments|Stochastic process with independent increments]]) $X(t)$, for which $X(t_2) - X(t_1)$ and $X(t_4) - X(t_3)$ are independent variables ($t_1 < t_2 \leq t_3 < t_4$). To represent $X(t)$ on the interval $[a, b]$ it is convenient to use the distribution functions $F_a(x)$ and $\Phi_{t_1, t_2}(z)$, where $a \leq t_1 \leq t_2 \leq b$, of the random variables $X(a)$ and $X(t_2) - X(t_1)$, in which case $\Phi_{t_1, t_2}(z)$ must evidently satisfy the functional equation
  
$$ \tag{4}
\int\limits _ {- \infty } ^  \infty
\Phi _ {t _ {1}  , t _ {2} } ( z - u )  d \Phi _ {t _ {2}  , t _ {3} } ( u)  = \Phi _ {t _ {1}  , t _ {3} } ( z ) ,
$$
  
$$
a  \leq  t _ {1}  < t _ {2}  < t _ {3}  \leq  b .
$$
  
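For a process with independent Gaussian increments (the Wiener process, used here only as an illustration) $\Phi_{t_1, t_2}$ is the normal distribution function with mean $0$ and variance $t_2 - t_1$, and (4) expresses the fact that the increment over $[t_1, t_3]$ is the sum of the independent increments over $[t_1, t_2]$ and $[t_2, t_3]$; a Monte Carlo sketch:

<pre>
import numpy as np

rng = np.random.default_rng(5)
t1, t2, t3 = 0.2, 0.5, 1.1
n = 100_000

# Independent increments of a Wiener process over [t1, t2] and [t2, t3].
inc12 = rng.normal(0.0, np.sqrt(t2 - t1), size=n)
inc23 = rng.normal(0.0, np.sqrt(t3 - t2), size=n)

# Their sum is distributed according to Phi_{t1,t3}: the convolution in (4).
z = 0.7
lhs = np.mean(inc12 + inc23 < z)                              # empirical convolution
rhs = np.mean(rng.normal(0.0, np.sqrt(t3 - t1), size=n) < z)  # empirical Phi_{t1,t3}(z)
print(lhs, rhs)
</pre>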
Using (4) it is possible to show that if $X(t)$ is continuous in probability, then its characteristic functional $\psi[g]$ can be written in the form
  
$$
\psi [ g ]  =   \mathop{\rm exp} \left \{
i \int\limits _ { a } ^ { b }  \gamma ( t)  d g ( t)
-
\frac{1}{2}
\int\limits _ { a } ^ { b }  \beta ( t) [ g ( b) - g ( t) ]  d g ( t) \right . +
$$

$$
+
\int\limits _ {- \infty } ^  \infty  \int\limits _ { a } ^ { b }  \left [ e ^ {i y [ g ( b) - g ( t) ] } - 1 -
\frac{i y [ g ( b) - g ( t) ] }{1 + y  ^ {2} }
\right ] \times
$$

$$
\times \left .
\frac{1 + y  ^ {2} }{y ^ {2} }
 d _ {t} \Pi _ {t} ( d y ) \right \} ,
$$

where $\gamma(t)$ is a continuous function, $\beta(t)$ is a non-decreasing continuous function such that $\beta(a) = 0$, and $\Pi_t(dy)$ is an increasing continuous measure on $\mathbf R$ in $t$.
  
2) The class of Markov processes $X(t)$ for which, when $t_1 < t_2$, the conditional probability distribution of $X(t_2)$ given all values of $X(t)$ for $t \leq t_1$ depends only on $X(t_1)$. To represent a [[Markov process|Markov process]] $X(t)$, $a \leq t \leq b$, it is convenient to use the distribution function $F_a(x)$ of the value $X(a)$ and the transition function $\Phi_{t_1, t_2}(x, z)$, which is defined for $t_1 < t_2$ as the conditional probability that $X(t_2) < z$ given that $X(t_1) = x$. The function $\Phi_{t_1, t_2}(x, z)$ must satisfy the [[Kolmogorov–Chapman equation|Kolmogorov–Chapman equation]], similar to (4), and this enables one, under certain conditions, to obtain the simpler forward and backward [[Kolmogorov equation|Kolmogorov equation]] (e.g. the Fokker–Planck equation) for this function.
  
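In the simplest discrete setting, a Markov chain with finitely many states (taken here only to illustrate the Kolmogorov–Chapman equation, with arbitrarily chosen transition matrices), the transition probabilities compose by matrix multiplication:

<pre>
import numpy as np

# Assumed transition matrices of a 3-state chain: P12 from time t1 to t2, P23 from t2 to t3.
P12 = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.5, 0.3],
                [0.0, 0.4, 0.6]])
P23 = np.array([[0.7, 0.2, 0.1],
                [0.1, 0.8, 0.1],
                [0.3, 0.3, 0.4]])

# Kolmogorov-Chapman equation in matrix form: the transition matrix from t1 to t3 is P12 @ P23.
P13 = P12 @ P23
print(P13)
print(P13.sum(axis=1))   # each row is again a probability distribution
</pre>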
3) The class of Gaussian processes $X(t)$ for which all multi-dimensional probability distributions of the vectors $\{ X(t_1), \dots, X(t_n) \}$ are Gaussian (normal) distributions. Since a normal distribution is uniquely determined by its first and second moments, a [[Gaussian process|Gaussian process]] $X(t)$ is determined by the values of the functions

$$
{\mathsf E} X ( t)  = m ( t)
$$
  
 
and
 
  
$$
{\mathsf E} X ( t) X ( s)  = B ( t , s ) ,
$$
  
where $B(t, s)$ must be a non-negative definite kernel such that
  
$$
b ( t , s )  = B ( t , s ) - m ( t ) m ( s )
$$
  
is a non-negative definite kernel. The characteristic functional $\psi[g]$ of a Gaussian process $X(t)$, where $a \leq t \leq b$, is
  
$$
\psi [ g ]  =   \mathop{\rm exp} \left \{ i \int\limits _ { a } ^ { b }
m ( t)  d g ( t )
-
\frac{1}{2}
\int\limits _ { a } ^ { b }  \int\limits _ { a } ^ { b }  b ( t , s ) \
d g ( t )  d g ( s) \right \} .
$$
  
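Since a Gaussian process is determined by $m(t)$ and $B(t, s)$, its values on a finite grid can be sampled from the corresponding multivariate normal distribution; in the sketch below $m(t) = 0$ and $b(t, s) = \min(t, s)$ (the covariance of the Wiener process) are assumptions chosen only for the illustration.

<pre>
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(0.01, 1.0, 100)

m = np.zeros_like(t)                 # assumed mean function m(t) = 0
b = np.minimum.outer(t, t)           # assumed covariance kernel b(t, s) = min(t, s)

# The finite-dimensional distributions of a Gaussian process are multivariate normal
# with mean vector m(t_i) and covariance matrix b(t_i, t_j).
X = rng.multivariate_normal(m, b, size=3)   # three realizations on the grid
print(X.shape)
</pre>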
4) Another important class of stochastic processes is that of stationary stochastic processes $X(t)$, where the statistical characteristics do not change in the course of time, that is, they are invariant under the transformation $X(t) \mapsto X(t + a)$, for any fixed number $a$. The multi-dimensional probability distributions of a general [[Stationary stochastic process|stationary stochastic process]] $X(t)$ cannot be described in a simple manner, but for many problems concerning such processes it is sufficient to know only the values of the first two moments, ${\mathsf E} X(t) = m$ and ${\mathsf E} X(t) X(t + s) = B(s)$ (so that here the only necessary assumption is of stationarity in the wide sense, i.e. the moments ${\mathsf E} X(t)$ and ${\mathsf E} X(t) X(t + s)$ are independent of $t$). It is essential that any stationary stochastic process (at least in the wide sense) admits a spectral decomposition of the form
  
$$ \tag{5}
X ( t)  = \int\limits _ {- \infty } ^  \infty
e ^ {i t \lambda }  d Z ( \lambda ) ,
$$
  
where $Z(\lambda)$ is a stochastic process with non-correlated increments. In particular, it follows that
  
$$ \tag{6}
B ( s)  = \int\limits _ {- \infty } ^  \infty
e ^ {i s \lambda }  d F ( \lambda ) ,
$$
  
where $F(\lambda)$ is the monotone non-decreasing spectral function of $X(t)$ (cf. [[Spectral function of a stationary stochastic process|Spectral function of a stationary stochastic process]]). The spectral decompositions (5) and (6) lie at the heart of the solution of problems of best (in the sense of minimal mean-square error) linear extrapolation, interpolation and filtering of stationary stochastic processes.
  
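As a numerical illustration of (6), the covariance of a simple stationary sequence can be compared with the integral of $e^{i s \lambda}$ against its spectral density; the sequence $X(t) = Y(t) + 0.5\, Y(t-1)$ with independent standard normal $Y(t)$, whose spectral density is $| 1 + 0.5 e^{-i\lambda} |^2 / 2\pi$ on $[-\pi, \pi]$, is an assumption made only for this sketch.

<pre>
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
Y = rng.normal(size=n + 1)
X = Y[1:] + 0.5 * Y[:-1]                 # stationary moving-average sequence (assumed example)

# Empirical covariance B(s) = E X(t) X(t + s) for lags s = 0, 1, 2.
B_emp = [np.mean(X[: n - s] * X[s:]) for s in range(3)]

# Spectral density f(lam) = |1 + 0.5 exp(-i lam)|^2 / (2 pi) on [-pi, pi];
# formula (6) gives B(s) as the integral of exp(i s lam) f(lam) d lam.
lam = np.linspace(-np.pi, np.pi, 4001)
f = np.abs(1.0 + 0.5 * np.exp(-1j * lam)) ** 2 / (2.0 * np.pi)
B_spec = [np.trapz(np.exp(1j * s * lam) * f, lam).real for s in range(3)]

print(B_emp)     # approximately [1.25, 0.5, 0.0]
print(B_spec)    # [1.25, 0.5, 0.0]
</pre>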
 
The mathematical theory of stochastic processes also includes a large number of results related to a series of subclasses or, conversely, of extensions, of the above classes of stochastic processes (see [[Markov chain|Markov chain]]; [[Diffusion process|Diffusion process]]; [[Branching process|Branching process]]; [[Martingale|Martingale]]; [[Stochastic process with stationary increments|Stochastic process with stationary increments]]; etc.).
 
The mathematical theory of stochastic processes also includes a large number of results related to a series of subclasses or, conversely, of extensions, of the above classes of stochastic processes (see [[Markov chain|Markov chain]]; [[Diffusion process|Diffusion process]]; [[Branching process|Branching process]]; [[Martingale|Martingale]]; [[Stochastic process with stationary increments|Stochastic process with stationary increments]]; etc.).
  
 
====References====
 
====References====
<table><TR><TD valign="top">[1]</TD> <TD valign="top"> E.E. Slutskii, , ''Selected works'' , Moscow (1980) pp. 269–280 (In Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top"> J.L. Doob, "Stochastic processes" , Wiley (1953) {{MR|1570654}} {{MR|0058896}} {{ZBL|0053.26802}} </TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top"> I.I. Gikhman, A.V. Skorokhod, "Introduction to the theory of stochastic processes" , Saunders (1967) (Translated from Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top"> I.I. [I.I. Gikhman] Gihman, A.V. [A.V. Skorokhod] Skorohod, "Theory of stochastic processes" , '''1–3''' , Springer (1974–1979) (Translated from Russian) {{MR|0636254}} {{MR|0651015}} {{MR|0375463}} {{MR|0350794}} {{MR|0346882}} {{ZBL|0531.60002}} {{ZBL|0531.60001}} {{ZBL|0404.60061}} {{ZBL|0305.60027}} {{ZBL|0291.60019}} </TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top"> H. Cramér, M.R. Leadbetter, "Stationary and related stochastic processes" , Wiley (1967) {{MR|0217860}} {{ZBL|0162.21102}} </TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top"> A.D. [A.D. Ventsel'] Wentzell, "A course in the theory of stochastic processes" , McGraw-Hill (1981) (Translated from Russian) {{MR|0781738}} {{MR|0614594}} {{ZBL|0502.60001}} </TD></TR><TR><TD valign="top">[7]</TD> <TD valign="top"> Yu.A. Tozanov, "Stochastic processes" , '''1–2''' , Moscow (1960–1963) (In Russian)</TD></TR><TR><TD valign="top">[9]</TD> <TD valign="top"> A.V. [A.V. Skorokhod] Skorohod, "Random processes with independent increments" , Kluwer (1991) (Translated from Russian) {{MR|1155400}} {{ZBL|}} </TD></TR><TR><TD valign="top">[10]</TD> <TD valign="top"> E.B. Dynkin, "Markov processes" , '''1–2''' , Springer (1965) (Translated from Russian) {{MR|0193671}} {{ZBL|0132.37901}} </TD></TR><TR><TD valign="top">[11]</TD> <TD valign="top"> I.A. Ibragimov, Yu.A. Rozanov, "Gaussian stochastic processes" , Springer (1978) (Translated from Russian) {{MR|0272040}} {{ZBL|}} </TD></TR><TR><TD valign="top">[12]</TD> <TD valign="top"> Yu.A. Rozanov, "Stationary stochastic processes" , Holden-Day (1967) (Translated from Russian) {{MR|0159363}} {{MR|0114252}} {{ZBL|0721.60040}} </TD></TR></table>
+
{|
 
+
|valign="top"|{{Ref|Sl}}|| E.E. Slutskii, ''Selected works'' , Moscow (1980) pp. 269–280 (In Russian)
 
+
|-
 +
|valign="top"|{{Ref|Do}}|| J.L. Doob, "Stochastic processes" , Wiley (1953) {{MR|1570654}} {{MR|0058896}} {{ZBL|0053.26802}}
 +
|-
 +
|valign="top"|{{Ref|GS}}|| I.I. Gihman, A.V. Skorohod, "Introduction to the theory of stochastic processes" , Saunders (1967) (Translated from Russian)
 +
|-
 +
|valign="top"|{{Ref|GS2}}|| I.I. Gihman, A.V. Skorohod, "Theory of stochastic processes" , '''1–3''' , Springer (1974–1979) (Translated from Russian) {{MR|0636254}} {{MR|0651015}} {{MR|0375463}} {{MR|0350794}} {{MR|0346882}} {{ZBL|0531.60002}} {{ZBL|0531.60001}} {{ZBL|0404.60061}} {{ZBL|0305.60027}} {{ZBL|0291.60019}}
 +
|-
 +
|valign="top"|{{Ref|CL}}|| H. Cramér, M.R. Leadbetter, "Stationary and related stochastic processes" , Wiley (1967) {{MR|0217860}} {{ZBL|0162.21102}}
 +
|-
 +
|valign="top"|{{Ref|We}}|| A.D. Wentzell, "A course in the theory of stochastic processes" , McGraw-Hill (1981) (Translated from Russian) {{MR|0781738}} {{MR|0614594}} {{ZBL|0502.60001}}
 +
|-
 +
|valign="top"|{{Ref|Rz}}|| Yu.A. Rozanov, "Stochastic processes" , '''1–2''' , Moscow (1960–1963) (In Russian)
 +
|-
 +
|valign="top"|{{Ref|Sk}}|| A.V. Skorohod, "Random processes with independent increments" , Kluwer (1991) (Translated from Russian) {{MR|1155400}} {{ZBL|}}
 +
|-
 +
|valign="top"|{{Ref|Dy}}|| E.B. Dynkin, "Markov processes" , '''1–2''' , Springer (1965) (Translated from Russian) {{MR|0193671}} {{ZBL|0132.37901}}
 +
|-
 +
|valign="top"|{{Ref|IR}}|| I.A. Ibragimov, Yu.A. Rozanov, "Gaussian stochastic processes" , Springer (1978) (Translated from Russian) {{MR|0272040}} {{ZBL|}}
 +
|-
 +
|valign="top"|{{Ref|Rz2}}|| Yu.A. Rozanov, "Stationary stochastic processes" , Holden-Day (1967) (Translated from Russian) {{MR|0159363}} {{MR|0114252}} {{ZBL|0721.60040}}
 +
|}
  
 
====Comments====
 
====Comments====
The state space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090190/s090190187.png" /> of a stochastic process <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090190/s090190188.png" /> may be a (good) topological space without algebraic structure as in Markov process theory; in this case real processes of the form <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090190/s090190189.png" />, where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090190/s090190190.png" /> is a real function on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090190/s090190191.png" />, are considered; it can be also a differentiable manifold, as in modern diffusion process theory, etc. Concerning the regularity properties of the paths, often it is not possible to prove that the considered set of regular paths has probability 1 because this set is not measurable, but it is often possible to circumvent this difficulty by proving that the outer probability is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090190/s090190192.png" />.
+
The state space $  E $
 +
of a stochastic process $  X $
 +
may be a (good) topological space without algebraic structure as in Markov process theory; in this case real processes of the form $  f \circ X $,  
 +
where $  f $
 +
is a real function on $  E $,  
 +
are considered; it can be also a differentiable manifold, as in modern diffusion process theory, etc. Concerning the regularity properties of the paths, often it is not possible to prove that the considered set of regular paths has probability 1 because this set is not measurable, but it is often possible to circumvent this difficulty by proving that the outer probability is $  1 $.
  
 
====References====
 
====References====
<table><TR><TD valign="top">[a1]</TD> <TD valign="top"> N.T.J. Bailey, "The elements of stochastic processes" , Wiley (1964) {{MR|0165572}} {{ZBL|0127.11203}} </TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top"> K.L. Chung, "Lectures from Markov processes to Brownian motion" , Springer (1982) {{MR|0648601}} {{ZBL|0503.60073}} </TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top"> D.R. Cox, H.D. Miller, "The theory of stochastic processes" , Methuen (1965) {{MR|0192521}} {{ZBL|0149.12902}} </TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top"> R. Iranpour, P. Chacon, "Basic stochastic processes" , ''The Marc Kac lectures'' , Macmillan (1988) {{MR|0965763}} {{ZBL|0681.60035}} </TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top"> N.G. van Kampen, "Stochastic processes in physics and chemistry" , North-Holland (1981) {{MR|}} {{ZBL|0511.60038}} </TD></TR><TR><TD valign="top">[a6]</TD> <TD valign="top"> P. Lévy, "Processus stochastiques et mouvement Brownien" , Gauthier-Villars (1965) {{MR|0190953}} {{ZBL|0137.11602}} </TD></TR><TR><TD valign="top">[a7]</TD> <TD valign="top"> E. Parzen, "Stochastic processes" , Holden-Day (1967) {{MR|1699272}} {{MR|1532996}} {{MR|0139192}} {{MR|0095531}} {{MR|0084899}} {{ZBL|0932.60001}} {{ZBL|0107.12301}} {{ZBL|0079.34601}} </TD></TR><TR><TD valign="top">[a8]</TD> <TD valign="top"> M. Rosenblatt, "Random processes" , Springer (1974) {{MR|0346883}} {{ZBL|0287.60031}} </TD></TR><TR><TD valign="top">[a9]</TD> <TD valign="top"> N. Wax (ed.) , ''Selected papers on noise and stochastic processes'' , Dover, reprint (1954) {{MR|}} {{ZBL|0059.11903}} </TD></TR><TR><TD valign="top">[a10]</TD> <TD valign="top"> E. Wong, "Stochastic processes in information and dynamical systems" , McGraw-Hill (1971) {{MR|0415698}} {{ZBL|0245.60001}} </TD></TR><TR><TD valign="top">[a11]</TD> <TD valign="top"> K. Ethier, "Markov processes" , Wiley (1986) {{MR|0838085}} {{ZBL|0592.60049}} </TD></TR><TR><TD valign="top">[a12]</TD> <TD valign="top"> R. Durrett, "Brownian motion and martingales in analysis" , Wadsworth (1984) {{MR|0750829}} {{ZBL|0554.60075}} </TD></TR></table>
+
{|
 +
|valign="top"|{{Ref|B}}|| N.T.J. Bailey, "The elements of stochastic processes" , Wiley (1964) {{MR|0165572}} {{ZBL|0127.11203}}
 +
|-
 +
|valign="top"|{{Ref|C}}|| K.L. Chung, "Lectures from Markov processes to Brownian motion" , Springer (1982) {{MR|0648601}} {{ZBL|0503.60073}}
 +
|-
 +
|valign="top"|{{Ref|CM}}|| D.R. Cox, H.D. Miller, "The theory of stochastic processes" , Methuen (1965) {{MR|0192521}} {{ZBL|0149.12902}}
 +
|-
 +
|valign="top"|{{Ref|IC}}|| R. Iranpour, P. Chacon, "Basic stochastic processes" , ''The Marc Kac lectures'' , Macmillan (1988) {{MR|0965763}} {{ZBL|0681.60035}}
 +
|-
 +
|valign="top"|{{Ref|K}}|| N.G. van Kampen, "Stochastic processes in physics and chemistry" , North-Holland (1981) {{MR|}} {{ZBL|0511.60038}}
 +
|-
 +
|valign="top"|{{Ref|L}}|| P. Lévy, "Processus stochastiques et mouvement Brownien" , Gauthier-Villars (1965) {{MR|0190953}} {{ZBL|0137.11602}}
 +
|-
 +
|valign="top"|{{Ref|P}}|| E. Parzen, "Stochastic processes" , Holden-Day (1967) {{MR|1699272}} {{MR|1532996}} {{MR|0139192}} {{MR|0095531}} {{MR|0084899}} {{ZBL|0932.60001}} {{ZBL|0107.12301}} {{ZBL|0079.34601}}
 +
|-
 +
|valign="top"|{{Ref|Rs}}|| M. Rosenblatt, "Random processes" , Springer (1974) {{MR|0346883}} {{ZBL|0287.60031}}
 +
|-
 +
|valign="top"|{{Ref|Wa}}|| N. Wax (ed.), ''Selected papers on noise and stochastic processes'' , Dover, reprint (1954) {{MR|}} {{ZBL|0059.11903}}
 +
|-
 +
|valign="top"|{{Ref|Wo}}|| E. Wong, "Stochastic processes in information and dynamical systems" , McGraw-Hill (1971) {{MR|0415698}} {{ZBL|0245.60001}}
 +
|-
 +
|valign="top"|{{Ref|E}}|| K. Ethier, "Markov processes" , Wiley (1986) {{MR|0838085}} {{ZBL|0592.60049}}
 +
|-
 +
|valign="top"|{{Ref|Du}}|| R. Durrett, "Brownian motion and martingales in analysis" , Wadsworth (1984) {{MR|0750829}} {{ZBL|0554.60075}}
 +
|}

Latest revision as of 20:25, 16 January 2024


random process, probability process, random function of time

2020 Mathematics Subject Classification: Primary: 60Gxx [MSN][ZBL]

A process (that is, a variation with time of the state of a certain system) whose course depends on chance and for which probabilities for some courses are given. A typical example of this is Brownian motion. Other examples of practical importance are: the fluctuation of current in an electrical circuit in the presence of so-called thermal noise, the random changes in the level of received radio-signals in the presence of random weakening of radio-signals (fading) created by meteorological or other disturbances, and the turbulent flow of a liquid or gas. To these can be added many industrial processes accompanied by random fluctuations, and also certain processes encountered in geophysics (e.g., variations of the Earth's magnetic field, unordered sea-waves and microseisms, that is, high-frequency irregular oscillations of the level of the surface of the Earth), biophysics (for example, variations of the bio-electric potential of the brain registered on an electro-encephalograph), and economics.

The mathematical theory of stochastic processes regards the instantaneous state of the system in question as a point of a certain phase space $ R $( the space of states), so that the stochastic process is a function $ X ( t) $ of the time $ t $ with values in $ R $. It is usually assumed that $ R $ is a vector space, the most studied case (and the most important one for applications) being the narrower one where the points of $ R $ are given by one or more numerical parameters (a generalized coordinate system). In the narrow case a stochastic process can be regarded either simply as a numerical function $ X ( t) $ of time taking various values depending on chance (i.e. admitting various realizations $ x ( t) $, a one-dimensional stochastic process), or similarly as a vector function $ \mathbf X ( t) = \{ X _ {1} ( t) \dots X _ {k} ( t) \} $( a multi-dimensional or vector stochastic process). The study of multi-dimensional stochastic processes can be reduced to that of one-dimensional stochastic processes by passing from $ \mathbf X ( t) $ to an auxiliary process

$$ X _ {\mathbf a} ( t) = ( \mathbf X ( t) , \mathbf a ) = \sum _ {j=1} ^ { k } a _ {j} X _ {j} ( t) , $$

where $ \mathbf a = ( a _ {1} \dots a _ {k} ) $ is an arbitrary $ k $- dimensional vector. Therefore the study of one-dimensional processes occupies a central place in the theory of stochastic processes. The parameter $ t $ usually takes arbitrary real values or values in an interval on the real axis $ \mathbf R ^ {1} $( when one wishes to stress this, one speaks of a stochastic process in continuous time), but it may take only integral values, in which case $ X ( t) $ is called a stochastic process in discrete time (or a random sequence or a time series).
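
For instance, for each fixed $ t $ the probability distribution of the vector $ \mathbf X ( t) $ is already recovered from the one-dimensional variables $ X _ {\mathbf a} ( t) $: its characteristic function is

$$ {\mathsf E} \mathop{\rm exp} \{ i ( \mathbf X ( t) , \mathbf a ) \} = {\mathsf E} \mathop{\rm exp} \{ i X _ {\mathbf a} ( t) \} ,\ \ \mathbf a \in \mathbf R ^ {k} , $$

so that knowledge of the distributions of all the $ X _ {\mathbf a} ( t) $ determines the distribution of $ \mathbf X ( t) $ (the Cramér–Wold argument).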

The representation of a probability distribution in the infinite-dimensional space of all variants of the course of $ X ( t) $( that is, in the space of realizations $ x ( t) $) does not fall within the scope of the classical methods of probability theory and requires the construction of a special mathematical apparatus. The only exceptions are special classes of stochastic processes whose probabilistic nature is completely determined by the dependence of $ X ( t) = X ( t ; \mathbf Y ) $ on a certain finite-dimensional random vector $ \mathbf Y = ( Y _ {1} \dots Y _ {k} ) $, since in this case the probability of the course followed by $ X ( t) $ depends only on the finite-dimensional probability distribution of $ \mathbf Y $. An example of a stochastic process of this type which is of practical importance is a random harmonic oscillation of the form

$$ X ( t) = A \cos ( \omega t + \Phi ) , $$

where $ \omega $ is a fixed number and $ A $ and $ \Phi $ are independent random variables. This process is often used in the investigation of amplitude-phase modulation in radio-technology.
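
If, for example, the phase $ \Phi $ is uniformly distributed on $ [ 0 , 2 \pi ] $ and $ {\mathsf E} A ^ {2} < \infty $, then a direct computation gives

$$ {\mathsf E} X ( t) = 0 ,\ \ {\mathsf E} X ( t) X ( t + s ) = \frac{1}{2} {\mathsf E} A ^ {2} \cos \omega s , $$

so that such an oscillation is stationary in the wide sense (see the discussion of stationary stochastic processes below).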

A wide class of probability distributions for stochastic processes is characterized by an infinite family of compatible finite-dimensional probability distributions of the random vectors $ \{ X ( t _ {1} ) \dots X ( t _ {n} ) \} $ corresponding to all finite subsets $ ( t _ {1} \dots t _ {n} ) $ of values of $ t $( see Random function). However, knowledge of all these distributions is not sufficient to determine the probabilities of events depending on the values of $ X ( t) $ for an uncountable set of values of $ t $, that is, it does not determine the stochastic process $ X ( t) $ uniquely.
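
Here compatibility means, in particular, that distributions of lower order are obtained from those of higher order as marginals: for the distribution functions $ F _ {t _ {1} \dots t _ {n} } ( x _ {1} \dots x _ {n} ) = {\mathsf P} \{ X ( t _ {1} ) < x _ {1} \dots X ( t _ {n} ) < x _ {n} \} $ one has

$$ F _ {t _ {1} \dots t _ {n - 1} } ( x _ {1} \dots x _ {n - 1} ) = \lim\limits _ {x _ {n} \rightarrow \infty } F _ {t _ {1} \dots t _ {n} } ( x _ {1} \dots x _ {n} ) , $$

and a simultaneous permutation of the $ t _ {j} $ and the $ x _ {j} $ leaves $ F _ {t _ {1} \dots t _ {n} } $ unchanged (cf. Random function).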

Example. Let $ X ( t) = \cos ( \omega t + \Phi ) $, $ 0 \leq t \leq 1 $, be a harmonic oscillation with random phase $ \Phi $. Let a random variable $ Z $ be uniformly distributed on the interval $ [ 0 , 1 ] $, and let $ X _ {1} ( t) $, $ 0 \leq t \leq 1 $, be the stochastic process given by the equations $ X _ {1} ( t) = X ( t) $ when $ t \neq Z $, $ X _ {1} ( t) = X ( t) + 3 $ when $ t = Z $. Since $ {\mathsf P} \{ Z = t _ {1} \textrm{ or } \dots \textrm{ or } Z = t _ {n} \} = 0 $ for any fixed finite set of points $ ( t _ {1} \dots t _ {n} ) $, it follows that all the finite-dimensional distributions of $ X ( t) $ and $ X _ {1} ( t) $ are identical. At the same time, $ X ( t) $ and $ X _ {1} ( t) $ are different: in particular, all realizations of $ X ( t) $ are continuous (having sinusoidal form), while all realizations of $ X _ {1} ( t) $ have a point of discontinuity, and all realizations of $ X ( t) $ do not exceed 1, but no realization of $ X _ {1} ( t) $ has this property. Hence it follows that a given system of finite-dimensional probability distributions can correspond to distinct modifications of a stochastic process, and one cannot compute, purely from knowledge of this system, either the probability that a realization of the stochastic process will be continuous, or the probability that it will be bounded by some fixed constant.

However, from knowledge of all finite-dimensional probability distributions one can often clarify whether or not there exists a stochastic process $ X ( t) $ that has these finite-dimensional distributions, and is such that its realizations are continuous (or differentiable or nowhere exceed a given constant $ B $) with probability 1. A typical example of a general condition guaranteeing the existence of a stochastic process $ X ( t) $ with continuous realizations with probability 1 and given finite-dimensional distributions is Kolmogorov's condition: If the finite-dimensional probability distributions of a stochastic process $ X ( t) $, defined on the interval $ [ a , b ] $, are such that for some $ \alpha > 0 $, $ \delta > 0 $, $ C < \infty $, and all sufficiently small $ h $, the following inequality holds:

$$ \tag{1 } {\mathsf E} | X ( t + h ) - X ( t) | ^ \alpha < C | h | ^ {1 + \delta } $$

(which evidently imposes restrictions only on the two-dimensional distributions of $ X ( t) $), then $ X ( t) $ has a modification with continuous realizations with probability 1 (see [Sl]–[We], for example). In the special case of a Gaussian process $ X ( t) $, condition (1) can be replaced by the weaker condition

$$ \tag{2 } {\mathsf E} | X ( t + h ) - X ( t ) | ^ {\alpha _ {1} } < C _ {1} | h | ^ {\delta _ {1} } $$

for some $ \alpha _ {1} > 0 $, $ \delta _ {1} > 0 $, $ C _ {1} > 0 $. This holds with $ \alpha _ {1} = 2 $ and $ \delta _ {1} = 1 $ for the Wiener process and the Ornstein–Uhlenbeck process, for example. In cases where, for given finite-dimensional probability distributions, there is a modification of $ X ( t) $ whose realizations are continuous (or differentiable or bounded by a constant $ B $) with probability 1, all other modifications of the same process can usually be excluded from consideration by requiring that $ X ( t) $ satisfies a certain very general regularity condition, which holds in almost-all applications (see Separable process).
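
As an illustration: for the Wiener process the increment $ X ( t + h ) - X ( t) $ has a normal distribution with mean $ 0 $ and variance $ | h | $, so that

$$ {\mathsf E} | X ( t + h ) - X ( t) | ^ {4} = 3 | h | ^ {2} , $$

and the general condition (1) is satisfied with $ \alpha = 4 $, $ \delta = 1 $, while the weaker condition (2) already holds with $ \alpha _ {1} = 2 $, $ \delta _ {1} = 1 $, as noted above.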

Instead of specifying the infinite system of finite-dimensional probability distributions of a stochastic process $ X ( t) $, this can be defined using the values of the corresponding characteristic functional

$$ \tag{3 } \psi [ l ] = {\mathsf E} \mathop{\rm exp} \{ i l [ X ] \} , $$

where $ l $ ranges over a sufficiently wide class of linear functionals depending on $ X $. If $ X $ is continuous in probability for $ a \leq t \leq b $( that is, $ {\mathsf P} \{ | X ( t + h ) - X ( t) | > \epsilon \} \rightarrow 0 $ as $ h \rightarrow 0 $ for any $ \epsilon > 0 $) and $ g $ is a function of bounded variation on $ [ a , b ] $, then

$$ \int\limits _ { a } ^ { b } X ( t) d g ( t) = l ^ {(g)} [ X ] $$

is a random variable. One may take $ l [ X] = l ^ {(g)} [ X] $ in (3), where $ \psi [ l ^ {(g)} ] $ is denoted by the symbol $ \psi [ g] $ for convenience. In many cases it is sufficient to consider only linear functionals $ l [ X] $ of the form

$$ \int\limits _ { a } ^ { b } X ( t) \phi ( t) d t = l _ \phi [ X] , $$

where $ \phi $ is an infinitely-differentiable function of compact support in $ t $( and the interval $ [ a , b ] $ may be taken finite). Under fairly general regularity conditions, the values $ \psi [ l _ \phi ] = \psi [ \phi ] $ uniquely determine all finite-dimensional probability distributions of $ X ( t) $, since

$$ \psi [ \phi ] \rightarrow \psi _ {t _ {1} \dots t _ {n} } ( \theta _ {1} \dots \theta _ {n} ) , $$

where $ \psi _ {t _ {1} \dots t _ {n} } ( \theta _ {1} \dots \theta _ {n} ) $ is the characteristic function of the random vector $ \{ X ( t _ {1} ) \dots X ( t _ {n} ) \} $, as

$$ \phi ( t) \rightarrow \theta _ {1} \delta ( t - t _ {1} ) + \dots + \theta _ {n} \delta ( t - t _ {n} ) $$

(here $ \delta ( t) $ is the Dirac $ \delta $-function, and convergence is understood in the sense of convergence of generalized functions). If $ \psi [ \phi ] $ does not tend to a finite limit, then $ X $ has no finite values at any fixed point and only smoothed values $ l _ \phi [ X] $ have a meaning, that is, the characteristic functional $ \psi [ \phi ] $ does not give an ordinary ("classical") stochastic process $ X ( t) $, but a generalized stochastic process (cf. Stochastic process, generalized) $ X = X ( \phi ) $.
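
A standard example of the latter situation is Gaussian white noise: if the smoothed values $ l _ \phi [ X ] $ are Gaussian with mean zero and variance $ \int \phi ( t) ^ {2} d t $, then

$$ \psi [ \phi ] = \mathop{\rm exp} \left \{ - \frac{1}{2} \int\limits \phi ( t) ^ {2} d t \right \} , $$

which defines a generalized stochastic process having no finite values at individual points (formally, the derivative of the Wiener process).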

The problem of describing all finite-dimensional probability distributions of $ X ( t) $ is simplified in those cases when they are all uniquely determined by the distributions of only a few lower orders. The most important class of stochastic processes whose multi-dimensional distributions are determined by the one-dimensional distributions of $ X ( t) $ is that of sequences of independent random variables (which are special stochastic processes in discrete time). Such processes can be studied within the framework of classical probability theory, and it is significant that a number of important classes of stochastic processes can be specified effectively as functions of a sequence $ Y ( t) $, $ t = 0 , \pm 1 , \pm 2 ,\dots $ of independent random variables. For example, the following stochastic processes are of considerable interest:

$$ X ( t) = \sum _{j=0} ^ \infty b _ {j} Y ( t - j ) $$

or

$$ X ( t) = \sum _ {j = - \infty } ^ \infty b _ {j} Y ( t - j ) ,\ \ t = 0 , \pm 1 ,\dots $$

(see Moving-average process), and

$$ X ( t) = \sum _{j=1} ^ \infty Y ( j) h _ {j} ( t) ,\ a \leq t \leq b , $$

where $ h _ {j} $, $ j = 1 , 2 \dots $ is a prescribed system of functions on the interval $ [ a , b ] $( see Spectral decomposition of a random function).
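
The construction of a moving-average process from a sequence of independent variables is easy to illustrate numerically. The following is only a minimal sketch (in Python with NumPy; the coefficients $ b _ {j} $, the sample size and the random seed are arbitrary illustrative choices): it generates such a process and compares its empirical correlation with the theoretical value $ {\mathsf E} X ( t) X ( t + s ) = \sum _ {j} b _ {j} b _ {j + s } $ for innovations of unit variance.

<pre>
import numpy as np

rng = np.random.default_rng(0)

# illustrative coefficients b_0, ..., b_3 of the moving average
b = np.array([1.0, 0.6, 0.3, 0.1])

# independent identically distributed innovations Y(t), t = 0, ..., N - 1
N = 100000
Y = rng.standard_normal(N)

# moving-average process X(t) = sum_j b_j * Y(t - j)  (valid part of the convolution)
X = np.convolve(Y, b, mode="valid")

# the covariance E X(t) X(t+s) depends only on the lag s and equals sum_j b_j * b_{j+s}
for s in range(4):
    empirical = np.mean(X[:len(X) - s] * X[s:])
    theoretical = np.sum(b[:len(b) - s] * b[s:])
    print(f"lag {s}: empirical {empirical:.3f}, theoretical {theoretical:.3f}")
</pre>

The agreement of the two columns reflects the fact that the second-order characteristics of a moving average are expressed directly in terms of its coefficients.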

Three important classes of stochastic processes are described below, for which all finite-dimensional distributions are determined by the one-dimensional distributions of $ X ( t) $ and the two-dimensional distributions of $ \{ X ( t _ {1} ) , X ( t _ {2} ) \} $.

1) The class of stochastic processes with independent increments (cf. Stochastic process with independent increments) $ X ( t) $, for which $ X ( t _ {2} ) - X ( t _ {1} ) $ and $ X ( t _ {4} ) - X ( t _ {3} ) $ are independent variables ( $ t _ {1} < t _ {2} \leq t _ {3} < t _ {4} $). To represent $ X ( t) $ on the interval $ [ a, b] $ it is convenient to use the distribution functions $ F _ {a} ( x) $ and $ \Phi _ {t _ {1} , t _ {2} } ( z) $, where $ a \leq t _ {1} \leq t _ {2} \leq b $, of the random variables $ X ( a) $ and $ X ( t _ {2} ) - X ( t _ {1} ) $, in which case $ \Phi _ {t _ {1} , t _ {2} } ( z) $ must evidently satisfy the functional equation

$$ \tag{4 } \int\limits _ {- \infty } ^ \infty \Phi _ {t _ {1} , t _ {2} } ( z - u ) d \Phi _ {t _ {2} , t _ {3} } ( u) = \Phi _ {t _ {1} , t _ {3} } ( z ) , $$

$$ a \leq t _ {1} < t _ {2} < t _ {3} \leq b . $$
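
Relation (4) simply expresses the fact that, by the independence of the increments,

$$ X ( t _ {3} ) - X ( t _ {1} ) = [ X ( t _ {2} ) - X ( t _ {1} ) ] + [ X ( t _ {3} ) - X ( t _ {2} ) ] $$

is a sum of two independent terms, so that $ \Phi _ {t _ {1} , t _ {3} } $ is the convolution of $ \Phi _ {t _ {1} , t _ {2} } $ and $ \Phi _ {t _ {2} , t _ {3} } $.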

Using (4) it is possible to show that if $ X ( t) $ is continuous in probability, then its characteristic functional $ \psi [ g ] $ can be written in the form

$$ \psi [ g ] = \mathop{\rm exp} \left \{ i \int\limits _ { a } ^ { b } \gamma ( t) d g ( t) - \frac{1}{2} \int\limits _ { a } ^ { b } \beta ( t) [ g ( b) - g ( t) ] d g ( t) \right . + $$

$$ + \int\limits _ {- \infty } ^ \infty \int\limits _ { a } ^ { b } \left [ e ^ {i y [ g ( b) - g ( t) ] } - 1 - \frac{i y [ g ( b) - g ( t) ] }{1 + y ^ {2} } \right ] \times $$

$$ \times \left . \frac{1 + y ^ {2} }{y ^ {2} } d _ {t} \Pi _ {t} ( d y ) \right \} , $$

where $ \gamma ( t) $ is a continuous function, $ \beta ( t) $ is a non-decreasing continuous function with $ \beta ( a) = 0 $, and $ \Pi _ {t} ( d y ) $ is a measure on $ \mathbf R $ which is increasing and continuous in $ t $.

2) The class of Markov processes $ X ( t) $ for which, when $ t _ {1} < t _ {2} $, the conditional probability distribution of $ X ( t _ {2} ) $ given all values of $ X ( t) $ for $ t \leq t _ {1} $ depends only on $ X ( t _ {1} ) $. To represent a Markov process $ X ( t) $, $ a \leq t \leq b $, it is convenient to use the distribution function $ F _ {a} ( x) $ of the value $ X ( a) $ and the transition function $ \Phi _ {t _ {1} , t _ {2} } ( x , z ) $, which is defined for $ t _ {1} < t _ {2} $ as the conditional probability that $ X ( t _ {2} ) < z $ given that $ X ( t _ {1} ) = x $. The function $ \Phi _ {t _ {1} , t _ {2} } ( x , z ) $ must satisfy the Kolmogorov–Chapman equation, similar to (4), and this enables one, under certain conditions, to obtain the simpler forward and backward Kolmogorov equations (e.g. the Fokker–Planck equation) for this function.
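
In the present notation the Kolmogorov–Chapman equation can be written as

$$ \Phi _ {t _ {1} , t _ {3} } ( x , z ) = \int\limits _ {- \infty } ^ \infty \Phi _ {t _ {2} , t _ {3} } ( y , z ) d _ {y} \Phi _ {t _ {1} , t _ {2} } ( x , y ) ,\ \ t _ {1} < t _ {2} < t _ {3} . $$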

3) The class of Gaussian processes $ X ( t) $ for which all multi-dimensional probability distributions of the vectors $ \{ X ( t _ {1} ) \dots X ( t _ {n} ) \} $ are Gaussian (normal) distributions. Since a normal distribution is uniquely determined by its first and second moments, a Gaussian process $ X ( t) $ is determined by the values of the functions

$$ {\mathsf E} X ( t) = m ( t) $$

and

$$ {\mathsf E} X ( t) X ( s) = B ( t , s ) , $$

where $ B ( t , s ) $ must be a non-negative definite kernel such that

$$ b ( t , s ) = B ( t , s ) - m ( t ) m ( s ) $$

is a non-negative definite kernel. The characteristic functional $ \psi [ g ] $ of a Gaussian process $ X ( t) $, where $ a \leq t \leq b $, is

$$ \psi [ g ] = \mathop{\rm exp} \left \{ i \int\limits _ { a } ^ { b } m ( t) d g ( t ) - \frac{1}{2} \int\limits _ { a } ^ { b } \int\limits _ { a } ^ { b } b ( t , s ) \ d g ( t ) d g ( s) \right \} . $$
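
For example, for the Wiener process on $ [ 0 , b ] $ one has $ m ( t) = 0 $ and $ b ( t , s ) = \min ( t , s ) $, so that

$$ \psi [ g ] = \mathop{\rm exp} \left \{ - \frac{1}{2} \int\limits _ { 0 } ^ { b } \int\limits _ { 0 } ^ { b } \min ( t , s ) d g ( t ) d g ( s ) \right \} . $$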

4) Another important class is that of stationary stochastic processes $ X ( t) $, whose statistical characteristics do not change in the course of time, that is, are invariant under the transformation $ X ( t) \mapsto X ( t + a ) $ for any fixed number $ a $. The multi-dimensional probability distributions of a general stationary stochastic process $ X ( t) $ cannot be described in a simple manner, but for many problems concerning such processes it suffices to know only the first two moments, $ {\mathsf E} X ( t) = m $ and $ {\mathsf E} X ( t) X ( t + s ) = B ( s) $ (so that the only assumption needed is stationarity in the wide sense, i.e. that the moments $ {\mathsf E} X ( t) $ and $ {\mathsf E} X ( t) X ( t + s) $ are independent of $ t $). An essential fact is that every stationary stochastic process (stationary at least in the wide sense) admits a spectral decomposition of the form

$$ \tag{5 } X ( t) = \int\limits _ {- \infty } ^ \infty e ^ {i t \lambda } d Z ( \lambda ) , $$

where $ Z ( \lambda ) $ is a stochastic process with non-correlated increments. In particular, it follows that

$$ \tag{6 } B ( s) = \int\limits _ {- \infty } ^ \infty e ^ {i s \lambda } d F ( \lambda ) , $$

where $ F ( \lambda ) $ is the monotone non-decreasing spectral function of $ X ( t) $ (cf. Spectral function of a stationary stochastic process). The spectral decompositions (5) and (6) lie at the heart of the solution of problems of best (in the sense of minimal mean-square error) linear extrapolation, interpolation and filtering of stationary stochastic processes.
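
A rough numerical illustration of (5) and (6) is the following sketch (in Python with NumPy; the frequencies $ \lambda _ {k} $, the weights $ \sigma _ {k} ^ {2} $ and the random seed are arbitrary illustrative choices). It synthesizes a real wide-sense stationary process as a finite sum of harmonics with uncorrelated random amplitudes, which corresponds to a spectral function $ F $ concentrated at finitely many points, and compares the ensemble average $ {\mathsf E} X ( t) X ( t + s ) $ with $ B ( s) = \sum _ {k} \sigma _ {k} ^ {2} \cos \lambda _ {k} s $.

<pre>
import numpy as np

rng = np.random.default_rng(1)

# arbitrary discrete "spectrum": frequencies lambda_k carrying masses sigma_k^2,
# so that (6) reduces to B(s) = sum_k sigma_k^2 * cos(lambda_k * s)
lam = np.array([0.3, 1.1, 2.0])
sigma2 = np.array([1.0, 0.5, 0.25])

# real form of (5): X(t) = sum_k sigma_k * (xi_k cos(lambda_k t) + eta_k sin(lambda_k t)),
# with xi_k, eta_k independent standard normal amplitudes (one pair per realization)
n = 100000
xi = rng.standard_normal((n, len(lam)))
eta = rng.standard_normal((n, len(lam)))
amp = np.sqrt(sigma2)

def X_at(t):
    # values of the n simulated realizations at time t
    return (amp * (xi * np.cos(lam * t) + eta * np.sin(lam * t))).sum(axis=1)

t0, s = 0.7, 2.5
empirical = np.mean(X_at(t0) * X_at(t0 + s))      # ensemble estimate of E X(t) X(t+s)
theoretical = np.sum(sigma2 * np.cos(lam * s))    # B(s) computed from the spectrum
print(f"E X(t) X(t+s): empirical {empirical:.3f}, from (6): {theoretical:.3f}")
</pre>

The ensemble average does not depend on the choice of $ t _ {0} $, in accordance with stationarity in the wide sense.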

The mathematical theory of stochastic processes also includes a large number of results related to a series of subclasses or, conversely, of extensions, of the above classes of stochastic processes (see Markov chain; Diffusion process; Branching process; Martingale; Stochastic process with stationary increments; etc.).

References

[Sl] E.E. Slutskii, Selected works , Moscow (1980) pp. 269–280 (In Russian)
[Do] J.L. Doob, "Stochastic processes" , Wiley (1953) MR1570654 MR0058896 Zbl 0053.26802
[GS] I.I. Gihman, A.V. Skorohod, "Introduction to the theory of stochastic processes" , Saunders (1967) (Translated from Russian)
[GS2] I.I. Gihman, A.V. Skorohod, "Theory of stochastic processes" , 1–3 , Springer (1974–1979) (Translated from Russian) MR0636254 MR0651015 MR0375463 MR0350794 MR0346882 Zbl 0531.60002 Zbl 0531.60001 Zbl 0404.60061 Zbl 0305.60027 Zbl 0291.60019
[CL] H. Cramér, M.R. Leadbetter, "Stationary and related stochastic processes" , Wiley (1967) MR0217860 Zbl 0162.21102
[We] A.D. Wentzell, "A course in the theory of stochastic processes" , McGraw-Hill (1981) (Translated from Russian) MR0781738 MR0614594 Zbl 0502.60001
[Rz] Yu.A. Rozanov, "Stochastic processes" , 1–2 , Moscow (1960–1963) (In Russian)
[Sk] A.V. Skorohod, "Random processes with independent increments" , Kluwer (1991) (Translated from Russian) MR1155400
[Dy] E.B. Dynkin, "Markov processes" , 1–2 , Springer (1965) (Translated from Russian) MR0193671 Zbl 0132.37901
[IR] I.A. Ibragimov, Yu.A. Rozanov, "Gaussian stochastic processes" , Springer (1978) (Translated from Russian) MR0272040
[Rz2] Yu.A. Rozanov, "Stationary stochastic processes" , Holden-Day (1967) (Translated from Russian) MR0159363 MR0114252 Zbl 0721.60040

Comments

The state space $ E $ of a stochastic process $ X $ may be a (good) topological space without algebraic structure, as in Markov process theory; in this case one considers real processes of the form $ f \circ X $, where $ f $ is a real function on $ E $. It can also be a differentiable manifold, as in modern diffusion process theory, etc. Concerning the regularity properties of the paths, it is often impossible to prove that the set of regular paths under consideration has probability 1, because this set need not be measurable; this difficulty can often be circumvented by proving that its outer probability equals $ 1 $.

References

[B] N.T.J. Bailey, "The elements of stochastic processes" , Wiley (1964) MR0165572 Zbl 0127.11203
[C] K.L. Chung, "Lectures from Markov processes to Brownian motion" , Springer (1982) MR0648601 Zbl 0503.60073
[CM] D.R. Cox, H.D. Miller, "The theory of stochastic processes" , Methuen (1965) MR0192521 Zbl 0149.12902
[IC] R. Iranpour, P. Chacon, "Basic stochastic processes" , The Marc Kac lectures , Macmillan (1988) MR0965763 Zbl 0681.60035
[K] N.G. van Kampen, "Stochastic processes in physics and chemistry" , North-Holland (1981) Zbl 0511.60038
[L] P. Lévy, "Processus stochastiques et mouvement Brownien" , Gauthier-Villars (1965) MR0190953 Zbl 0137.11602
[P] E. Parzen, "Stochastic processes" , Holden-Day (1967) MR1699272 MR1532996 MR0139192 MR0095531 MR0084899 Zbl 0932.60001 Zbl 0107.12301 Zbl 0079.34601
[Rs] M. Rosenblatt, "Random processes" , Springer (1974) MR0346883 Zbl 0287.60031
[Wa] N. Wax (ed.), Selected papers on noise and stochastic processes , Dover, reprint (1954) Zbl 0059.11903
[Wo] E. Wong, "Stochastic processes in information and dynamical systems" , McGraw-Hill (1971) MR0415698 Zbl 0245.60001
[E] K. Ethier, "Markov processes" , Wiley (1986) MR0838085 Zbl 0592.60049
[Du] R. Durrett, "Brownian motion and martingales in analysis" , Wadsworth (1984) MR0750829 Zbl 0554.60075
How to Cite This Entry:
Stochastic process. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Stochastic_process&oldid=24340
This article was adapted from an original article by A.M. Yaglom (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article