{{TEX|done}}
 
''process without after-effects''
[[Category:Markov processes]]
  
A [[Stochastic process|stochastic process]] whose evolution after a given time $ t $ does not depend on the evolution before $ t $, given that the value of the process at $ t $ is fixed (briefly: the "future" and "past" of the process are independent of each other for a known "present").
  
The defining property of a Markov process is commonly called the [[Markov property|Markov property]]; it was first stated by A.A. Markov. However, in the work of L. Bachelier it is already possible to find an attempt to discuss [[Brownian motion|Brownian motion]] as a Markov process, an attempt which received justification later in the research of N. Wiener (1923). The basis of the general theory of continuous-time Markov processes was laid by A.N. Kolmogorov.
 
  
 
==The Markov property.==
There are essentially distinct definitions of a Markov process. One of the more widely used is the following. On a [[Probability space|probability space]] $ ( \Omega , F , {\mathsf P} ) $ let there be given a stochastic process $ X ( t) $, $ t \in T $, taking values in a [[Measurable space|measurable space]] $ ( E , {\mathcal B} ) $, where $ T $ is a subset of the real line $ \mathbf R $. Let $ N _ {t} $ (respectively, $ N ^ {t} $) be the $ \sigma $-algebra in $ \Omega $ generated by the variables $ X ( s) $ for $ s \leq t $ ($ s \geq t $), where $ s \in T $. In other words, $ N _ {t} $ (respectively, $ N ^ {t} $) is the collection of events connected with the evolution of the process up to time (starting from time) $ t $. $ X ( t) $ is called a Markov process if (almost certainly) for all $ t \in T $, $ \Lambda _ {1} \in N _ {t} $, $ \Lambda _ {2} \in N ^ {t} $ the Markov property
  
$$ \tag{1 }
{\mathsf P} \{ \Lambda _ {1} \Lambda _ {2} \mid X ( t) \} =
{\mathsf P} \{ \Lambda _ {1} \mid X ( t) \}
{\mathsf P} \{ \Lambda _ {2} \mid X ( t) \}
$$
  
holds, or, what is the same, if for any $ t \in T $ and $ \Lambda \in N ^ {t} $,
  
$$ \tag{2 }
{\mathsf P} \{ \Lambda \mid N _ {t} \} =
{\mathsf P} \{ \Lambda \mid X ( t) \} .
$$
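
For illustration, the Markov property (2) can be checked empirically on a simulated chain. In the following sketch (the three-state transition matrix is an arbitrary choice), the conditional probability of a "future" event given the "present" is estimated with and without extra conditioning on the "past"; for a Markov chain the two estimates agree up to sampling error.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.6, 0.3, 0.1],   # arbitrary 3-state transition matrix
              [0.2, 0.5, 0.3],
              [0.1, 0.2, 0.7]])

# simulate one long trajectory
n = 200_000
x = np.empty(n, dtype=int)
x[0] = 0
for t in range(1, n):
    x[t] = rng.choice(3, p=P[x[t - 1]])

# P{X(t+1)=2 | X(t)=1} versus P{X(t+1)=2 | X(t)=1, X(t-1)=0}
present = x[1:-1] == 1
past = x[:-2] == 0
future = x[2:] == 2
print(future[present].mean())         # close to P[1, 2] = 0.3
print(future[present & past].mean())  # also close to 0.3
</syntaxhighlight>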
  
A Markov process for which $ T $ is contained in the natural numbers is called a [[Markov chain|Markov chain]] (however, the latter term is mostly associated with the case of an at most countable $ E $). If $ T $ is an interval in $ \mathbf R $ and $ E $ is at most countable, a Markov process is called a continuous-time Markov chain. Examples of continuous-time Markov processes are furnished by diffusion processes (cf. [[Diffusion process|Diffusion process]]) and processes with independent increments (cf. [[Stochastic process with independent increments|Stochastic process with independent increments]]), including Poisson and Wiener processes (cf. [[Poisson process|Poisson process]]; [[Wiener process|Wiener process]]).
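
As a concrete continuous-time instance, a Poisson process can be simulated from its exponential holding times; in this sketch the rate and horizon are arbitrary choices.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(1)

def poisson_path(rng, lam, horizon):
    """Jump times of a Poisson process of rate lam on [0, horizon]."""
    times, t = [], 0.0
    while True:
        t += rng.exponential(1.0 / lam)  # exponential holding time in each state
        if t > horizon:
            return np.array(times)
        times.append(t)

lam, horizon = 2.0, 10.0
counts = [len(poisson_path(rng, lam, horizon)) for _ in range(20_000)]
print(np.mean(counts), lam * horizon)  # empirical mean of X(horizon) vs lam * horizon
</syntaxhighlight>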
  
In what follows the discussion will concern only the case $ T = [ 0 , \infty ) $, for the sake of being specific. The formulas (1) and (2) give an explicit interpretation of the principle of independence of "past" and "future" events when the "present" is known, but the definition of a Markov process based on them has proved to be insufficiently flexible in the numerous situations where one is obliged to consider not one, but a collection of conditions of the type (1) or (2) corresponding to different, but in some sense consistent, measures $ {\mathsf P} $. Such reasoning has led to the acceptance of the following definitions (see , ).
  
 
Suppose one is given:
  
a) a measurable space $ ( E , {\mathcal B} ) $, where the $ \sigma $-algebra $ {\mathcal B} $ contains all one-point sets in $ E $;
  
b) a measurable space $ ( \Omega , F ) $, equipped with a family of $ \sigma $-algebras $ F _ {t} ^ {s} \subset F $, $ 0 \leq s \leq t \leq \infty $, such that $ F _ {t} ^ {s} \subset F _ {v} ^ {u} $ if $ [ s , t ] \subset [ u , v ] $;
  
c) a function ("trajectory") $ x _ {t} = x _ {t} ( \omega ) $, defining for $ t \in [ 0 , \infty ) $ and $ v \in [ 0 , t ] $ a [[Measurable mapping|measurable mapping]] from $ ( \Omega , F _ {t} ^ {v} ) $ to $ ( E , {\mathcal B} ) $;
  
d) for each $ s \geq 0 $ and $ x \in E $ a [[Probability measure|probability measure]] $ {\mathsf P} _ {s,x} $ on the $ \sigma $-algebra $ F _ \infty ^ {s} $ such that the function $ {\mathsf P} ( s , \cdot ; t , B ) = {\mathsf P} _ {s , \cdot } \{ x _ {t} \in B \} $ is measurable with respect to $ {\mathcal B} $, if $ s \in [ 0 , t ] $ and $ B \in {\mathcal B} $.
  
The collection $ X ( t) = ( x _ {t} , F _ {t} ^ {s} , {\mathsf P} _ {s , x } ) $ is called a (non-terminating) Markov process given on $ ( E , {\mathcal B} ) $ if $ {\mathsf P} _ {s,x} $-almost certainly
  
$$ \tag{3 }
{\mathsf P} _ {s,x} \{ \Lambda \mid F _ {t} ^ {s} \} =
{\mathsf P} _ {t , x _ {t} } \{ \Lambda \} ,
$$
  
for any $ 0 \leq s \leq t $ and $ \Lambda \in N ^ {t} $. Here $ \Omega $ is the space of elementary events, $ ( E , {\mathcal B} ) $ is the phase space or state space and $ P ( s , x ; t , B ) $ is the [[Transition function|transition function]] or transition probability of $ X ( t) $. If $ E $ is endowed with a topology and $ {\mathcal B} $ is the collection of Borel sets in $ E $, then it is commonly said that the Markov process is given on $ E $. Usually included in the definition of a Markov process is the requirement that $ P ( s , x ; s , \{ x \} ) \equiv 1 $, and then $ {\mathsf P} _ {s,x} \{ \Lambda \} $, $ \Lambda \in F _ \infty ^ {s} $, is interpreted as the probability of $ \Lambda $ under the condition that $ x _ {s} = x $.
  
The following question arises: Is every Markov transition function $ P ( s , x ; t , B ) $, given on a measurable space $ ( E , {\mathcal B} ) $, the transition function of some Markov process? The answer is affirmative if, for example, $ E $ is a separable, locally compact space and $ {\mathcal B} $ is the family of Borel sets in $ E $. In addition, let $ E $ be a complete metric space and let
  
$$
\lim\limits _ {h \downarrow 0 } \alpha _ \epsilon ( h) = 0
$$
  
for any $ \epsilon > 0 $, where
  
$$
\alpha _ \epsilon ( h) =
\sup \{ {P ( s , x ; t , V _ \epsilon ( x) ) } : {x \in E ,\ 0 < t - s < h } \}
$$
  
and $ V _ \epsilon ( x) $ is the complement of the $ \epsilon $-neighbourhood of $ x $. Then the corresponding Markov process can be taken to be right-continuous and having left limits (that is, its trajectories can be chosen so). The existence of a continuous Markov process is guaranteed by the condition $ \alpha _ \epsilon ( h) = o ( h) $ as $ h \downarrow 0 $ (see , ).
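
For a finite $ E $ and discrete time the affirmative answer can be made concrete by direct construction: a trajectory is produced by sampling $ x _ {t _ {k+1} } $ from $ P ( t _ {k} , x _ {t _ {k} } ; t _ {k+1} , \cdot ) $. The sketch below generates the transition function from powers of a single stochastic matrix, which supplies the consistency between the measures that this construction needs; the matrix itself is an arbitrary choice.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(2)
Q = np.array([[0.9, 0.1, 0.0],   # arbitrary one-step stochastic matrix
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])

def transition(s, t):
    """P(s, x; t, .) for integer times s <= t, generated by powers of Q."""
    return np.linalg.matrix_power(Q, t - s)

def sample_path(x0, times):
    """Sample a trajectory sequentially through the given integer times."""
    path = [x0]
    for s, t in zip(times[:-1], times[1:]):
        path.append(rng.choice(len(Q), p=transition(s, t)[path[-1]]))
    return path

print(sample_path(0, [0, 1, 3, 6, 10]))
</syntaxhighlight>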
  
In the theory of Markov processes most attention is given to homogeneous (in time) processes. The corresponding definition assumes one is given a system of objects a)–d) with the difference that the parameters $ s $ and $ u $ may now only take the value 0. Even the notation can be simplified:
  
$$
{\mathsf P} _ {x} = {\mathsf P} _ {0,x} ,\ \
F _ {t} = F _ {t} ^ {0} ,\ \
P ( t , x , B ) = P ( 0 , x ; t , B ) ,
$$

$$
x \in E ,\ t \geq 0 ,\ B \in {\mathcal B} .
$$
  
Subsequently, homogeneity of $ \Omega $ is assumed. That is, it is required that for any $ \omega \in \Omega $ and $ s \geq 0 $ there is an $ \omega ^ \prime \in \Omega $ such that $ x _ {t} ( \omega ^ \prime ) = x _ {t+s} ( \omega ) $ for $ t \geq 0 $. Because of this, on the $ \sigma $-algebra $ N $, the smallest $ \sigma $-algebra in $ \Omega $ containing the events $ \{ \omega : x _ {s} \in B \} $, the time shift operators $ \theta _ {t} $ are defined, which preserve the operations of union, intersection and difference of sets, and for which
  
$$
\theta _ {t} \{ \omega : x _ {s} \in B \} = \{ \omega : x _ {t+s} \in B \} ,
$$
  
where $ s , t \geq 0 $, $ B \in {\mathcal B} $.
  
The collection $ X ( t) = ( x _ {t} , F _ {t} , {\mathsf P} _ {x} ) $ is called a (non-terminating) homogeneous Markov process given on $ ( E , {\mathcal B} ) $ if $ {\mathsf P} _ {x} $-almost certainly
  
$$ \tag{4 }
{\mathsf P} _ {x} \{ \theta _ {t} \Lambda \mid F _ {t} \} = {\mathsf P} _ {x _ {t} } \{ \Lambda \}
$$
  
for $ x \in E $, $ t \geq 0 $ and $ \Lambda \in N $. The transition function of $ X ( t) $ is taken to be $ P ( t , x , B ) $, where, unless otherwise indicated, it is required that $ P ( 0 , x , \{ x \} ) \equiv 1 $. It is useful to bear in mind that in the verification of (4) it is only necessary to consider sets of the form $ \Lambda = \{ \omega : x _ {s} \in B \} $, where $ s \geq 0 $, $ B \in {\mathcal B} $, and that in (4), $ F _ {t} $ may always be replaced by the $ \sigma $-algebra $ \overline{F} _ {t} $ equal to the intersection of the completions of $ F _ {t} $ relative to all possible measures $ {\mathsf P} _ \mu $. Often, one fixes on $ {\mathcal B} $ a probability measure $ \mu $ (the "initial distribution") and considers a random Markov function $ ( x _ {t} , F _ {t} , {\mathsf P} _ \mu ) $, where $ {\mathsf P} _ \mu $ is the measure on $ F _ \infty $ given by
  
$$
{\mathsf P} _ \mu \{ \cdot \} = \int\limits {\mathsf P} _ {x} \{ \cdot \} \mu ( d x ) .
$$
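
For a finite chain this integral reduces to a finite mixture: the law of $ x _ {t} $ under $ {\mathsf P} _ \mu $ is $ \sum _ {x} \mu ( x) P ( t , x , \cdot ) $. A sketch (the matrix and the initial distribution are arbitrary choices):

<syntaxhighlight lang="python">
import numpy as np

Q = np.array([[0.5, 0.5, 0.0],    # arbitrary one-step transition matrix
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])
mu = np.array([0.2, 0.3, 0.5])    # initial distribution

t = 4
Pt = np.linalg.matrix_power(Q, t)  # P(t, x, {y}) as a matrix
law = mu @ Pt                      # P_mu{x_t = y} = sum_x mu(x) P(t, x, {y})
print(law, law.sum())              # a probability vector summing to 1
</syntaxhighlight>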
  
A Markov process $ X ( t) = ( x _ {t} , F _ {t} , {\mathsf P} _ {x} ) $ is called progressively measurable if for each $ t > 0 $ the function $ x ( s , \omega ) = x _ {s} ( \omega ) $ induces a measurable mapping from $ ( [ 0 , t ] \times \Omega , {\mathcal B} _ {t} \times F _ {t} ) $ to $ ( E , {\mathcal B} ) $, where $ {\mathcal B} _ {t} $ is the $ \sigma $-algebra of Borel subsets of $ [ 0 , t ] $. A right-continuous Markov process is progressively measurable. There is a method for reducing the non-homogeneous case to the homogeneous case (see ), and in what follows homogeneous Markov processes will be discussed.
  
 
==The strong Markov property.==
Suppose that, on a measurable space $ ( E , {\mathcal B} ) $, a Markov process $ X ( t) = ( x _ {t} , F _ {t} , {\mathsf P} _ {x} ) $ is given. A function $ \tau : \Omega \rightarrow [ 0 , \infty ] $ is called a [[Markov moment|Markov moment]] (stopping time) if $ \{ \omega : \tau \leq t \} \in F _ {t} $ for $ t \geq 0 $. Here a set $ \Lambda \subset \Omega _ \tau = \{ \omega : \tau < \infty \} $ is considered in the family $ F _ \tau $ if $ \Lambda \cap \{ \omega : \tau < t \} \in F _ {t} $ for $ t \geq 0 $ (most often $ F _ \tau $ is interpreted as the family of events connected with the evolution of $ X ( t) $ up to time $ \tau $). For $ \Lambda \in N $, set
  
$$
\theta _ \tau \Lambda =
\cup _ {t \geq 0 } [ \theta _ {t} \Lambda \cap \{ \omega : \tau = t \} ] .
$$
  
A progressively-measurable Markov process $ X $ is called a strong Markov process if for any Markov moment $ \tau $ and all $ t \geq 0 $, $ x \in E $ and $ \Lambda \in N $, the relation
  
$$ \tag{5 }
{\mathsf P} _ {x} \{ \theta _ \tau \Lambda \mid F _ \tau \} =
{\mathsf P} _ {x _ \tau } \{ \Lambda \}
$$
  
(the strong Markov property) is satisfied $ {\mathsf P} _ {x} $-almost certainly in $ \Omega _ \tau $. In the verification of (5) it suffices to consider only sets of the form $ \Lambda = \{ \omega : x _ {s} \in B \} $ where $ s \geq 0 $, $ B \in {\mathcal B} $; in this case $ \theta _ \tau \Lambda = \{ \omega : x _ {s + \tau } \in B \} $. For example, any right-continuous Feller–Markov process on a topological space $ E $ is a strong Markov process. A Markov process is called a Feller–Markov process if the function
  
$$
P ^ {t} f ( \cdot ) = \int\limits f ( y) P ( t , \cdot , d y )
$$
  
is continuous whenever $ f $ is continuous and bounded.
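
The strong Markov property (5) can also be observed empirically at a concrete Markov moment. In the sketch below (the level $ a = 3 $, the lag $ s $ and the horizon are arbitrary choices), $ \tau $ is the first hitting time of $ a $ by a simple random walk; on $ \{ \tau < \infty \} $ the increment $ x _ {\tau + s } - x _ \tau $ is distributed as the value at time $ s $ of a fresh walk started at $ 0 $.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(3)
a, s, n_paths, n_steps = 3, 5, 20_000, 400

def walk(n):
    """Simple symmetric random walk of length n started at 0."""
    return np.concatenate(([0], np.cumsum(rng.choice([-1, 1], size=n))))

post_tau, fresh = [], []
for _ in range(n_paths):
    w = walk(n_steps)
    hit = np.nonzero(w == a)[0]
    if hit.size and hit[0] + s < len(w):    # tau < infinity within the horizon
        post_tau.append(w[hit[0] + s] - a)  # x_{tau+s} - x_tau
    fresh.append(walk(s)[-1])               # value of a fresh walk at time s

print(np.mean(post_tau), np.mean(fresh))    # both near 0
print(np.var(post_tau), np.var(fresh))      # both near s
</syntaxhighlight>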
  
In the case of strong Markov processes various subclasses have been distinguished. Let the Markov transition function $ P ( t , x , B ) $, given on a locally compact metric space $ E $, be stochastically continuous:
  
$$
\lim\limits _ {t \downarrow 0 } P ( t , x , U ) = 1
$$
  
for any neighbourhood $ U $ of each point $ x \in E $. If $ P ^ {t} $ maps the class of continuous functions that vanish at infinity into itself, then $ P ( t , x , B ) $ corresponds to a standard Markov process $ X $. That is, a right-continuous strong Markov process for which: 1) $ F _ {t} = \overline{F} _ {t} $ for $ t \in [ 0 , \infty ) $ and $ F _ {t} = \cap _ {s > t } F _ {s} $ for $ t \in [ 0 , \infty ) $; 2) $ \lim\limits _ {n \rightarrow \infty } x _ {\tau _ {n} } = x _ \tau $, $ {\mathsf P} _ {x} $-almost certainly on the set $ \{ \omega : \tau < \infty \} $, where $ \tau = \lim\limits _ {n \rightarrow \infty } \tau _ {n} $ and $ \tau _ {n} $ ($ n \geq 1 $) are Markov moments that are non-decreasing as $ n $ increases.
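
For the Wiener process, $ P ( t , x , ( x - \epsilon , x + \epsilon ) ) = \mathop{\rm erf} ( \epsilon / \sqrt {2 t } ) $, so stochastic continuity can be checked numerically (the radius $ \epsilon = 0.1 $ is an arbitrary choice):

<syntaxhighlight lang="python">
import math

eps = 0.1  # radius of the neighbourhood U
for t in [1.0, 0.1, 0.01, 0.001]:
    # P(t, x, (x - eps, x + eps)) for the Wiener transition function
    print(t, math.erf(eps / math.sqrt(2.0 * t)))  # tends to 1 as t decreases to 0
</syntaxhighlight>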
  
 
==Terminating Markov processes.==
Frequently, a physical system can be best described using a non-terminating Markov process, but only in a time interval of random length. In addition, even simple transformations of a Markov process may lead to processes with trajectories given on random intervals (see [[Functional of a Markov process|Functional of a Markov process]]). Guided by these considerations one introduces the notion of a terminating Markov process.
  
Let $ \widetilde{X} ( t) = ( \widetilde{x} _ {t} , \widetilde{F} _ {t} , \widetilde{\mathsf P} _ {x} ) $ be a homogeneous Markov process in a phase space $ ( \widetilde{E} , \widetilde{\mathcal B} ) $, having a transition function $ \widetilde{P} ( t , x , B ) $, and let there be a point $ e \in \widetilde{E} $ and a function $ \zeta : \Omega \rightarrow [ 0 , \infty ) $ such that $ \widetilde{x} _ {t} ( \omega ) = e $ for $ \zeta ( \omega ) \leq t $ and $ \widetilde{x} _ {t} ( \omega ) \neq e $ otherwise (unless stated otherwise, take $ \zeta > 0 $). A new trajectory $ x _ {t} ( \omega ) $ is given for $ t \in [ 0 , \zeta ( \omega ) ) $ by the equality $ x _ {t} ( \omega ) = \widetilde{x} _ {t} ( \omega ) $, and $ F _ {t} $ is defined as the trace of $ \widetilde{F} _ {t} $ on the set $ \{ \omega : \zeta > t \} $.
  
The collection $ X ( t) = ( x _ {t} , \zeta , F _ {t} , \widetilde{\mathsf P} _ {x} ) $, where $ x \in E = \widetilde{E} \setminus \{ e \} $, is called the terminating Markov process obtained from $ \widetilde{X} ( t) $ by censoring (or killing) at the time $ \zeta $. The variable $ \zeta $ is called the censoring time or lifetime of the terminating Markov process. The phase space of the new process is $ ( E , {\mathcal B} ) $, where $ {\mathcal B} $ is the trace of the $ \sigma $-algebra $ \widetilde{\mathcal B} $ in $ E $. The transition function of a terminating Markov process is the restriction of $ \widetilde{P} ( t , x , B ) $ to the set $ t \geq 0 $, $ x \in E $, $ B \in {\mathcal B} $. The process $ X ( t) $ is called a strong Markov process or a standard Markov process if $ \widetilde{X} ( t) $ has the corresponding property. A non-terminating Markov process can be considered as a terminating Markov process with censoring time $ \zeta \equiv \infty $. A non-homogeneous terminating Markov process is defined similarly.
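
A minimal discrete sketch of killing (the chain and the absorbing point are arbitrary choices): trajectories of a chain on $ \widetilde{E} = \{ 0 , 1 , 2 \} \cup \{ e \} $ are cut off at the first entry into $ e $, and the transition function of the terminating process is the sub-stochastic restriction of $ \widetilde{P} $ to $ E $.

<syntaxhighlight lang="python">
import numpy as np

# chain on E-tilde; state 3 plays the role of the absorbing point e
Pt = np.array([[0.6, 0.2, 0.1, 0.1],
               [0.1, 0.6, 0.2, 0.1],
               [0.0, 0.2, 0.6, 0.2],
               [0.0, 0.0, 0.0, 1.0]])

print(Pt[:3, :3].sum(axis=1))  # restriction to E: row sums < 1 (mass lost to killing)

rng = np.random.default_rng(4)
path = [0]
while True:
    x = rng.choice(4, p=Pt[path[-1]])
    if x == 3:                 # censoring time zeta reached
        break
    path.append(x)
print(path)                    # trajectory on [0, zeta)
</syntaxhighlight>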
  
 
''M.G. Shur''
  
 
==Markov processes and differential equations.==
A Markov process of Brownian-motion type is closely connected with partial differential equations of parabolic type. The transition density <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/m/m062/m062490/m062490223.png" /> of a diffusion process satisfies, under certain additional assumptions, the backward and forward Kolmogorov equations (cf. [[Kolmogorov equation|Kolmogorov equation]]):
+
A Markov process of Brownian-motion type is closely connected with partial differential equations of parabolic type. The transition density $  p ( s , x , t , y ) $
 +
of a diffusion process satisfies, under certain additional assumptions, the backward and forward Kolmogorov equations (cf. [[Kolmogorov equation|Kolmogorov equation]]):
 +
 
$$ \tag{6 }
\frac{\partial  p }{\partial  s }
+ \sum _ {k = 1 } ^ { n }
a _ {k} ( s , x )
\frac{\partial  p }{\partial  x _ {k} }
+
\frac{1}{2}
\sum _ {k , j = 1 } ^ { n }
b _ {kj} ( s , x )
\frac{\partial  ^ {2} p }{\partial  x _ {k} \partial  x _ {j} }
=
\frac{\partial  p }{\partial  s }
+ L ( s , x ) p  =  0 ,
$$

$$ \tag{7 }
\frac{\partial  p }{\partial  t }
= - \sum _ {k = 1 } ^ { n }
\frac \partial {\partial  y _ {k} }
( a _ {k} ( t , y ) p ) +
\frac{1}{2}
\sum _ {k , j = 1 } ^ { n }
\frac{\partial  ^ {2} }{\partial  y _ {k} \partial  y _ {j} }
( b _ {kj} ( t , y ) p )  =  L  ^ {*} ( t , y ) p .
$$
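
For example (a classical special case, not written out in the article): for one-dimensional Brownian motion one has $  a _ {1} \equiv 0 $, $  b _ {11} \equiv 1 $, and the transition density is the Gaussian kernel

$$
p ( s , x , t , y )  = \
\frac{1}{\sqrt {2 \pi ( t - s ) } }
\mathop{\rm exp}
\left \{ -
\frac{( y - x )  ^ {2} }{2 ( t - s ) }
\right \} ,
$$

for which (6) and (7) reduce to the backward and forward heat equations $  \partial  p / \partial  s + \frac{1}{2} \partial  ^ {2} p / \partial  x  ^ {2} = 0 $ and $  \partial  p / \partial  t = \frac{1}{2} \partial  ^ {2} p / \partial  y  ^ {2} $, as direct differentiation confirms.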
  
The function $  p ( s , x , t , y ) $ is the Green's function of the equations (6)–(7), and the first known methods for constructing diffusion processes were based on existence theorems for this function for the partial differential equations (6)–(7). For a time-homogeneous process the operator $  L ( s , x ) = L ( x ) $ coincides on smooth functions with the infinitesimal operator of the Markov process (see [[Transition-operator semi-group|Transition-operator semi-group]]).
  
The expectations of various functionals of diffusion processes are solutions of boundary value problems for the differential equation (6). Let $  {\mathsf E} _ {s , x }  ( \cdot ) $ denote expectation with respect to the measure $  {\mathsf P} _ {s , x }  $. Then the function $  u _ {1} ( s , x ) = {\mathsf E} _ {s , x }  \phi ( X ( T) ) $ satisfies (6) for $  s < T $, together with the terminal condition $  u _ {1} ( T , x ) = \phi ( x ) $.
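
As a numerical illustration (a minimal Monte Carlo sketch, assuming Brownian motion and an arbitrarily chosen $  \phi $; none of the names or parameters below come from the article):

<pre>
import numpy as np

# Estimate u1(s, x) = E_{s,x} phi(X(T)) for standard Brownian motion by
# simulation and compare with the closed form; phi(y) = cos(y) is an
# illustrative choice, for which E cos(x + sqrt(T-s) Z) = cos(x) e^{-(T-s)/2}.
phi = lambda y: np.cos(y)
s, x, T = 0.0, 0.5, 1.0

rng = np.random.default_rng(0)
samples = x + np.sqrt(T - s) * rng.standard_normal(200_000)
u1_mc = phi(samples).mean()
u1_exact = np.cos(x) * np.exp(-(T - s) / 2)

print(u1_mc, u1_exact)   # the two numbers agree to about 3 decimals
</pre>

One checks directly that $  u _ {1} ( s , x ) = \cos x \cdot e ^ {- ( T - s ) / 2 } $ indeed satisfies (6) with $  L = \frac{1}{2} \partial  ^ {2} / \partial  x  ^ {2} $.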
  
 
Similarly, the function
  
$$
u _ {2} ( s , x )  = \
{\mathsf E} _ {s , x }
\int\limits _ { s } ^ { T }
f ( t , X ( t) )  dt
$$
  
satisfies, for $  s < T $,
  
$$
\frac{\partial  u _ {2} }{\partial  s }
+ L ( s , x ) u _ {2}  = \
- f ( s , x ) ,
$$

and  $  u _ {2} ( T , x ) = 0 $.

Let  $  \tau $ be the time at which the trajectories of  $  X ( t) $ first hit the boundary  $  \partial  D $ of a domain  $  D \subset  \mathbf R  ^ {n} $, and let  $  \tau \wedge T = \min ( \tau , T ) $. Then, under certain conditions, the function

$$
u _ {3} ( s , x )  = \
{\mathsf E} _ {s , x }
\int\limits _ { s } ^ {  \tau  \wedge T }
f ( t , X ( t) )  dt +
{\mathsf E} _ {s , x }
\phi ( \tau \wedge T , X ( \tau \wedge T ))
$$
  
 
satisfies
  
$$
\frac{\partial  u _ {3} }{\partial  s }
+ L ( s , x ) u _ {3}  =  - f
$$
  
and takes the value $  \phi $ on the set
  
$$
\Gamma  = \
\{ s < T , x \in \partial  D \} \cup
\{ s = T , x \in D \} .
$$
  
 
The solution of the first boundary value problem for a general second-order linear parabolic equation
  
$$ \tag{8 }
\left .
\begin{array}{c}
\frac{\partial  u }{\partial  s }
+ L ( s , x ) u + c ( s , x ) u  = - f ( s , x ) ,  \\
u \mid  _  \Gamma   =  \phi ,  \\
\end{array}
\right \}
$$
  
 
can, under fairly general assumptions, be described in the form
  
$$ \tag{9 }
u ( s , x )  = \
{\mathsf E} _ {s , x }
\int\limits _ { s } ^ {  \tau  \wedge T }
\mathop{\rm exp}
\left \{
\int\limits _ { s } ^ { v }
c ( t , X ( t) )  dt
\right \}
f ( v , X ( v) )  dv +
$$

$$
+
{\mathsf E} _ {s , x }  \left \{  \mathop{\rm exp}
\left \{ \int\limits _ { s } ^ {  \tau  \wedge T } c ( t , X ( t) ) \
dt \right \} \phi ( \tau \wedge T , X ( \tau \wedge T ) ) \right \} .
$$
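
The representation (9) suggests a direct simulation method. The sketch below is a hedged illustration, not part of the theory above: the generator is taken to be $  L = \frac{1}{2} d  ^ {2} / d x  ^ {2} $ (Brownian motion), and $  D $, $  c $, $  f $, $  \phi $ are arbitrary illustrative choices. It estimates $  u ( s , x ) $ by averaging the discounted functional of (9) over Euler-simulated trajectories stopped at $  \tau \wedge T $.

<pre>
import numpy as np

# Monte Carlo evaluation of formula (9) in a simple special case:
# X is standard Brownian motion, D = (-1, 1); c, f, phi are illustrative.
c   = lambda t, x: -0.1
f   = lambda t, x: 1.0
phi = lambda t, x: 0.0

def u_mc(s, x, T=1.0, dt=1e-2, n_paths=5_000, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        t, y, disc, integral = s, x, 1.0, 0.0
        while t < T and abs(y) < 1.0:          # run until tau ^ T
            integral += disc * f(t, y) * dt    # exp{int_s^v c dt} f(v, X(v)) dv
            disc *= np.exp(c(t, y) * dt)       # update the discount factor
            y += np.sqrt(dt) * rng.standard_normal()
            t += dt
        total += integral + disc * phi(t, y)   # boundary/terminal term of (9)
    return total / n_paths

print(u_mc(0.0, 0.0))
</pre>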
  
When the operator $  L $ and the functions $  c $ and $  f $ do not depend on $  s $, a representation similar to (9) is also possible for the solution of a linear elliptic equation. More precisely, the function
  
$$ \tag{10 }
u ( x)  = \
{\mathsf E} _ {x} \int\limits _ { 0 } ^  \tau
\mathop{\rm exp} \left \{
\int\limits _ { 0 } ^ { v }
c ( X ( t) )  dt \right \}
f ( X ( v) )  dv +
$$

$$
+
{\mathsf E} _ {x} \left \{  \mathop{\rm exp} \left \{ \int\limits _ { 0 } ^  \tau  c ( X ( t) )  dt \right \} \phi ( X ( \tau ) ) \right \}
$$
  
 
is, under certain assumptions, the solution of
  
$$ \tag{11 }
L ( x ) u + c ( x) u  = \
- f ( x) ,\ \
u \mid  _ {\partial  D }  = \phi .
$$
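
In the simplest special case $  L = \frac{1}{2} \Delta $, $  c \equiv 0 $, $  f \equiv 0 $, formula (10) reduces to the classical probabilistic solution of the Dirichlet problem: $  u ( x ) = {\mathsf E} _ {x} \phi ( X ( \tau ) ) $. The following sketch (the domain, boundary data and step size are illustrative assumptions) exploits the fact that $  \phi ( y ) = y _ {1}  ^ {2} - y _ {2}  ^ {2} $ is itself harmonic, so the exact solution in the unit disc is $  u = \phi $, which gives a built-in check.

<pre>
import numpy as np

# Dirichlet problem (11) with L = (1/2)*Laplacian, c = 0, f = 0 on the
# unit disc: u(x) = E_x phi(X(tau)), X a planar Brownian motion.
phi = lambda y: y[0] ** 2 - y[1] ** 2      # harmonic, so exactly u = phi

def dirichlet_mc(x, dt=1e-3, n_paths=2_000, seed=1):
    rng = np.random.default_rng(seed)
    vals = []
    for _ in range(n_paths):
        y = np.array(x, dtype=float)
        while y @ y < 1.0:                 # run until exit from the disc
            y += np.sqrt(dt) * rng.standard_normal(2)
        vals.append(phi(y / np.linalg.norm(y)))   # project onto the boundary
    return float(np.mean(vals))

x0 = (0.3, 0.4)
print(dirichlet_mc(x0), phi(x0))           # MC estimate vs exact value -0.07
</pre>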
  
When $  L $ is degenerate ( $  \mathop{\rm det}  b ( s , x ) = 0 $) or $  \partial  D $ is not sufficiently "smooth", the boundary values need not be attained by the functions (9), (10) at individual points, or even on whole sets. The notion of a [[Regular boundary point|regular boundary point]] for $  L $ has a probabilistic interpretation; at regular points the boundary values are attained by (9), (10). The solution of (8) and (11) also allows one to study properties of the corresponding diffusion processes and of functionals of them.
  
 
There are methods for constructing Markov processes which do not rely on the construction of solutions of (6) and (7), for example the method of stochastic differential equations (cf. [[Stochastic differential equation|Stochastic differential equation]]) and the method of absolutely-continuous change of measure. This situation, together with formulas (9) and (10), provides a probabilistic route to the construction and study of boundary value problems for (8), and also to the study of properties of solutions of the corresponding elliptic equation.
  
Since the solution of a stochastic differential equation is insensitive to degeneracy of $  b ( s , x ) $, probabilistic methods can be applied to construct solutions of degenerate elliptic and parabolic differential equations. The extension of the averaging principle of N.M. Krylov and N.N. Bogolyubov to stochastic differential equations allows one, with the help of (9), to obtain corresponding results for elliptic and parabolic differential equations. It turns out that certain difficult problems in the investigation of properties of solutions of equations of this type with small parameters in front of the highest derivatives can be solved by probabilistic arguments. Even the solution of the second boundary value problem for (6) has a probabilistic meaning. The formulation of boundary value problems for unbounded domains is closely connected with recurrence in the corresponding diffusion process.
  
In the case of a time-homogeneous process ( $  L $ independent of $  s $), a positive solution of $  L  ^ {*} q = 0 $ coincides, under certain assumptions and up to a multiplicative constant, with the stationary density of the distribution of the Markov process. Probabilistic arguments turn out to be useful even for boundary value problems for non-linear parabolic equations.
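
As a concrete check (an illustrative sketch; the Ornstein–Uhlenbeck process is chosen for convenience and is not singled out in the article): for $  d X = - X  d t + d W $ one has $  L = \frac{1}{2} d  ^ {2} / d x  ^ {2} - x  d / d x $, and the normalized positive solution of $  L  ^ {*} q = 0 $ is the Gaussian density $  q ( x ) = \pi  ^ {- 1 / 2 } e ^ {- x  ^ {2} } $, with variance $  1 / 2 $. The empirical law of a long simulated trajectory approaches $  q $.

<pre>
import numpy as np

# Euler simulation of the Ornstein-Uhlenbeck process dX = -X dt + dW.
# Its stationary density solves L* q = 0 and is N(0, 1/2), so the
# long-run empirical variance of the trajectory should be close to 0.5.
rng = np.random.default_rng(2)
dt, n = 1e-2, 500_000
x = 0.0
xs = np.empty(n)
for i in range(n):
    x += -x * dt + np.sqrt(dt) * rng.standard_normal()
    xs[i] = x

print("empirical variance:", xs.var())    # approximately 0.5
</pre>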
  
 
''R.Z. Khas'minskii''

References to all sections are given below.

====References====
{|
|valign="top"|{{Ref|M}}|| A.A. Markov, "?", ''Izv. Fiz.-Mat. Obshch. Kazan. Univ.'' , '''15''' : 4 (1906) pp. 135–156
|-
|valign="top"|{{Ref|B}}|| L. Bachelier, "?", ''Ann. Sci. Ecole Norm. Sup.'' , '''17''' (1900) pp. 21–86
|-
|valign="top"|{{Ref|Ko}}|| A.N. Kolmogorov, "Ueber die analytischen Methoden in der Wahrscheinlichkeitsrechnung" ''Math. Ann.'' , '''104''' (1931) pp. 415–458
|-
|valign="top"|{{Ref|C}}|| K.L. Chung, "Markov chains with stationary transition probabilities" , Springer (1967) {{MR|0217872}} {{ZBL|0146.38401}}
|-
|valign="top"|{{Ref|F}}|| W. Feller, "The general diffusion operator and positivity-preserving semi-groups in one dimension" ''Ann. of Math.'' , '''60''' (1954) pp. 417–436 {{MR|0065809}}
|-
|valign="top"|{{Ref|DY}}|| E.B. Dynkin, A.A. Yushkevich, "Strong Markov processes" ''Teor. Veroyatnost. i Primenen.'' , '''1''' : 1 (1956) pp. 149–155 (In Russian) (English abstract)
|-
|valign="top"|{{Ref|H}}|| G.A. Hunt, "Markov processes and potentials I" ''Illinois J. Math.'' , '''1''' (1957) pp. 44–93
|-
|valign="top"|{{Ref|H2}}|| G.A. Hunt, "Markov processes and potentials II" ''Illinois J. Math.'' , '''1''' (1957) pp. 316–369
|-
|valign="top"|{{Ref|H3}}|| G.A. Hunt, "Markov processes and potentials III" ''Illinois J. Math.'' , '''2''' (1958) pp. 151–213
|-
|valign="top"|{{Ref|De}}|| C. Dellacherie, "Capacités et processus stochastiques" , Springer (1972) {{MR|0448504}} {{ZBL|0246.60032}}
|-
|valign="top"|{{Ref|Dy}}|| E.B. Dynkin, "Theory of Markov processes" , Pergamon (1960) (Translated from Russian) {{MR|2305744}} {{MR|1009436}} {{MR|0245096}} {{MR|1531923}} {{MR|0131900}} {{MR|0131898}} {{ZBL|1116.60001}} {{ZBL|0689.60003}} {{ZBL|0222.60048}} {{ZBL|0091.13605}}
|-
|valign="top"|{{Ref|D2}}|| E.B. Dynkin, "Markov processes" , '''1''' , Springer (1965) (Translated from Russian) {{MR|0193671}} {{ZBL|0132.37901}}
|-
|valign="top"|{{Ref|GS}}|| I.I. Gihman, A.V. Skorohod, "The theory of stochastic processes" , '''2''' , Springer (1979) (Translated from Russian) {{MR|0651014}} {{MR|0651015}} {{ZBL|0404.60061}}
|-
|valign="top"|{{Ref|Fr}}|| M.I. Freidlin, "Markov processes and differential equations" ''Itogi Nauk. Teor. Veroyatnost. Mat. Statist. Teoret. Kibernet. 1966'' (1967) pp. 7–58 (In Russian) {{MR|0235308}} {{ZBL|0863.60049}}
|-
|valign="top"|{{Ref|Kh}}|| R.Z. Khas'minskii, "Principle of averaging for parabolic and elliptic partial differential equations and for Markov processes with small diffusion" ''Theor. Probab. Appl.'' , '''8''' (1963) pp. 1–21; ''Teor. Veroyatnost. i Primenen.'' , '''8''' : 1 (1963) pp. 3–25
|-
|valign="top"|{{Ref|VF}}|| A.D. Venttsel', M.I. Freidlin, "Random perturbations of dynamical systems" , Springer (1984) (Translated from Russian) {{MR|722136}}
|-
|valign="top"|{{Ref|BG}}|| R.M. Blumenthal, R.K. Getoor, "Markov processes and potential theory" , Acad. Press (1968) {{MR|0264757}} {{ZBL|0169.49204}}
|-
|valign="top"|{{Ref|G}}|| R.K. Getoor, "Markov processes: Ray processes and right processes" , ''Lect. Notes in Math.'' , '''440''' , Springer (1975) {{MR|0405598}} {{ZBL|0299.60051}}
|-
|valign="top"|{{Ref|Kuz}}|| S.E. Kuznetsov, "Any Markov process in a Borel space has a transition function" ''Theor. Prob. Appl.'' , '''25''' (1980) pp. 384–388; ''Teor. Veroyatnost. i Primenen.'' , '''25''' : 2 (1980) pp. 389–393 {{MR|0572574}} {{ZBL|0456.60077}} {{ZBL|0431.60071}}
|}
  
 
====Comments====
 
  
 
====References====
{|
|valign="top"|{{Ref|SV}}|| D.W. Stroock, S.R.S. Varadhan, "Multidimensional diffusion processes" , Springer (1979) {{MR|0532498}} {{ZBL|0426.60069}}
|-
|valign="top"|{{Ref|C2}}|| K.L. Chung, "Lectures from Markov processes to Brownian motion" , Springer (1982) {{MR|0648601}} {{ZBL|0503.60073}}
|-
|valign="top"|{{Ref|Do}}|| J.L. Doob, "Stochastic processes" , Wiley (1953) {{MR|1570654}} {{MR|0058896}} {{ZBL|0053.26802}}
|-
|valign="top"|{{Ref|We}}|| A.D. Wentzell, "A course in the theory of stochastic processes" , McGraw-Hill (1981) (Translated from Russian) {{MR|0781738}} {{MR|0614594}} {{ZBL|0502.60001}}
|-
|valign="top"|{{Ref|Kur}}|| T.G. Kurtz, "Markov processes" , Wiley (1986) {{MR|0838085}} {{ZBL|0592.60049}}
|-
|valign="top"|{{Ref|F2}}|| W. Feller, "An introduction to probability theory and its applications" , '''1–2''' , Wiley (1966) {{MR|0210154}} {{ZBL|0138.10207}}
|-
|valign="top"|{{Ref|Wa}}|| N. Wax (ed.), ''Selected papers on noise and stochastic processes'' , Dover, reprint (1954) {{ZBL|0059.11903}}
|-
|valign="top"|{{Ref|Ka}}|| M. Kac, "Probability and related topics in physical sciences" , Interscience (1959) pp. Chapt. 4 {{MR|0102849}} {{ZBL|0087.33003}}
|-
|valign="top"|{{Ref|Le}}|| P. Lévy, "Processus stochastiques et mouvement Brownien" , Gauthier-Villars (1965) {{MR|0190953}} {{ZBL|0137.11602}}
|-
|valign="top"|{{Ref|Lo}}|| M. Loève, "Probability theory" , '''II''' , Springer (1978) {{MR|0651017}} {{MR|0651018}} {{ZBL|0385.60001}}
|}

Markov process. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Markov_process&oldid=23627