Markov chain, ergodic
A homogeneous Markov chain $\xi(t)$ with the following property: there are quantities (independent of $i$)

$$ \tag{1} p_{j} = \lim_{t \rightarrow \infty} p_{ij}(t), \qquad \sum_{j} p_{j} = 1, $$

where

$$ p_{ij}(t) = \mathsf{P} \{ \xi(t) = j \mid \xi(0) = i \} $$

are the transition probabilities. The distribution $\{ p_{j} \}$ on the state space of the chain $\xi(t)$ is called a stationary distribution: if $\mathsf{P} \{ \xi(0) = j \} = p_{j}$ for all $j$, then $\mathsf{P} \{ \xi(t) = j \} = p_{j}$ for all $j$ and $t \geq 0$. A fundamental property of Markov chains,

$$ \mathsf{P} \{ \xi(t) = j \} = \sum_{i} \mathsf{P} \{ \xi(0) = i \} \, p_{ij}(t), $$

enables one to find the $\{ p_{j} \}$ without calculating the limits in (1).
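For a chain with finitely many states, this reduces the computation of $\{ p_{j} \}$ to a linear system. Below is a minimal Python/NumPy sketch (the 3-state matrix P is a hypothetical example, not part of the article) that solves $pP = p$, $\sum_{j} p_{j} = 1$, and compares the result with the limits in (1):

```python
# A minimal sketch on a hypothetical 3-state chain: the stationary
# distribution {p_j} solves p P = p with sum_j p_j = 1, so it can be
# found by linear algebra instead of computing the limits in (1).
import numpy as np

P = np.array([[0.5, 0.3, 0.2],   # hypothetical example; P[i, j] = p_ij(1)
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])

n = P.shape[0]
# Stationarity (P^T - I) p = 0 together with the normalization
# row 1^T p = 1, solved in the least-squares sense.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
p, *_ = np.linalg.lstsq(A, b, rcond=None)

print(p)                               # stationary distribution {p_j}
print(np.linalg.matrix_power(P, 50))   # each row tends to p, as in (1)
```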

Let

$$ \tau_{jj} = \min \{ t \geq 1 : \xi(t) = j \mid \xi(0) = j \} $$

be the moment of first return to the state $j$ (for a discrete-time Markov chain); then

$$ \mathsf{E} \tau_{jj} = p_{j}^{-1} . $$

A similar (more complicated) relation holds for a continuous-time Markov chain.
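In the discrete-time case the identity $\mathsf{E} \tau_{jj} = p_{j}^{-1}$ is easy to check by simulation; the following sketch uses the same kind of hypothetical 3-state chain as above (the matrix P is an assumption, not from the article):

```python
# A minimal sketch checking E tau_jj = 1/p_j by simulating return
# times to a state j on a hypothetical 3-state chain.
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
p = np.linalg.matrix_power(P, 100)[0]   # stationary distribution

rng = np.random.default_rng(0)
j, returns = 0, []
for _ in range(10_000):
    state, t = j, 0
    while True:                          # run until the first return to j
        state = rng.choice(3, p=P[state])
        t += 1
        if state == j:
            returns.append(t)
            break

print(np.mean(returns))   # sample mean of tau_jj
print(1.0 / p[j])         # theoretical value p_j^{-1}
```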

The trajectories of an ergodic Markov chain satisfy the ergodic theorem: if $f(\cdot)$ is a function on the state space of the chain $\xi(t)$, then, in the discrete-time case,

$$ \mathsf{P} \left\{ \lim_{n \rightarrow \infty} \frac{1}{n} \sum_{t=0}^{n} f(\xi(t)) = \sum_{j} p_{j} f(j) \right\} = 1, $$

while in the continuous-time case the sum on the left is replaced by an integral.
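Numerically, the theorem says that the time average of $f$ along a single long trajectory approaches the space average $\sum_{j} p_{j} f(j)$. A minimal sketch, again on a hypothetical 3-state chain ($P$ and $f$ are assumptions, not from the article):

```python
# A minimal sketch of the ergodic theorem: the time average of f along
# one simulated trajectory matches the space average sum_j p_j f(j).
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
p = np.linalg.matrix_power(P, 100)[0]   # stationary distribution
f = np.array([1.0, -2.0, 5.0])          # an arbitrary function on states

rng = np.random.default_rng(1)
state, total, n_steps = 0, 0.0, 200_000
for _ in range(n_steps):
    total += f[state]
    state = rng.choice(3, p=P[state])

print(total / n_steps)   # time average (1/n) sum_t f(xi(t))
print(p @ f)             # space average sum_j p_j f(j)
```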

A Markov chain for which there are $\rho < 1$ and $C_{ij} < \infty$ such that, for all $i, j, t$,

$$ \tag{2} | p_{ij}(t) - p_{j} | \leq C_{ij} \rho^{t}, $$

is called geometrically ergodic. A sufficient condition for geometric ergodicity of an ergodic Markov chain is the Doeblin condition (see, for example, [1]), which for a discrete (finite or countable) Markov chain may be stated as follows: there are an $n < \infty$ and a state $j$ such that $\inf_{i} p_{ij}(n) = \delta > 0$. If the Doeblin condition is satisfied, then the constants in (2) satisfy $\sup_{i,j} C_{ij} = C < \infty$.
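For a strictly positive transition matrix the Doeblin condition already holds with $n = 1$, and the geometric decay in (2) can be observed directly. A minimal sketch on the same hypothetical 3-state chain:

```python
# A minimal sketch: for a hypothetical strictly positive P the Doeblin
# condition holds with n = 1, and sup_{i,j} |p_ij(t) - p_j| decays
# geometrically, as in (2).
import numpy as np

P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
p = np.linalg.matrix_power(P, 100)[0]   # stationary distribution

print(P.min(axis=0).max())   # delta = inf_i p_ij(1), best choice of j

Pt, prev = np.eye(3), None
for t in range(1, 11):
    Pt = Pt @ P                     # Pt[i, j] = p_ij(t)
    dev = np.abs(Pt - p).max()      # sup_{i,j} |p_ij(t) - p_j|
    if prev is not None:
        print(t, dev, dev / prev)   # the ratio stabilizes near rho
    prev = dev
```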

A necessary and sufficient condition for geometric ergodicity of a countable discrete-time Markov chain is the following (see [3]): there are numbers $f(j)$, $q < 1$ and a finite set $B$ of states such that:

$$ \mathsf{E} \{ f(\xi(1)) \mid \xi(0) = i \} \leq q f(i), \qquad i \notin B , $$

$$ \max_{i \in B} \mathsf{E} \{ f(\xi(1)) \mid \xi(0) = i \} < \infty . $$
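This is a drift (Lyapunov-type) condition, and for simple countable chains it can be verified by hand. As a hypothetical illustration (not from the article), take the reflecting random walk on $\{ 0, 1, 2, \dots \}$ that moves down with probability $0.7$ and up with probability $0.3$, with trial function $f(i) = z^{i}$ and $B = \{ 0 \}$; the sketch below confirms the two displayed requirements with $q = 0.7/z + 0.3 z < 1$ for $z = 1.4$:

```python
# A minimal sketch (hypothetical chain): for the reflecting walk with
# P(i, i-1) = 0.7 and P(i, i+1) = 0.3 for i >= 1, the trial function
# f(i) = z**i gives E{ f(xi(1)) | xi(0) = i } = (0.7/z + 0.3*z) * f(i)
# outside B = {0}, so the criterion holds with q = 0.92 < 1 at z = 1.4.
down, up, z = 0.7, 0.3, 1.4

q = down / z + up * z
print(q)                          # 0.92 < 1: contraction outside B

for i in range(1, 6):             # states i not in B
    drift = down * z**(i - 1) + up * z**(i + 1)
    print(i, drift / z**i)        # equals q for every such i

# On B = {0} the walk stays at 0 w.p. 0.7 or moves to 1 w.p. 0.3:
print(down * z**0 + up * z**1)    # = 1.12 < infinity, as required
```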

References

[1] J.L. Doob, "Stochastic processes", Wiley (1953)
[2] K.L. Chung, "Markov chains with stationary transition probabilities", Springer (1967)
[3] N.N. Popov, "Conditions for geometric ergodicity of countable Markov chains", Soviet Math. Dokl., 18 : 3 (1977) pp. 676–679; Dokl. Akad. Nauk SSSR, 234 : 2 (1977) pp. 316–319

This article was adapted from an original article by A.M. Zubkov (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.