Latest revision as of 07:59, 6 June 2020
A homogeneous Markov chain $ \xi ( t) $ with the following property: there exist quantities $ p _ {j} $ (independent of $ i $) such that
$$ \tag{1 } p _ {j} = \lim\limits _ {t \rightarrow \infty } p _ {ij} ( t) ,\ \ \sum _ { j } p _ {j} = 1 , $$
where
$$ p _ {ij} ( t) = {\mathsf P} \{ \xi ( t) = j \mid \xi ( 0) = i \} $$
are the transition probabilities. The distribution $ \{ p _ {j} \} $ on the state space of the chain $ \xi ( t) $ is called a stationary distribution: If $ {\mathsf P} \{ \xi ( 0) = j \} = p _ {j} $ for all $ j $, then $ {\mathsf P} \{ \xi ( t) = j \} = p _ {j} $ for all $ j $ and $ t \geq 0 $. A fundamental property of Markov chains,
$$ {\mathsf P} \{ \xi ( t) = j \} = \ \sum _ { i } {\mathsf P} \{ \xi ( 0) = i \} p _ {ij} ( t) , $$
enables one to find the $ \{ p _ {j} \} $ without calculating the limits in (1).
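For a finite chain this amounts to solving the linear system $ \pi P = \pi $, $ \sum _ {j} \pi _ {j} = 1 $. A minimal numerical sketch (the two-state transition matrix below is a made-up example, not from the article):

```python
import numpy as np

# Hypothetical two-state transition matrix; each row sums to 1.
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

n = P.shape[0]
# Solve pi P = pi together with the normalization sum(pi) = 1:
# stack (P^T - I) with a row of ones and solve by least squares.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)  # approximately [0.8, 0.2]
```

The system is consistent, so least squares recovers the stationary distribution exactly (up to floating point), with no limit over $ t $ taken anywhere.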
Let
$$ \tau _ {jj} = \min \{ t \geq 1 : \xi ( t) = j \mid \xi ( 0) = j \} $$
be the moment of first return to the state $ j $ (for a discrete-time Markov chain); then
$$ {\mathsf E} \tau _ {jj} = p _ {j} ^ {-1} . $$
A similar (more complicated) relation holds for a continuous-time Markov chain.
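The relation $ {\mathsf E} \tau _ {jj} = p _ {j} ^ {-1} $ is easy to check by simulation. A sketch on an assumed two-state chain (transition probabilities chosen arbitrarily; its stationary distribution is $ (0.8, 0.2) $, so the mean return time to state $ 0 $ should be $ 1 / 0.8 = 1.25 $):

```python
import random

random.seed(1)

def step(i):
    """One transition of a hypothetical chain with
    p(0,0)=0.9, p(0,1)=0.1, p(1,0)=0.4, p(1,1)=0.6."""
    u = random.random()
    if i == 0:
        return 0 if u < 0.9 else 1
    return 0 if u < 0.4 else 1

trials, total = 100_000, 0
for _ in range(trials):
    state, t = step(0), 1        # tau_00 >= 1 by definition
    while state != 0:            # walk until first return to 0
        state, t = step(state), t + 1
    total += t

print(total / trials)  # close to 1/p_0 = 1.25
```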
The trajectories of an ergodic Markov chain satisfy the ergodic theorem: If $ f ( \cdot ) $ is a function on the state space of the chain $ \xi ( t) $, then, in the discrete-time case,
$$ {\mathsf P} \left \{ \lim\limits _ {n \rightarrow \infty } \ \frac{1}{n} \sum _ {t=0} ^ { n } f ( \xi ( t) ) = \sum _ { j } p _ {j} f ( j) \right \} = 1 , $$
while in the continuous-time case the sum on the left is replaced by an integral. A Markov chain for which there are $ \rho < 1 $ and $ C _ {ij} < \infty $ such that for all $ i , j , t $,
$$ \tag{2 } | p _ {ij} ( t) - p _ {j} | \leq C _ {ij} \rho ^ {t} , $$
is called geometrically ergodic. A sufficient condition for geometric ergodicity of an ergodic Markov chain is the Doeblin condition (see, for example, [1]), which for a discrete (finite or countable) Markov chain may be stated as follows: there exist an $ n < \infty $ and a state $ j $ such that $ \inf _ {i} p _ {ij} ( n) = \delta > 0 $. If the Doeblin condition is satisfied, then for the constants in (2) the relation $ \sup _ {i,j} C _ {ij} = C < \infty $ holds.
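For a finite chain the geometric rate in (2) can be observed directly: the second-largest eigenvalue modulus of the transition matrix supplies a valid $ \rho $. A sketch on a hypothetical two-state chain (the eigenvalues of this $ P $ are $ 1 $ and $ 0.5 $, so $ \rho = 0.5 $, and $ C = 1 $ turns out to suffice for all $ i, j $):

```python
import numpy as np

P = np.array([[0.9, 0.1],      # made-up transition matrix
              [0.4, 0.6]])
pi = np.array([0.8, 0.2])      # its stationary distribution
rho = 0.5                      # second eigenvalue: trace(P) - 1

Pt = np.eye(2)
for t in range(1, 25):
    Pt = Pt @ P
    err = np.abs(Pt - pi).max()         # max_{i,j} |p_ij(t) - p_j|
    assert err <= rho ** t + 1e-12      # bound (2) with C = 1
print("geometric bound holds up to t = 24")
```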
A necessary and sufficient condition for geometric ergodicity of a countable discrete-time Markov chain is the following (see [3]): There are numbers $ f ( j) $, $ q < 1 $ and a finite set $ B $ of states such that:
$$ {\mathsf E} \{ f ( \xi ( 1) ) \mid \xi ( 0) = i \} \leq q f ( i) ,\ i \notin B , $$
$$ \max _ {i \in B } {\mathsf E} \{ f ( \xi ( 1) ) \mid \xi ( 0) = i \} < \infty . $$
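As an illustration of this criterion (the chain and the Lyapunov function below are assumptions for the sketch, not taken from the article): for a random walk on $ \{ 0, 1, 2, \dots \} $ with $ p ( i, i+1 ) = 0.3 $, $ p ( i, i-1 ) = 0.7 $ and reflection at $ 0 $, the function $ f ( j) = 1.5 ^ {j} $ with $ B = \{ 0 \} $ and $ q = 0.92 $ satisfies both conditions, since $ 0.3 \cdot 1.5 + 0.7 / 1.5 \approx 0.917 < 0.92 $.

```python
# Numerically verifying the two drift conditions for the hypothetical
# walk: E[f(xi(1)) | xi(0)=i] = 0.3*f(i+1) + 0.7*f(i-1) for i >= 1.
z, q = 1.5, 0.92

def f(j):
    return z ** j

def expected_next_f(i):
    if i == 0:   # reflecting boundary: moves to 1 w.p. 0.3, stays w.p. 0.7
        return 0.3 * f(1) + 0.7 * f(0)
    return 0.3 * f(i + 1) + 0.7 * f(i - 1)

# Condition 1: geometric drift outside B = {0}.
assert all(expected_next_f(i) <= q * f(i) for i in range(1, 60))
# Condition 2: finiteness of the expectation on B.
assert expected_next_f(0) < float("inf")
print("drift criterion satisfied")
```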
References
[1] J.L. Doob, "Stochastic processes", Wiley (1953)
[2] K.L. Chung, "Markov chains with stationary transition probabilities", Springer (1967)
[3] N.N. Popov, "Conditions for geometric ergodicity of countable Markov chains", Soviet Math. Dokl., 18:3 (1977), pp. 676–679; Dokl. Akad. Nauk SSSR, 234:2 (1977), pp. 316–319
Markov chain, ergodic. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Markov_chain,_ergodic&oldid=47767