Markov chain, recurrent

2020 Mathematics Subject Classification: Primary: 60J10 [MSN][ZBL]

A Markov chain in which a random trajectory $\xi(t)$, starting at any state $\xi(0)=i$, returns to that state with probability 1. In terms of the transition probabilities $p_{ij}(t)$, recurrence of a discrete-time Markov chain is equivalent to the divergence for any $i$ of the series

$$\sum_{t=0}^\infty p_{ii}(t).$$
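For illustration, here is a short Python sketch (the function name p00 is an ad hoc choice made here) that evaluates partial sums of this series for the simple symmetric walk on the integers, using the standard return probabilities $p_{00}(2n)=\binom{2n}{n}2^{-2n}$; the partial sums grow without bound, in agreement with the recurrence of the walk.

```python
from math import comb

def p00(t: int) -> float:
    """Transition probability p_{00}(t) for the simple symmetric walk on the
    integers; the walk can return to the origin only after an even number of steps."""
    if t % 2:
        return 0.0
    n = t // 2
    return comb(2 * n, n) / 4 ** n

# Partial sums of sum_t p_{00}(t); they grow roughly like the square root
# of the horizon, so the series diverges and the origin is recurrent.
for T in (10, 100, 1000, 10000):
    print(f"sum up to t = {T:5d}: {sum(p00(t) for t in range(T + 1)):.2f}")
```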

In a recurrent Markov chain a trajectory $\xi(t)$, $0\leq t<\infty$, $\xi(0)=i$, returns to the state $i$ infinitely often with probability 1. In a recurrent Markov chain there are no inessential states, and the essential states decompose into recurrent classes. An example of a recurrent Markov chain is the symmetric random walk on the integer lattice of the line or the plane. In the symmetric walk on the line a particle moves from position $x$ to $x\pm1$, each with probability $1/2$; in the symmetric walk on the plane a particle moves from $(x,y)$ to one of the four points $(x\pm1,y)$, $(x,y\pm1)$, each with probability $1/4$. In these examples a particle, starting its walk at an arbitrary point, returns to that point with probability 1. The symmetric walk on the integer lattice in three-dimensional space, in which the probability of transition from $(x,y,z)$ to any of the six neighbouring points $(x\pm1,y,z)$, $(x,y\pm1,z)$, $(x,y,z\pm1)$ equals $1/6$, is not recurrent; in this case the probability that the particle returns to its initial point is approximately 0.35.
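For illustration, the following Python sketch (the helper returned and the parameters trials and horizon are ad hoc choices made here) estimates the return probability of the three-dimensional symmetric walk by simulation; since each trajectory is truncated at a finite horizon, the estimate comes out a little below the limiting value of about 0.3405.

```python
import random

def returned(horizon: int, dim: int = 3) -> bool:
    """Run one symmetric random walk on the integer lattice Z^dim, started at
    the origin, and report whether it returns to the origin within `horizon` steps."""
    pos = [0] * dim
    for _ in range(horizon):
        axis = random.randrange(dim)          # pick a coordinate axis
        pos[axis] += random.choice((-1, 1))   # step +1 or -1 along it
        if all(c == 0 for c in pos):
            return True
    return False

# Crude Monte Carlo estimate of the return probability in three dimensions.
# Truncating at a finite horizon slightly undershoots the limiting value,
# which is about 0.3405 (the figure the article rounds to 0.35).
random.seed(0)
trials, horizon = 500, 5000
hits = sum(returned(horizon) for _ in range(trials))
print(f"estimated return probability: {hits / trials:.3f}")
```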

References

[F] W. Feller, "An introduction to probability theory and its applications", 1, Wiley (1966)

Comments

References

[Fr] D. Freedman, "Markov chains", Holden-Day (1975)
[I] M. Iosifescu, "Finite Markov processes and their applications", Wiley (1980) MR0587116 Zbl 0436.60001
[KS] J.G. Kemeny, J.L. Snell, "Finite Markov chains", Van Nostrand (1960) MR1531032 MR0115196 Zbl 0089.13704
[KSK] J.G. Kemeny, J.L. Snell, A.W. Knapp, "Denumerable Markov chains", Springer (1976) MR0407981 Zbl 0348.60090
[Re] D. Revuz, "Markov chains", North-Holland (1975) MR0415773 Zbl 0332.60045
[Ro] V.I. Romanovsky, "Discrete Markov chains", Wolters-Noordhoff (1970) (Translated from Russian) MR0266312 Zbl 0201.20002
[Se] E. Seneta, "Non-negative matrices and Markov chains", Springer (1981) MR2209438 Zbl 0471.60001
[Sp] F. Spitzer, "Principles of random walk", Van Nostrand (1964) MR0171290 Zbl 0119.34304
How to Cite This Entry:
Markov chain, recurrent. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Markov_chain,_recurrent&oldid=26567
This article was adapted from an original article by B.A. Sevast'yanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.