Markov chain

{{MSC|60J10|60J27}}
 
 
[[Category:Markov processes]]
 
 
 
A [[Markov process|Markov process]] with finite or countable state space. The theory of Markov chains was created by A.A. Markov who, in 1907, initiated the study of sequences of dependent trials and related sums of random variables {{Cite|M}}.
 
 
 
Let the state space be the set of natural numbers $\mathbf{N} = \{1, 2, \dots\}$ or a finite subset thereof. Let $\xi(t)$ be the state of a Markov chain at time $t$. The fundamental property of a Markov chain is the [[Markov property|Markov property]], which for a discrete-time Markov chain (that is, when $t$ takes only non-negative integer values) is defined as follows: For any $n$, any non-negative integers $t_1 < \cdots < t_n < t$ and any natural numbers $i_1, \dots, i_n, i$, the equality
 
 
 
$$\mathsf{P}\{\xi(t) = i \mid \xi(t_1) = i_1, \dots, \xi(t_n) = i_n\} = \mathsf{P}\{\xi(t) = i \mid \xi(t_n) = i_n\} \tag{1}$$
 
 
 
holds.
 
 
 
The Markov property (1) can be formulated as follows. The time $t_n$ and the related events of the form $\{\xi(t_n) = i_n\}$ are called the "present" of the process; events determined by the values $\xi(s)$, $s < t_n$, are called the "past" of the process; events determined by the values $\xi(u)$, $u > t_n$, are called the "future" of the process. Then (1) is equivalent to the following: For any $t_n$ and fixed "present" $\{\xi(t_n) = i_n\}$, any "past" event $B$ and any "future" event $A$ are conditionally independent, given the present. That is,
 
 
 
$$\mathsf{P}(A \cap B \mid \xi(t_n) = i_n) = \mathsf{P}(A \mid \xi(t_n) = i_n)\, \mathsf{P}(B \mid \xi(t_n) = i_n).$$
 
 
 
In the probabilistic description of a Markov chain $\xi(t)$, the [[Transition probabilities|transition probabilities]]
 
 
 
$$p_{ij}(s, t) = \mathsf{P}\{\xi(t) = j \mid \xi(s) = i\}, \qquad s < t, \tag{2}$$
 
 
 
play a major role. When (2) depends on $s$ and $t$ only through the difference $t - s$, the Markov chain is called homogeneous (in time); otherwise it is called non-homogeneous. Only homogeneous Markov chains are considered below. Let
 
 
 
$$p_{ij} = \mathsf{P}\{\xi(t + 1) = j \mid \xi(t) = i\}.$$
 
 
 
The matrix $P = (p_{ij})$ with entries $p_{ij}$ is called the matrix of transition probabilities. The probability of an arbitrary trajectory $\xi(k) = i_k$, $k = 0, \dots, n$, is given by the transition probabilities $p_{ij}$ and the initial distribution $\mathsf{P}\{\xi(0) = i_0\}$ as follows:
 
 
 
$$\mathsf{P}\{\xi(0) = i_0, \xi(1) = i_1, \dots, \xi(n) = i_n\} = \mathsf{P}\{\xi(0) = i_0\}\, p_{i_0 i_1} p_{i_1 i_2} \cdots p_{i_{n-1} i_n}.$$
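
The product formula lends itself to direct computation. A minimal Python sketch, in which the two-state matrix and the initial distribution are arbitrary illustrations rather than anything from the article:

<pre>
import numpy as np

# hypothetical two-state transition matrix (each row sums to 1)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
pi0 = np.array([0.5, 0.5])   # initial distribution P{xi(0) = i}

def trajectory_probability(P, pi0, path):
    # P{xi(0)=i_0, ..., xi(n)=i_n} = P{xi(0)=i_0} p_{i_0 i_1} ... p_{i_{n-1} i_n}
    prob = pi0[path[0]]
    for a, b in zip(path, path[1:]):
        prob *= P[a, b]
    return prob

print(trajectory_probability(P, pi0, [0, 0, 1, 1]))   # 0.5 * 0.9 * 0.1 * 0.6 = 0.027
</pre>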
 
 
 
Along with the transition probabilities $p_{ij}$, the $t$-step transition probabilities $p_{ij}(t)$ of a Markov chain are also discussed:
 
 
 
$$p_{ij}(t) = \mathsf{P}\{\xi(s + t) = j \mid \xi(s) = i\}.$$
 
 
 
These transition probabilities satisfy the [[Kolmogorov–Chapman equation|Kolmogorov–Chapman equation]]
 
 
 
$$p_{ij}(s + t) = \sum_k p_{ik}(s)\, p_{kj}(t).$$
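
In matrix form, $p_{ij}(t) = (P^t)_{ij}$ and the Kolmogorov–Chapman equation reads $P^{s+t} = P^s P^t$. A quick numerical check, reusing the arbitrary example matrix from above:

<pre>
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

s, t = 3, 5
lhs = np.linalg.matrix_power(P, s + t)   # the matrix (p_{ij}(s + t))
rhs = np.linalg.matrix_power(P, s) @ np.linalg.matrix_power(P, t)
print(np.allclose(lhs, rhs))             # True
</pre>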
 
 
 
Using transition probabilities it is possible to give the following classification of states. Two states $i$ and $j$ are called communicating if there are $t_1, t_2 > 0$ such that $p_{ij}(t_1) > 0$ and $p_{ji}(t_2) > 0$. A state $i$ is called inessential if there is a state $j$ such that $p_{ij}(t) > 0$ for some $t$ and $p_{ji}(t) = 0$ for all $t$. All remaining states are called essential. Thus, the complete state space of a Markov chain decomposes into inessential and essential states. The set of all essential states decomposes into disjoint classes of communicating states, such that any two states from one class communicate with each other, and for any two states $i, j$ from distinct classes, $p_{ij}(t) = 0$, $t > 0$. A Markov chain for which all states belong to one class of communicating states is called non-decomposable (cf. [[Markov chain, non-decomposable|Markov chain, non-decomposable]]); otherwise the Markov chain is called decomposable (see [[Markov chain, decomposable|Markov chain, decomposable]]). If the state space is finite, then its decomposition into these classes determines, to a great extent, the asymptotic properties of the Markov chain. For example, for a finite non-decomposable Markov chain the limit
 
 
 
$$\lim_{t \to \infty} \frac{1}{t} \sum_{u=1}^{t} p_{ij}(u) = p_j, \tag{3}$$
 
 
 
where $p_j > 0$, $\sum_j p_j = 1$, always exists. If, in addition, the Markov chain is aperiodic, that is, if for some $t_0$, $p_{ij}(t) > 0$ for all $t \geq t_0$ and all states $i$ and $j$ (see also [[Markov chain, periodic|Markov chain, periodic]]), then there is the stronger result
 
 
 
$$\lim_{t \to \infty} p_{ij}(t) = p_j \tag{4}$$
 
 
 
(see also [[Markov chain, ergodic|Markov chain, ergodic]]).
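
Numerically, (4) says that for a finite non-decomposable aperiodic chain the rows of $P^t$ all converge to one and the same vector $(p_j)$, while the Cesàro averages in (3) converge even in the periodic case. A sketch, again with a hypothetical two-state matrix:

<pre>
import numpy as np

P = np.array([[0.9, 0.1],
              [0.4, 0.6]])

print(np.linalg.matrix_power(P, 50))   # both rows are close to (0.8, 0.2) = (p_1, p_2)

# Cesaro averages (1/t) sum_{u=1}^t P^u, the limit in (3)
t = 500
print(sum(np.linalg.matrix_power(P, u) for u in range(1, t + 1)) / t)
</pre>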
 
 
 
If the state space of a Markov chain is countable, then its asymptotic properties depend on more subtle properties of the classes of communicating states. The series
 
 
 
$$\sum_{t=1}^{\infty} p_{ii}(t) \tag{5}$$
 
 
 
simultaneously diverge or converge for all states of a given class. A class of states is called recurrent if the series (5) diverges for every state $i$ of the class, and non-recurrent if (5) converges. In a recurrent class a Markov chain returns to any of its states with probability 1; in a non-recurrent class the probability of return is less than 1. If the mean return time in a recurrent class is finite, then the class is called positive; otherwise it is called zero (see [[Markov chain, class of positive states of a|Markov chain, class of positive states of a]]; [[Markov chain, class of zero states of a|Markov chain, class of zero states of a]]). If $i$ and $j$ belong to the same class of positive states, then the limit (3) exists, and in the aperiodic case the limit (4) exists. If $j$ belongs to a class of zero states or is inessential, then $p_{ij}(t) \to 0$ as $t \to \infty$.
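
A standard countable-state illustration (not from the article) is the simple random walk on $\mathbf{Z}$: the symmetric walk is recurrent, while any asymmetric walk is non-recurrent. A Monte Carlo sketch estimating the probability of return to the origin:

<pre>
import random

def return_probability(p, trials=5000, horizon=1000):
    # estimate P{walk with step +1 w.p. p, -1 w.p. 1-p ever returns to 0}
    returns = 0
    for _ in range(trials):
        x = 0
        for _ in range(horizon):
            x += 1 if random.random() < p else -1
            if x == 0:
                returns += 1
                break
    return returns / trials

print(return_probability(0.5))   # close to 1 (recurrent; the finite horizon truncates)
print(return_probability(0.7))   # about 0.6 < 1 (non-recurrent)
</pre>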
 
 
 
Let $f(i)$ be a real-valued function defined on the states of a Markov chain $\xi(t)$. If the Markov chain is non-decomposable and its states form a positive class, then for the sum
 
 
 
$$\eta(n) = \sum_{t=1}^{n} f(\xi(t))$$
 
 
 
the [[Central limit theorem|central limit theorem]] holds:
 
 
 
$$\lim_{n \to \infty} \mathsf{P}\left\{ \frac{\eta(n) - An}{B\sqrt{n}} < x \right\} = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^{x} e^{-u^2/2} \, du \tag{6}$$
 
 
 
for some constants $A$ and $B$. For (6) to hold it is sufficient to require in addition that $\mathsf{E} f^2(\xi(t)) < \infty$, that the return times of the chain have finite second moments, and that $B \neq 0$.
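
Here $A = \sum_j p_j f(j)$ is the mean of $f$ under the stationary distribution, and $B^2$ is an asymptotic variance that accounts for the dependence between successive terms of the sum. A simulation sketch (the chain and $f$ are hypothetical) showing $\eta(n)/n$ settling near $A$:

<pre>
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])
f = np.array([1.0, -1.0])    # an arbitrary function on the two states
A = np.array([0.8, 0.2]) @ f # stationary mean: A = sum_j p_j f(j) = 0.6

def eta(n):
    # simulate xi(1), ..., xi(n) from xi(0) = 0 and sum f(xi(t))
    x, total = 0, 0.0
    for _ in range(n):
        x = rng.choice(2, p=P[x])
        total += f[x]
    return total

n = 2000
samples = np.array([eta(n) for _ in range(200)])
print(samples.mean() / n, A)        # both close to 0.6
print(samples.std() / np.sqrt(n))   # a rough estimate of B
</pre>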
 
 
 
If $t$ takes values in $[0, \infty)$, then the chain is called a continuous-time Markov chain, defined in a similar way using the Markov property (1). Usually, for a continuous-time Markov chain one additionally requires the existence of the finite right derivatives $q_{ij} = p_{ij}'(0)$, called the transition probability densities. For a finite continuous-time Markov chain, from the Kolmogorov–Chapman equation one obtains the Kolmogorov differential equations
 
 
 
$$p_{ij}'(t) = \sum_k q_{ik}\, p_{kj}(t) \tag{7}$$
 
 
 
and
 
 
 
$$p_{ij}'(t) = \sum_k p_{ik}(t)\, q_{kj} \tag{8}$$
 
 
 
with the initial conditions $p_{ij}(0) = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker symbol. Under additional assumptions, (7) and (8) also hold for countable Markov chains.
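
For a finite chain, the system (7)–(8) with these initial conditions has the closed-form solution $P(t) = e^{Qt}$, where $Q = (q_{ij})$. A sketch via the matrix exponential (the generator below is an arbitrary illustration; its rows sum to 0):

<pre>
import numpy as np
from scipy.linalg import expm

# hypothetical matrix of transition probability densities q_{ij}
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

def transition_matrix(t):
    # P(t) = expm(Q t) solves (7) and (8) with P(0) = I
    return expm(Q * t)

P1 = transition_matrix(1.0)
print(P1.sum(axis=1))   # rows sum to 1: P(1) is a stochastic matrix
# finite-difference check of the forward equation (8) at t = 1:
h = 1e-6
print(np.allclose((transition_matrix(1 + h) - P1) / h, P1 @ Q, atol=1e-4))
</pre>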
 
 
 
If a continuous-time Markov chain has a [[Stationary distribution|stationary distribution]] $\{p_j\}$, $p_j = \mathsf{P}\{\xi(t) = j\}$ (that is, the distribution of $\xi(t)$ does not depend on the time $t$), then $\{p_j\}$ satisfies the system of linear equations
 
 
 
$$\sum_j p_j q_{jk} = 0, \qquad \sum_j p_j = 1. \tag{9}$$
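
In matrix form (9) reads $pQ = 0$, $\sum_j p_j = 1$, for the row vector $p = (p_j)$. One common numerical approach is to append the normalization to the transposed system and solve by least squares:

<pre>
import numpy as np

Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])   # the same hypothetical generator as above

n = Q.shape[0]
A = np.vstack([Q.T, np.ones(n)])            # equations p Q = 0 plus sum(p) = 1
b = np.zeros(n + 1); b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)   # least squares absorbs the redundancy
print(p)                                    # stationary distribution, here (1/3, 2/3)
</pre>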
 
 
 
Markov chains have been widely used in the solution of various applied problems. For example, in [[Queueing theory|queueing theory]], for the calculation of the distribution of the number of busy lines in an $M|M|n$ system with refusals (that is, in a system comprising $n$ lines with a Poisson flow of requests and an exponential service-time law) a finite continuous-time Markov chain with states $0, 1, \dots, n$ and the following transition probability densities is used:
 
 
 
$$q_{i, i+1} = \lambda, \qquad 0 \leq i \leq n - 1,$$

$$q_{i, i-1} = \frac{i}{\mu}, \qquad 1 \leq i \leq n,$$

$$q_{ij} = 0, \qquad |i - j| \geq 2$$
 
 
 
(here, $\lambda$ is the intensity of the Poisson flow of requests and $\mu$ is the mean service time). Using (9), the stationary distribution of the number of busy lines in this case can be determined as:
 
 
 
$$p_j = \frac{(\lambda\mu)^j / j!}{\sum_{k=0}^{n} (\lambda\mu)^k / k!}, \qquad j = 0, \dots, n,$$
 
 
 
which is called the [[Erlang distribution|Erlang distribution]].
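
A short computation of this distribution, cross-checked against a direct solution of (9) for the birth–death generator just described (the values of $\lambda$, $\mu$ and $n$ are arbitrary):

<pre>
import numpy as np
from math import factorial

lam, mu, n = 3.0, 0.5, 4   # request intensity, mean service time, number of lines

# Erlang distribution: p_j proportional to (lam * mu)^j / j!
w = np.array([(lam * mu)**j / factorial(j) for j in range(n + 1)])
erlang = w / w.sum()

# generator with q_{i,i+1} = lam, q_{i,i-1} = i/mu, zero elsewhere off-diagonal
Q = np.zeros((n + 1, n + 1))
for i in range(n):
    Q[i, i + 1] = lam
for i in range(1, n + 1):
    Q[i, i - 1] = i / mu
np.fill_diagonal(Q, -Q.sum(axis=1))

A = np.vstack([Q.T, np.ones(n + 1)])   # solve (9) as before
b = np.zeros(n + 2); b[-1] = 1.0
p, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.allclose(p, erlang))          # True
</pre>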
 
 
 
See also [[Markov chain, generalized|Markov chain, generalized]]; [[Markov chain, recurrent|Markov chain, recurrent]]; [[Absorbing state|Absorbing state]]; [[Stochastic matrix|Stochastic matrix]]; [[Transition with prohibitions|Transition with prohibitions]].
 
 
 
====References====
 
{|
 
|valign="top"|{{Ref|M}}|| A.A. Markov, "?", ''Izv. Peterb. Akad. Nauk (6)'', '''1''' : 3 (1907) pp. 61–80
 
|-
 
|valign="top"|{{Ref|D}}|| J.L. Doob, "Stochastic processes", Wiley (1953) {{MR|1570654}} {{MR|0058896}} {{ZBL|0053.26802}}
 
|-
 
|valign="top"|{{Ref|C}}|| K.L. Chung, "Markov chains with stationary transition probabilities", Springer (1967) {{MR|0217872}} {{ZBL|0146.38401}}
 
|-
 
|valign="top"|{{Ref|F}}|| W. Feller, [[Feller, "An introduction to probability theory and its  applications"|"An introduction to probability theory and its  applications"]], '''1''', Wiley (1966)
 
|}
 
 
 
 

How to Cite This Entry:
Markov chain. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Markov_chain&oldid=26563
This article was adapted from an original article by B.A. Sevast'yanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article