Markov chain

From Encyclopedia of Mathematics

{{TEX|done}}
 
{{MSC|60J10|60J27}}

[[Category:Markov processes]]
A [[Markov process|Markov process]] with finite or countable state space. The theory of Markov chains was created by [[Markov, Andrei Andreevich|A.A. Markov]] who, in 1907, initiated the study of sequences of dependent trials and related sums of random variables {{Cite|M}}.
  
Let the state space be the set of natural numbers $\mathbf N$ or a finite subset thereof. Let $\xi ( t)$ be the state of a Markov chain at time $t$. The fundamental property of a Markov chain is the [[Markov property|Markov property]], which for a discrete-time Markov chain (that is, when $t$ takes only non-negative integer values) is defined as follows: For any $i , j \in \mathbf N$, any non-negative integers $t _ {1} < \dots < t _ {k} < t$ and any natural numbers $i _ {1} , \dots, i _ {k}$, the equality
  
$$ \tag{1 }
{\mathsf P} \{ \xi ( t) = j \mid \xi ( t _ {1} ) = i _ {1} , \dots, \xi ( t _ {k} ) = i _ {k} \} =
{\mathsf P} \{ \xi ( t) = j \mid \xi ( t _ {k} ) = i _ {k} \}
$$
  
 
holds.
  
The Markov property (1) can be formulated as follows. The time $t$ and the related events of the form $\{ \xi ( t) = j \}$ are called the "present" of the process; events determined by the values $\xi ( u)$, $u < t$, are called the "past" of the process; events determined by $\xi ( u)$, $u > t$, are called the "future" of the process. Then (1) is equivalent to the following: For any $t \in \mathbf N$ and fixed "present" $\xi ( t) = j$, any "past" event $A$ and any "future" event $B$ are conditionally independent, given the present. That is,
  
$$
{\mathsf P} \{ A \cap B \mid \xi ( t) = j \} =
{\mathsf P} \{ A \mid \xi ( t) = j \} \,
{\mathsf P} \{ B \mid \xi ( t) = j \} .
$$
  
In the probabilistic description of a Markov chain $\xi ( t)$ the [[Transition probabilities|transition probabilities]]
  
$$ \tag{2 }
{\mathsf P} \{ \xi ( t + 1 ) = j \mid \xi ( t) = i \}
$$
  
play a major role. When (2) does not depend on $t$, the Markov chain is called homogeneous (in time); otherwise it is called non-homogeneous. Only homogeneous Markov chains are considered below. Let
  
$$
p _ {ij}  = {\mathsf P} \{ \xi ( t + 1 ) = j \mid \xi ( t) = i \} .
$$
  
The matrix ${\mathsf P} = \| p _ {ij} \|$ with entries $p _ {ij}$ is called the matrix of transition probabilities. The probability of an arbitrary trajectory $\xi ( k) = i _ {k}$, $k = 0 , \dots, t$, is given by the transition probabilities $p _ {ij}$ and the initial distribution ${\mathsf P} \{ \xi ( 0) = i \}$ as follows:
  
$$
{\mathsf P} \{ \xi ( k) = i _ {k} ,\  k = 0 , \dots, t \} =
{\mathsf P} \{ \xi ( 0) = i _ {0} \} \prod_{k=1}^ { t }  p _ {i _ {k-1} i _ {k}  } .
$$
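The product formula above can be computed directly; the following sketch multiplies out the probability of one trajectory. The two-state transition matrix and initial distribution are hypothetical illustrations, not taken from the article.

```python
# A sketch of the trajectory-probability formula: the probability of a
# path i_0, ..., i_t equals P{xi(0) = i_0} times the product of the
# one-step transition probabilities along the path.

def trajectory_probability(initial, P, path):
    """P{xi(0) = i_0} * prod_{k=1}^{t} p_{i_{k-1} i_k}."""
    prob = initial[path[0]]
    for prev, cur in zip(path, path[1:]):
        prob *= P[prev][cur]
    return prob

P = [[0.9, 0.1],       # p_00, p_01  (hypothetical values)
     [0.4, 0.6]]       # p_10, p_11
initial = [1.0, 0.0]   # start in state 0 with probability 1

prob = trajectory_probability(initial, P, [0, 0, 1, 1])
# prob = 1.0 * p_00 * p_01 * p_11 = 0.9 * 0.1 * 0.6
```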
  
Along with the transition probabilities $p _ {ij}$, in a Markov chain the $t$-step transition probabilities $p _ {ij} ( t)$ are also discussed:
  
$$
p _ {ij} ( t)  = {\mathsf P} \{ \xi ( t _ {0} + t ) = j \mid \xi ( t _ {0} ) = i \} .
$$
  
 
These transition probabilities satisfy the [[Kolmogorov–Chapman equation|Kolmogorov–Chapman equation]]
  
$$
p _ {ij} ( t _ {1} + t _ {2} )  = \sum _ { k } p _ {ik} ( t _ {1} ) p _ {kj} ( t _ {2} ) .
$$
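For a homogeneous chain, $p _ {ij} ( t)$ is the $( i , j )$ entry of the $t$-th power of the one-step matrix, so the Kolmogorov–Chapman equation reduces to matrix multiplication. A quick numerical check, on a hypothetical two-state chain:

```python
# Numerical check of the Kolmogorov-Chapman equation: the t-step matrix
# is the t-th power of the one-step matrix, so P^(t1+t2) = P^t1 * P^t2.

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def mat_pow(P, t):
    # start from the identity matrix and multiply t times
    result = [[float(i == j) for j in range(len(P))] for i in range(len(P))]
    for _ in range(t):
        result = mat_mul(result, P)
    return result

P = [[0.9, 0.1], [0.4, 0.6]]               # hypothetical example
lhs = mat_pow(P, 5)                        # p_ij(t1 + t2) with t1 = 2, t2 = 3
rhs = mat_mul(mat_pow(P, 2), mat_pow(P, 3))
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12
           for i in range(2) for j in range(2))
```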
  
Using transition probabilities it is possible to give the following classification of states. Two states $i$ and $j$ are called communicating if there are $t _ {1} , t _ {2} > 0$ such that $p _ {ij} ( t _ {1} ) > 0$ and $p _ {ji} ( t _ {2} ) > 0$. A state $k$ is called inessential if there is a state $l$ such that $p _ {kl} ( t _ {1} ) > 0$ for some $t _ {1} \geq 1$ and $p _ {lk} ( t) \equiv 0$ for all $t \in \mathbf N$. All remaining states are called essential. Thus, the complete state space of a Markov chain decomposes into inessential and essential states. The set of all essential states decomposes into disjoint classes of communicating states, such that any two states from one class communicate with each other, and for any two states $i , j$ from distinct classes, $p _ {ij} ( t) \equiv 0$ and $p _ {ji} ( t) \equiv 0$. A Markov chain for which all states belong to one class of communicating states is called non-decomposable (cf. [[Markov chain, non-decomposable|Markov chain, non-decomposable]]); otherwise the Markov chain is called decomposable (see [[Markov chain, decomposable|Markov chain, decomposable]]). If the state space is finite, then its decomposition into these classes determines, to a great extent, the asymptotic properties of the Markov chain. For example, for a finite non-decomposable Markov chain the limit
  
$$ \tag{3 }
\lim\limits _ {T \rightarrow \infty } \
\frac{1}{T + 1 }
\sum_{t=0}^ { T } p _ {ij} ( t)  = p _ {j} ,
$$

where $\sum _ {j} p _ {j} = 1$, always exists. If, in addition, the Markov chain is aperiodic, that is, if for some $t _ {0}$, for all $t \geq t _ {0}$ and all states $i$ and $j$, $p _ {ij} ( t) > 0$ (see also [[Markov chain, periodic|Markov chain, periodic]]), then there is the stronger result

$$ \tag{4 }
\lim\limits _ {t \rightarrow \infty } \
p _ {ij} ( t)  =  p _ {j}
$$
  
 
(see also [[Markov chain, ergodic|Markov chain, ergodic]]).
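The convergence (4) can be watched numerically: for a finite, non-decomposable, aperiodic chain every row of $\mathsf P ^ {t}$ approaches the same limit distribution $( p _ {j} )$. The two-state matrix below is hypothetical; its stationary law $( 0.8 , 0.2 )$ is found by solving $p {\mathsf P} = p$, $\sum _ {j} p _ {j} = 1$ by hand.

```python
# Rows of P^t converge to the stationary distribution, illustrating (4).

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

P = [[0.9, 0.1], [0.4, 0.6]]   # hypothetical aperiodic two-state chain
Pt = [row[:] for row in P]
for _ in range(99):            # Pt = P^100
    Pt = mat_mul(Pt, P)

# both rows are now (numerically) the stationary distribution (0.8, 0.2)
for row in Pt:
    assert abs(row[0] - 0.8) < 1e-10 and abs(row[1] - 0.2) < 1e-10
```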
 
If the state space of a Markov chain is countable, then its asymptotic properties depend on more subtle properties of the classes of communicating states. The series
  
$$ \tag{5 }
\sum _ { t } p _ {ii} ( t)
$$
  
converges or diverges simultaneously for all states of a given class. A class of states is called recurrent if for any state $i$ of the class the series (5) diverges, and non-recurrent if (5) converges. In a recurrent class a Markov chain returns to any of its states with probability 1; in a non-recurrent class the probability of return is less than 1. If the mean return time in a recurrent class is finite, then the class is called positive; otherwise it is called zero (see [[Markov chain, class of positive states of a|Markov chain, class of positive states of a]]; [[Markov chain, class of zero states of a|Markov chain, class of zero states of a]]). If $i$ and $j$ belong to the same class of positive states, then the limit (3) exists, and in the aperiodic case the limit (4) exists. If $j$ belongs to a class of zero states or is inessential, then $p _ {ij} ( t) \rightarrow 0$ as $t \rightarrow \infty$.
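The dichotomy in (5) can be illustrated by the simple random walk on the integers (a standard example, not treated in the article): $p _ {00} ( 2 n ) = \binom{2n}{n} ( p ( 1 - p ) ) ^ {n}$ for a walk stepping right with probability $p$. For $p = 1/2$ the terms behave like $( \pi n ) ^ {-1/2}$ and the series diverges (state 0 is recurrent); for $p \neq 1/2$ they decay geometrically and the series converges. A numerical illustration, not a proof:

```python
# Partial sums of the series (5) for the simple random walk on Z.
# For p = 1/2 the partial sums grow like sqrt(N); for p = 0.6 the series
# converges, to 1/sqrt(1 - 4 p (1-p)) - 1 = 4 by the generating function
# of the central binomial coefficients.

def partial_sum(p, N):
    total, term = 0.0, 1.0      # term holds C(2n, n) * (p (1-p))^n
    for n in range(1, N + 1):
        # C(2n, n) / C(2n-2, n-1) = (2n)(2n-1)/n^2 = (4n - 2)/n
        term *= (4 * n - 2) / n * p * (1 - p)
        total += term
    return total

fair = partial_sum(0.5, 2000)     # keeps growing: recurrent state
biased = partial_sum(0.6, 2000)   # approximately 4: non-recurrent state
```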
  
Let $f ( \cdot )$ be a real-valued function defined on the states of a Markov chain $\xi ( t)$. If the Markov chain is non-decomposable and its states form a positive class, then for the sum
  
$$
\eta _ {t}  = \sum _ {u = 0 } ^ { t } f ( \xi ( u) )
$$
  
 
the [[Central limit theorem|central limit theorem]] holds:
  
$$ \tag{6 }
\lim\limits _ {t \rightarrow \infty } \
{\mathsf P} \left \{
\frac{\eta _ {t} - A t }{\sqrt {B t} } < x
\right \}
= \frac{1}{\sqrt {2 \pi } }
\int\limits _ {- \infty } ^ { x } e ^ {- u  ^ {2} /2 }  d u
$$
  
for some $A$ and $B > 0$. For (6) to hold it is sufficient to require in addition that $\mathop{\rm Var}  \eta _ {t} \sim B t$ as $t \rightarrow \infty$, with $B > 0$.
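The statement (6) can be illustrated by simulation. In the sketch below, $f$ is the identity on a hypothetical two-state chain; the normalizing constants $A = 0.2$ and $B = 0.48$ come from a standard two-state computation (stationary law $( 0.8 , 0.2 )$, second eigenvalue $\rho = 0.5$, $B = \pi _ {0} \pi _ {1} ( 1 + \rho ) / ( 1 - \rho )$), which is an assumption of the sketch, not taken from the article.

```python
import random

# Simulate eta_t = sum of f(xi(u)) with f(j) = j on a two-state chain and
# normalize as in (6); the sample mean should be near 0 and the sample
# variance near 1.

random.seed(0)                        # deterministic run
P = [[0.9, 0.1], [0.4, 0.6]]          # hypothetical transition matrix
A, B, t, runs = 0.2, 0.48, 500, 1000  # A, B computed by hand for this P

samples = []
for _ in range(runs):
    state, eta = 0, 0
    for _ in range(t):
        state = 0 if random.random() < P[state][0] else 1
        eta += state                  # f(j) = j
    samples.append((eta - A * t) / (B * t) ** 0.5)

mean = sum(samples) / runs
var = sum((x - mean) ** 2 for x in samples) / runs
```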
  
If $t$ takes any value in $[ 0 , \infty )$, then the chain is called a continuous-time Markov chain, defined in a similar way using the Markov property (1). Usually, for a continuous-time Markov chain one additionally requires the existence of the finite right derivatives $d p _ {ij} ( t) / d t \mid _ {t=0}  = q _ {ij}$, called the transition probability densities. For a finite continuous-time Markov chain, from the Kolmogorov–Chapman equation one obtains the Kolmogorov differential equations
  
$$ \tag{7 }
\frac{d p _ {ij} ( t) }{d t }  = \sum _ { k } p _ {ik} ( t) q _ {kj}
$$
  
 
and
  
$$ \tag{8 }
\frac{d p _ {ij} ( t) }{d t }  = \sum _ { k } q _ {ik} p _ {kj} ( t) ,
$$

with the initial conditions $p _ {ij} ( 0) = \delta _ {ij}$, where $\delta _ {ij}$ is the Kronecker symbol. Under additional assumptions, (7) and (8) also hold for countable Markov chains.
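The system (7) with $P ( 0 ) = I$ has solution $P ( t ) = e ^ {tQ}$. The sketch below compares a truncated power-series matrix exponential with the well-known closed form for a two-state chain with rates $\lambda$ ($0 \rightarrow 1$) and $\mu$ ($1 \rightarrow 0$), namely $p _ {01} ( t ) = \lambda ( \lambda + \mu ) ^ {-1} ( 1 - e ^ {- ( \lambda + \mu ) t } )$; the numerical rates are hypothetical.

```python
import math

# Truncated power series for exp(tQ), adequate when t * ||Q|| is small.

def expm(Q, t, terms=60):
    """Return sum_{k<terms} (tQ)^k / k! as a list-of-lists matrix."""
    n = len(Q)
    P = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in P]
    for k in range(1, terms):
        term = [[sum(term[i][l] * Q[l][j] for l in range(n)) * t / k
                 for j in range(n)] for i in range(n)]
        P = [[P[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return P

lam, mu, t = 2.0, 3.0, 0.7          # hypothetical rates and time
Q = [[-lam, lam], [mu, -mu]]
Pt = expm(Q, t)
exact = lam / (lam + mu) * (1 - math.exp(-(lam + mu) * t))
assert abs(Pt[0][1] - exact) < 1e-9
```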
  
If a continuous-time Markov chain has a [[Stationary distribution|stationary distribution]] ${\mathsf P} \{ \xi ( t) = i \} = p _ {i}$ (that is, the distribution of $\xi ( t)$ does not depend on the time $t$), then $\{ p _ {i} \}$ satisfies the system of linear equations
  
$$ \tag{9 }
\left .
\begin{array}{cl}
\sum _ { i } p _ {i}  =  1 , &{}  \\
\sum _ { i } p _ {i} q _ {ij}  = 0 ,  & j = 1 , 2 ,\dots . \\
\end{array}
\right \}
$$
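For a two-state chain the system (9) can be solved by hand, giving $( p _ {0} , p _ {1} ) = ( \mu , \lambda ) / ( \lambda + \mu )$. A minimal check, with hypothetical rates:

```python
# Verify that p = (mu, lam) / (lam + mu) satisfies the normalization and
# the balance equations sum_i p_i q_ij = 0 of the system (9).

lam, mu = 2.0, 3.0                  # hypothetical rates
Q = [[-lam, lam], [mu, -mu]]
p = [mu / (lam + mu), lam / (lam + mu)]

assert abs(sum(p) - 1.0) < 1e-12
for j in range(2):
    assert abs(sum(p[i] * Q[i][j] for i in range(2))) < 1e-12
```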
  
Markov chains have been widely used in the solution of various applied problems. For example, in [[queueing theory]], to calculate the distribution of the number of busy lines in an $M \mid M \mid n$ system with refusals (that is, a system of $n$ lines with a Poisson flow of requests and exponentially distributed service times), a finite continuous-time Markov chain with states $0 , \dots, n$ and the following transition probability densities is used:
  
$$
q _ {i,i+1}  = \lambda ,\ \ 0 \leq i < n ; \ \ q _ {i,i-1}  = i \mu ,\ \ 1 \leq i \leq n ;
$$
  
$$
q _ {ii}  = - ( \lambda + i \mu ) ,\ \ 0 \leq i < n ; \ \ q _ {nn}  = - n \mu ;
$$
  
$$
q _ {ij}  = 0 \ \ \textrm{ if }  | i - j | > 1
$$
  
(here, $\lambda$ is the intensity of the Poisson flow of requests and $\mu ^ {-1}$ is the mean service time). Using (9), the stationary distribution of the number of busy lines can be determined as:
  
$$
p _ {i}  = \
\frac{ ( \lambda / \mu ) ^ {i} / i! }{ \sum_{k=0}^ { n } ( \lambda / \mu ) ^ {k} / k! } ,\ \
i = 0 , \dots, n ,
$$

$$
p _ {i}  = 0 ,\  i > n ,
$$
  
 
which is called the [[Erlang distribution|Erlang distribution]].
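The Erlang formula can be checked against the balance equations (9) using the transition probability densities given above. The numerical values of $\lambda$, $\mu$ and $n$ in the sketch are hypothetical:

```python
from math import factorial

# Build the Erlang distribution for the M|M|n loss system and verify that
# it satisfies normalization and the balance equations of (9).

lam, mu, n = 3.0, 1.0, 5            # hypothetical rates and line count

weights = [(lam / mu) ** i / factorial(i) for i in range(n + 1)]
Z = sum(weights)
p = [w / Z for w in weights]        # Erlang distribution, i = 0..n

def q(i, j):
    if j == i + 1 and i < n:
        return lam                  # arrival occupies a free line
    if j == i - 1 and 1 <= i <= n:
        return i * mu               # one of i busy lines completes service
    if j == i:
        return -(lam + i * mu) if i < n else -n * mu
    return 0.0                      # no other transitions

assert abs(sum(p) - 1.0) < 1e-12
for j in range(n + 1):              # the balance equations of (9)
    assert abs(sum(p[i] * q(i, j) for i in range(n + 1))) < 1e-12
```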
  
 
====References====
{|
 +
|valign="top"|{{Ref|M}}|| A.A. Markov, " Issledovanie zamechatel’nogo sluchaya zavisimyh ispytanij [Investigation of a remarkable example of dependent trials]", ''Izv. Peterb. Akad. Nauk (6)'', '''1''' : 3 (1907) pp. 61–80 (Russian) {{ZBL|38.0274.04}}
 +
|-
 +
|valign="top"|{{Ref|D}}|| J.L. Doob, "Stochastic processes", Wiley (1953) {{MR|1570654}} {{MR|0058896}} {{ZBL|0053.26802}}
 +
|-
 +
|valign="top"|{{Ref|C}}|| K.L. Chung, "Markov chains with stationary transition probabilities", Springer (1967) {{MR|0217872}} {{ZBL|0146.38401}}
 +
|-
 +
|valign="top"|{{Ref|F}}|| W. Feller, [[Feller, "An introduction to probability theory and its  applications"|"An introduction to probability theory and its  applications"]], '''1''', Wiley (1966)
 +
|}
  
 
====Comments====
 
====Comments====
 
  
 
====References====
 
====References====
<table><TR><TD valign="top">[a1]</TD> <TD valign="top"> D. Freedman, "Markov chains" , Holden-Day (1975) {{MR|0686269}} {{MR|0681291}} {{MR|0556418}} {{MR|0428472}} {{MR|0292176}} {{MR|0237001}} {{MR|0211464}} {{MR|0164375}} {{MR|0158435}} {{MR|0152015}} {{ZBL|0501.60071}} {{ZBL|0501.60069}} {{ZBL|0426.60064}} {{ZBL|0325.60059}} {{ZBL|0322.60057}} {{ZBL|0212.49801}} {{ZBL|0129.30605}} </TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top"> M. Iosifescu, "Finite Markov processes and their applications" , Wiley (1980) {{MR|0587116}} {{ZBL|0436.60001}} </TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top"> J.G. Kemeny, J.L. Snell, "Finite Markov chains" , v. Nostrand (1960) {{MR|1531032}} {{MR|0115196}} {{ZBL|0089.13704}} </TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top"> J.G. Kemeny, J.L. Snell, A.W. Knapp, "Denumerable Markov chains" , Springer (1976) {{MR|0407981}} {{ZBL|0348.60090}} </TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top"> D. Revuz, "Markov chains" , North-Holland (1975) {{MR|0415773}} {{ZBL|0332.60045}} </TD></TR><TR><TD valign="top">[a6]</TD> <TD valign="top"> V.I. [V.I. Romanovskii] Romanovsky, "Discrete Markov chains" , Wolters-Noordhoff (1970) (Translated from Russian) {{MR|0266312}} {{ZBL|0201.20002}} </TD></TR><TR><TD valign="top">[a7]</TD> <TD valign="top"> E. Seneta, "Non-negative matrices and Markov chains" , Springer (1981) {{MR|2209438}} {{ZBL|0471.60001}} </TD></TR></table>
+
{|
 +
|valign="top"|{{Ref|Fr}}|| D. Freedman, "Markov chains" , Holden-Day (1975) {{MR|0686269}} {{MR|0681291}} {{MR|0556418}} {{MR|0428472}} {{MR|0292176}} {{MR|0237001}} {{MR|0211464}} {{MR|0164375}} {{MR|0158435}} {{MR|0152015}} {{ZBL|0501.60071}} {{ZBL|0501.60069}} {{ZBL|0426.60064}} {{ZBL|0325.60059}} {{ZBL|0322.60057}} {{ZBL|0212.49801}} {{ZBL|0129.30605}}
 +
|-
 +
|valign="top"|{{Ref|I}}|| M. Iosifescu, "Finite Markov processes and their applications" , Wiley (1980) {{MR|0587116}} {{ZBL|0436.60001}}
 +
|-
 +
|valign="top"|{{Ref|KS}}|| J.G. Kemeny, J.L. Snell, "Finite Markov chains" , v. Nostrand (1960) {{MR|1531032}} {{MR|0115196}} {{ZBL|0089.13704}}
 +
|-
 +
|valign="top"|{{Ref|KSK}}|| J.G. Kemeny, J.L. Snell, A.W. Knapp, "Denumerable Markov chains" , Springer (1976) {{MR|0407981}} {{ZBL|0348.60090}}
 +
|-
 +
|valign="top"|{{Ref|Re}}|| D. Revuz, "Markov chains" , North-Holland (1975) {{MR|0415773}} {{ZBL|0332.60045}}
 +
|-
 +
|valign="top"|{{Ref|Ro}}|| V.I. Romanovsky, "Discrete Markov chains" , Wolters-Noordhoff (1970) (Translated from Russian) {{MR|0266312}} {{ZBL|0201.20002}}
 +
|-
 +
|valign="top"|{{Ref|S}}|| E. Seneta, "Non-negative matrices and Markov chains" , Springer (1981) {{MR|2209438}} {{ZBL|0471.60001}}
 +
|}

Latest revision as of 08:04, 14 January 2024


2020 Mathematics Subject Classification: Primary: 60J10 Secondary: 60J27 [MSN][ZBL]

A Markov process with finite or countable state space. The theory of Markov chains was created by A.A. Markov who, in 1907, initiated the study of sequences of dependent trials and related sums of random variables [M].

Let the state space be the set of natural numbers $ \mathbf N $ or a finite subset thereof. Let $ \xi ( t) $ be the state of a Markov chain at time $ t $. The fundamental property of a Markov chain is the Markov property, which for a discrete-time Markov chain (that is, when $ t $ takes only non-negative integer values) is defined as follows: For any $ i , j \in \mathbf N $, any non-negative integers $ t _ {1} < \dots < t _ {k} < t $ and any natural numbers $ i _ {1} \dots i _ {k} $, the equality

$$ \tag{1 } {\mathsf P} \{ \xi ( t) = j \mid \xi ( t _ {1} ) = i _ {1} \dots \xi ( t _ {k} ) = i _ {k} \} = $$

$$ = \ {\mathsf P} \{ \xi ( t) = j \mid \xi ( t _ {k} ) = i _ {k} \} $$

holds.

The Markov property (1) can be formulated as follows. The time $ t $ and the related events of the form $ \{ \xi ( t) = j \} $ are called the "present" of the process; events determined by the values $ \xi ( u) $, $ u < t $, are called the "past" of the process; events determined by $ \xi ( u) $, $ u > t $, are called the "future" of the process. Then (1) is equivalent to the following: For any $ t \in \mathbf N $ and fixed "present" $ \xi ( t) = j $, any "past" event $ A $ and "future" event $ B $ are conditionally independent, given the present. That is,

$$ {\mathsf P} \{ A \cap B \mid \xi ( t) = j \} = $$

$$ = \ {\mathsf P} \{ A \mid \xi ( t) = j \} {\mathsf P} \{ B \mid \xi ( t) = j \} . $$

In the probabilistic description of a Markov chain $ \xi ( t) $ the transition probabilities

$$ \tag{2 } {\mathsf P} \{ \xi ( t + 1 ) = j \mid \xi ( t) = i \} $$

play a major role. When (2) does not depend on $ t $, the Markov chain is called homogeneous (in time); otherwise it is called non-homogeneous. Only homogeneous Markov chains are considered below. Let

$$ p _ {ij} = {\mathsf P} \{ \xi ( t + 1 ) = j \mid \xi ( t) = i \} . $$

The matrix $ {\mathsf P} = \| p _ {ij} \| $ with entries $ p _ {ij} $ is called the matrix of transition probabilities. The probability of an arbitrary trajectory $ \xi ( k) = i _ {k} $, $ k = 0 \dots t $, is given by the transition probabilities $ p _ {ij} $ and the initial distribution $ {\mathsf P} \{ \xi ( 0) = i \} $ as follows:

$$ {\mathsf P} \{ \xi ( k) = i _ {k} , k = 0 \dots t \} = $$

$$ = \ {\mathsf P} \{ \xi ( 0) = i _ {0} \} \prod_{k=1}^ { t } p _ {i _ {k-1} i _ {k} } . $$
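As a purely illustrative numerical sketch (not part of the original article), the trajectory probability above can be evaluated directly from a transition matrix and an initial distribution; the two-state chain below is a hypothetical example, and NumPy is assumed available.

```python
import numpy as np

# Hypothetical two-state homogeneous chain (states 0 and 1).
P = np.array([[0.9, 0.1],
              [0.4, 0.6]])     # transition matrix: P[i, j] = p_ij
init = np.array([0.5, 0.5])    # initial distribution P{xi(0) = i}

def trajectory_probability(path, P, init):
    # P{xi(k) = i_k, k = 0..t} = P{xi(0) = i_0} * prod_k p_{i_{k-1} i_k}
    prob = init[path[0]]
    for prev, cur in zip(path, path[1:]):
        prob *= P[prev, cur]
    return prob

p = trajectory_probability([0, 0, 1, 1], P, init)
# 0.5 * 0.9 * 0.1 * 0.6 = 0.027
```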

Along with the transition probabilities $ p _ {ij} $, in a Markov chain the $ t $- step transition probabilities $ p _ {ij} ( t) $ are also discussed:

$$ p _ {ij} ( t) = {\mathsf P} \{ \xi ( t _ {0} + t ) = j \mid \xi ( t _ {0} ) = i \} . $$

These transition probabilities satisfy the Kolmogorov–Chapman equation

$$ p _ {ij} ( t _ {1} + t _ {2} ) = \ \sum _ { k } p _ {ik} ( t _ {1} ) p _ {kj} ( t _ {2} ) . $$
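Since the $ t $-step probabilities $ p _ {ij} ( t) $ are the entries of the $ t $-th power of the transition matrix, the Kolmogorov–Chapman equation amounts to $ {\mathsf P} ^ {t _ {1} + t _ {2} } = {\mathsf P} ^ {t _ {1} } {\mathsf P} ^ {t _ {2} } $. A quick numerical check on a hypothetical two-state matrix (NumPy assumed):

```python
import numpy as np

P = np.array([[0.9, 0.1],      # hypothetical transition matrix
              [0.4, 0.6]])

def p_t(P, t):
    # the t-step transition probabilities p_ij(t) form the matrix power P^t
    return np.linalg.matrix_power(P, t)

# Kolmogorov-Chapman: p_ij(t1 + t2) = sum_k p_ik(t1) p_kj(t2)
lhs = p_t(P, 5)
rhs = p_t(P, 2) @ p_t(P, 3)
assert np.allclose(lhs, rhs)
```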

Using transition probabilities it is possible to give the following classification of states. Two states $ i $ and $ j $ are called communicating if there are $ t _ {1} , t _ {2} > 0 $ such that $ p _ {ij} ( t _ {1} ) > 0 $ and $ p _ {ji} ( t _ {2} ) > 0 $. A state $ k $ is called inessential if there is a state $ l $ such that $ p _ {kl} ( t _ {1} ) > 0 $ for some $ t _ {1} \geq 1 $ and $ p _ {lk} ( t) \equiv 0 $ for all $ t \in \mathbf N $. All remaining states are called essential. Thus, the complete state space of a Markov chain decomposes into inessential and essential states. The set of all essential states decomposes into disjoint classes of communicating states, such that any two states from one class communicate with each other, and for any two states $ i , j $ from distinct classes, $ p _ {ij} ( t) \equiv 0 $, $ p _ {ji} ( t) \equiv 0 $. A Markov chain for which all states belong to one class of communicating states is called non-decomposable (cf. Markov chain, non-decomposable); otherwise the Markov chain is called decomposable (see Markov chain, decomposable). If the state space is finite, then its decomposition into these classes determines, to a great extent, the asymptotic properties of the Markov chain. For example, for a finite non-decomposable Markov chain the limit

$$ \tag{3 } \lim\limits _ {T \rightarrow \infty } \ \frac{1}{T + 1 } \sum_{t=0}^ { T } p _ {ij} ( t) = p _ {j} , $$

where $ \sum _ {j} p _ {j} = 1 $, always exists. If, in addition, the Markov chain is aperiodic, that is, if for some $ t _ {0} $, for all $ t \geq t _ {0} $ and all states $ i $ and $ j $, $ p _ {ij} ( t) > 0 $ (see also Markov chain, periodic), then there is the stronger result

$$ \tag{4 } \lim\limits _ {t \rightarrow \infty } \ p _ {ij} ( t) = p _ {j} $$

(see also Markov chain, ergodic).
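For an aperiodic non-decomposable finite chain, the limit (4) can be observed numerically: all rows of $ {\mathsf P} ^ {t} $ converge to the same stationary vector $ ( p _ {j} ) $. The two-state matrix below is a hypothetical example (NumPy assumed).

```python
import numpy as np

P = np.array([[0.9, 0.1],      # hypothetical aperiodic, non-decomposable chain
              [0.4, 0.6]])

Pt = np.linalg.matrix_power(P, 100)
pi = Pt[0]                         # every row approximates (p_j) in (4)
assert np.allclose(Pt[0], Pt[1])   # the limit is independent of the start state
assert np.allclose(pi @ P, pi)     # stationarity: pi P = pi
# for this matrix the stationary distribution is (0.8, 0.2)
```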

If the state space of a Markov chain is countable, then its asymptotic properties depend on more subtle properties of the classes of communicating states. The series

$$ \tag{5 } \sum _ { t } p _ {ii} ( t) $$

simultaneously diverge or converge for all states of a given class. A class of states is called recurrent if for any state $ i $ of the class the series (5) diverges, and non-recurrent if (5) converges. In a recurrent class a Markov chain returns to any of its states with probability 1; in a non-recurrent class the probability of return is less than 1. If the mean return time in a recurrent class is finite, then the class is called positive; otherwise it is called zero (see Markov chain, class of positive states of a; Markov chain, class of zero states of a). If $ i $ and $ j $ belong to the same class of positive states, then the limit (3) exists, and in the aperiodic case the limit (4) exists. If $ j $ belongs to a class of zero states or is inessential, then $ p _ {ij} ( t) \rightarrow 0 $ as $ t \rightarrow \infty $.
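The decomposition of the state space into inessential states and communicating classes described above is a reachability computation on the directed graph with an edge $ i \rightarrow j $ whenever $ p _ {ij} > 0 $. A sketch for a hypothetical four-state chain (NumPy assumed):

```python
import numpy as np

# Hypothetical 4-state chain: state 0 leaks into the class {1, 2} and is
# never revisited (inessential); state 3 is absorbing.
P = np.array([[0.5, 0.5, 0.0, 0.0],
              [0.0, 0.3, 0.7, 0.0],
              [0.0, 0.6, 0.4, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

n = len(P)
reach = (P > 0) | np.eye(n, dtype=bool)   # reachable in at most one step
for _ in range(n):                        # transitive closure by squaring
    reach = reach | (reach.astype(int) @ reach.astype(int) > 0)

communicate = reach & reach.T             # i and j reach each other
# i is essential iff every state reachable from i also reaches i back
essential = [i for i in range(n)
             if all(reach[j, i] for j in range(n) if reach[i, j])]
classes = {frozenset(map(int, np.flatnonzero(communicate[i])))
           for i in essential}
```

Here `essential` comes out as `[1, 2, 3]`, and `classes` contains the two communicating classes `{1, 2}` and `{3}`.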

Let $ f ( \cdot ) $ be a real-valued function defined on the states of a Markov chain $ \xi ( t) $. If the Markov chain is non-decomposable and its states form a positive class, then for the sum

$$ \eta _ {t} = \ \sum _ {u = 0 } ^ { t } f ( \xi ( u) ) $$

the central limit theorem holds:

$$ \tag{6 } \lim\limits _ {t \rightarrow \infty } \ {\mathsf P} \left \{ \frac{\eta _ {t} - A t }{\sqrt Bt } < x \right \} = \frac{1}{\sqrt {2 \pi } } \int\limits _ {- \infty } ^ { x } e ^ {- u ^ {2} /2 } d u $$

for some $ A $ and $ B > 0 $. For (6) to hold it is sufficient to require in addition that $ \mathop{\rm Var} {\eta _ {t} } \sim B t $, $ t \rightarrow \infty $, and $ B > 0 $.
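A simulation sketch (hypothetical chain and function $ f $, NumPy assumed): by ergodicity the centring constant in (6) is $ A = \sum _ {j} p _ {j} f ( j) $, and $ \eta _ {t} / t $ approaches it for large $ t $; verifying the full normal limit would require many independent replications, which is omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)
P = np.array([[0.9, 0.1],          # hypothetical non-decomposable positive chain
              [0.4, 0.6]])
f = np.array([1.0, -1.0])          # a function on the states
pi = np.array([0.8, 0.2])          # stationary distribution of this P
A = pi @ f                         # centring constant in (6); here 0.6

T = 50_000
state, eta = 0, 0.0
for _ in range(T):
    eta += f[state]
    state = rng.choice(2, p=P[state])
# eta / T is close to A for large T
```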

If $ t $ takes any value in $ [ 0 , \infty ) $, then the chain is called a continuous-time Markov chain, defined in a similar way using the Markov property (1). Usually, for a continuous-time Markov chain one additionally requires the existence of finite right derivatives $ d p _ {ij} ( t) / d t \mid_{t=0}= q _ {ij} $, called the transition probability densities. For a finite continuous-time Markov chain, from the Kolmogorov–Chapman equation one obtains the Kolmogorov differential equations

$$ \tag{7 } \frac{d p _ {ij} ( t) }{d t } = \ \sum _ { k } p _ {ik} ( t) q _ {kj} $$

and

$$ \tag{8 } \frac{d p _ {ij} ( t) }{d t } = \ \sum _ { k } q _ {ik} p _ {kj} ( t) , $$

with the initial conditions $ p _ {ij} ( 0) = \delta _ {ij} $, where $ \delta _ {ij} $ is the Kronecker symbol. Under additional assumptions (7) and (8) also hold for countable Markov chains.
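Equations (7) and (8) with the initial conditions $ p _ {ij} ( 0) = \delta _ {ij} $ are solved by the matrix exponential, $ p _ {ij} ( t) = ( e ^ {tQ} ) _ {ij} $. A numerical sketch for a hypothetical two-state generator, comparing a power-series matrix exponential with Euler integration of the forward equation (7) (NumPy assumed):

```python
import numpy as np

# Hypothetical generator of a two-state continuous-time chain; each row of
# transition densities q_ij sums to zero.
Q = np.array([[-2.0,  2.0],
              [ 1.0, -1.0]])

def expm(A, terms=30):
    # matrix exponential via its power series (adequate for small matrices)
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

t = 0.5
P_exact = expm(t * Q)              # p_ij(t) = (e^{tQ})_ij solves (7) and (8)

# Euler integration of the forward equation (7): dP/dt = P Q, P(0) = I
P_num, h = np.eye(2), 1e-4
for _ in range(int(t / h)):
    P_num = P_num + h * (P_num @ Q)
```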

If a continuous-time Markov chain has a stationary distribution $ {\mathsf P} \{ \xi ( t) = i \} = p _ {i} $ (that is, the distribution of $ \xi ( t) $ does not depend on the time $ t $), then $ \{ p _ {i} \} $ satisfies the system of linear equations

$$ \tag{9 } \left . \begin{array}{cl} \sum p _ {i} = 1 , &{} \\ \sum _ { i } p _ {i} q _ {ij} = 0 , & j = 1 , 2 ,\dots . \\ \end{array} \right \} $$
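System (9) can be solved numerically by stacking the normalization condition onto the transposed generator; the two-state $ Q $ below is a hypothetical example (NumPy assumed).

```python
import numpy as np

Q = np.array([[-2.0,  2.0],        # hypothetical generator, rows sum to zero
              [ 1.0, -1.0]])

# (9): sum_i p_i q_ij = 0 for every j, together with sum_i p_i = 1
n = len(Q)
A = np.vstack([Q.T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
p, *_ = np.linalg.lstsq(A, b, rcond=None)
# for this Q the stationary distribution is (1/3, 2/3)
```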

Markov chains have been widely used in the solution of various applied problems. For example, in queueing theory, for the calculation of the distribution of the number of busy lines in an $ M \mid M \mid n $ system with refusals (that is, in a system comprising $ n $ lines with Poisson flow of requests and exponential law of service time) a finite continuous-time Markov chain with states $ 0 \dots n $ and the following transition probability densities is used:

$$ q _ {i,i+1} = \lambda ,\ \ 0 \leq i < n ; \ q _ {i,i-1} = \ i \mu ,\ 1 \leq i \leq n ; $$

$$ q _ {ii} = - ( \lambda + i \mu ) ,\ 0 \leq i < n ; \ q _ {nn} = - n \mu ; $$

$$ q _ {ij} = 0 \ \textrm{ if } | i - j | > 1 $$

(here, $ \lambda $ is the intensity of the Poisson flow of requests and $ \mu ^ {-1} $ is the mean service time). Using (9) the stationary distribution of the number of busy lines in this case can be determined as:

$$ p _ {i} = \ \frac{\left ( \frac \lambda \mu \right ) ^ {i} \frac{1}{i!} }{\sum_{k=0}^ { n } \left ( \frac \lambda \mu \right ) ^ {k} \frac{1}{k!} } ,\ \ i = 0 \dots n , $$

$$ p _ {i} = 0 ,\ i > n , $$

which is called the Erlang distribution.
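The closed form above can be cross-checked against system (9) for the birth-death generator with $ q _ {i,i+1} = \lambda $, $ q _ {i,i-1} = i \mu $; the parameter values below are hypothetical (NumPy assumed).

```python
import numpy as np
from math import factorial

def erlang_distribution(lam, mu, n):
    # stationary probabilities p_i proportional to (lam/mu)^i / i!, i = 0..n
    w = np.array([(lam / mu) ** i / factorial(i) for i in range(n + 1)])
    return w / w.sum()

# Build the generator of the M|M|n loss system and solve (9) directly.
lam, mu, n = 3.0, 1.5, 4           # hypothetical intensities and line count
Q = np.zeros((n + 1, n + 1))
for i in range(n):
    Q[i, i + 1] = lam              # arrival while i < n lines are busy
for i in range(1, n + 1):
    Q[i, i - 1] = i * mu           # one of i busy lines completes service
Q -= np.diag(Q.sum(axis=1))        # diagonal makes each row sum to zero

A = np.vstack([Q.T, np.ones(n + 1)])
b = np.concatenate([np.zeros(n + 1), [1.0]])
p, *_ = np.linalg.lstsq(A, b, rcond=None)
assert np.allclose(p, erlang_distribution(lam, mu, n))
```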

See also Markov chain, generalized; Markov chain, recurrent; Absorbing state; Stochastic matrix; Transition with prohibitions.

References

[M] A.A. Markov, "Issledovanie zamechatel’nogo sluchaya zavisimyh ispytanij [Investigation of a remarkable example of dependent trials]", Izv. Peterb. Akad. Nauk (6), 1 : 3 (1907) pp. 61–80 (Russian) Zbl 38.0274.04
[D] J.L. Doob, "Stochastic processes", Wiley (1953) MR1570654 MR0058896 Zbl 0053.26802
[C] K.L. Chung, "Markov chains with stationary transition probabilities", Springer (1967) MR0217872 Zbl 0146.38401
[F] W. Feller, "An introduction to probability theory and its applications", 1, Wiley (1966)

This article was adapted from an original article by B.A. Sevast'yanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article