# Difference between revisions of "Markov chain"


2010 Mathematics Subject Classification: Primary: 60J10 Secondary: 60J27 [MSN][ZBL]

A Markov process with finite or countable state space. The theory of Markov chains was created by A.A. Markov who, in 1907, initiated the study of sequences of dependent trials and related sums of random variables [M].

Let the state space be the set of natural numbers $\mathbf N$ or a finite subset thereof. Let $\xi ( t)$ be the state of a Markov chain at time $t$. The fundamental property of a Markov chain is the Markov property, which for a discrete-time Markov chain (that is, when $t$ takes only non-negative integer values) is defined as follows: For any $j \in \mathbf N$, any non-negative integers $t _ {1} < \dots < t _ {k} < t$ and any natural numbers $i _ {1} , \dots, i _ {k}$, the equality

$$\tag{1 } {\mathsf P} \{ \xi ( t) = j \mid \xi ( t _ {1} ) = i _ {1} , \dots, \xi ( t _ {k} ) = i _ {k} \} = {\mathsf P} \{ \xi ( t) = j \mid \xi ( t _ {k} ) = i _ {k} \}$$

holds.

The Markov property (1) can be reformulated as follows. The time $t$ and the related events of the form $\{ \xi ( t) = j \}$ constitute the "present" of the process; events determined by the values $\xi ( u)$, $u < t$, form its "past", and events determined by $\xi ( u)$, $u > t$, its "future". Then (1) is equivalent to the following: for any time $t$ and fixed "present" $\xi ( t) = j$, any "past" event $A$ and any "future" event $B$ are conditionally independent given the present, that is,

$${\mathsf P} \{ A \cap B \mid \xi ( t) = j \} = {\mathsf P} \{ A \mid \xi ( t) = j \} \, {\mathsf P} \{ B \mid \xi ( t) = j \} .$$

In the probabilistic description of a Markov chain $\xi ( t)$ the transition probabilities

$$\tag{2 } {\mathsf P} \{ \xi ( t + 1 ) = j \mid \xi ( t) = i \}$$

play a major role. When (2) does not depend on $t$, the Markov chain is called homogeneous (in time); otherwise it is called non-homogeneous. Only homogeneous Markov chains are considered below. Let

$$p _ {ij} = {\mathsf P} \{ \xi ( t + 1 ) = j \mid \xi ( t) = i \} .$$

The matrix ${\mathsf P} = \| p _ {ij} \|$ with entries $p _ {ij}$ is called the matrix of transition probabilities. The probability of an arbitrary trajectory $\xi ( k) = i _ {k}$, $k = 0 \dots t$, is given by the transition probabilities $p _ {ij}$ and the initial distribution ${\mathsf P} \{ \xi ( 0) = i \}$ as follows:

$${\mathsf P} \{ \xi ( k) = i _ {k} ,\ k = 0 , \dots, t \} = {\mathsf P} \{ \xi ( 0) = i _ {0} \} \prod _ {k = 1} ^ {t} p _ {i _ {k - 1} i _ {k} } .$$
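As an illustrative sketch (an aside, not from the original article), the product formula can be checked numerically; the three-state transition matrix `P` and initial distribution `init` below are invented for the example:

```python
from itertools import product

# Hypothetical 3-state chain (matrix and initial law invented for the demo).
P = [[0.5, 0.3, 0.2],
     [0.1, 0.6, 0.3],
     [0.4, 0.4, 0.2]]
init = [0.2, 0.5, 0.3]  # initial distribution P{xi(0) = i}

def path_prob(path):
    """P{xi(k) = path[k], k = 0..t} via the product formula above."""
    pr = init[path[0]]
    for a, b in zip(path, path[1:]):
        pr *= P[a][b]
    return pr

# Sanity check: the probabilities of all length-4 trajectories sum to 1.
total = sum(path_prob(w) for w in product(range(3), repeat=4))

# Markov property (1): conditioning a trajectory on its prefix leaves
# only the last one-step transition probability.
cond = path_prob((0, 1, 2, 1)) / path_prob((0, 1, 2))
```

Here `cond` comes out equal (up to rounding) to `P[2][1]`, illustrating that the earlier states drop out of the conditional probability.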

Along with the one-step transition probabilities $p _ {ij}$, the $t$-step transition probabilities $p _ {ij} ( t)$ of a Markov chain are also considered:

$$p _ {ij} ( t) = {\mathsf P} \{ \xi ( t _ {0} + t ) = j \mid \xi ( t _ {0} ) = i \} .$$

These transition probabilities satisfy the Kolmogorov–Chapman equation

$$p _ {ij} ( t _ {1} + t _ {2} ) = \ \sum _ { k } p _ {ik} ( t _ {1} ) p _ {kj} ( t _ {2} ) .$$
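In matrix form the Kolmogorov–Chapman equation says that the $t$-step matrix is the $t$-th power of ${\mathsf P}$, so that $P(t_1 + t_2) = P(t_1) P(t_2)$. A minimal numerical check, assuming a hypothetical two-state transition matrix:

```python
# Hypothetical two-state transition matrix (invented for the demo).
P = [[0.9, 0.1],
     [0.4, 0.6]]

def matmul(A, B):
    """Plain matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def matpow(A, t):
    """t-step transition matrix; p_ij(0) = delta_ij."""
    R = [[float(i == j) for j in range(len(A))] for i in range(len(A))]
    for _ in range(t):
        R = matmul(R, A)
    return R

lhs = matpow(P, 5)                         # p_ij(t1 + t2) with t1 = 3, t2 = 2
rhs = matmul(matpow(P, 3), matpow(P, 2))   # sum_k p_ik(t1) p_kj(t2)
```

The two sides agree entrywise up to floating-point rounding.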

Using transition probabilities it is possible to give the following classification of states. Two states $i$ and $j$ are called communicating if there are $t _ {1} , t _ {2} > 0$ such that $p _ {ij} ( t _ {1} ) > 0$ and $p _ {ji} ( t _ {2} ) > 0$. A state $k$ is called inessential if there is a state $l$ such that $p _ {kl} ( t _ {1} ) > 0$ for some $t _ {1} \geq 1$ while $p _ {lk} ( t) \equiv 0$ for all $t \in \mathbf N$. All remaining states are called essential. Thus, the state space of a Markov chain decomposes into the set of inessential states and the set of essential states. The set of all essential states decomposes into disjoint classes of communicating states, such that any two states from one class communicate with each other, while for any two states $i , j$ from distinct classes, $p _ {ij} ( t) \equiv 0$ and $p _ {ji} ( t) \equiv 0$.

A Markov chain for which all states belong to one class of communicating states is called non-decomposable (cf. Markov chain, non-decomposable); otherwise the Markov chain is called decomposable (see Markov chain, decomposable). If the state space is finite, then its decomposition into these classes determines, to a great extent, the asymptotic properties of the Markov chain. For example, for a finite non-decomposable Markov chain the limit

$$\tag{3 } \lim\limits _ {T \rightarrow \infty } \ \frac{1}{T + 1 } \sum _ {t = 0} ^ {T} p _ {ij} ( t) = p _ {j} ,$$

where $\sum _ {j} p _ {j} = 1$, always exists. If, in addition, the Markov chain is aperiodic, that is, if for some $t _ {0}$, for all $t \geq t _ {0}$ and all states $i$ and $j$, $p _ {ij} ( t) > 0$ (see also Markov chain, periodic), then there is the stronger result

$$\tag{4 } \lim\limits _ {t \rightarrow \infty } \ p _ {ij} ( t) = p _ {j}$$

(see also Markov chain, ergodic).
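The convergence in (4) can be observed numerically for a small aperiodic chain (transition matrix invented for the demo); the rows of ${\mathsf P} ^ {t}$ approach a common limit distribution regardless of the starting state:

```python
# Hypothetical two-state chain; its stationary distribution is (0.8, 0.2).
P = [[0.9, 0.1],
     [0.4, 0.6]]

def step(row):
    """One application of the transition matrix: row -> row P."""
    return [sum(row[i] * P[i][j] for i in range(2)) for j in range(2)]

# Rows of P^t started from either state converge to the same limit
# (p_0, p_1), independent of the initial state, as in (4).
row0, row1 = [1.0, 0.0], [0.0, 1.0]
for _ in range(200):
    row0, row1 = step(row0), step(row1)
```

For this matrix the second eigenvalue is $0.5$, so the convergence is geometric and 200 steps already agree with the limit to machine precision.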

If the state space of a Markov chain is countable, then its asymptotic properties depend on more subtle properties of the classes of communicating states. The series

$$\tag{5 } \sum _ { t } p _ {ii} ( t)$$

either converges for all states of a given class or diverges for all of them. A class of states is called recurrent if the series (5) diverges for any state $i$ of the class, and non-recurrent if (5) converges. In a recurrent class a Markov chain returns to any of its states with probability 1; in a non-recurrent class the probability of return is less than 1. If the mean return time in a recurrent class is finite, then the class is called positive; otherwise it is called zero (see Markov chain, class of positive states of a; Markov chain, class of zero states of a). If $i$ and $j$ belong to the same class of positive states, then the limit (3) exists, and in the aperiodic case the limit (4) exists. If $j$ belongs to a class of zero states or is inessential, then $p _ {ij} ( t) \rightarrow 0$ as $t \rightarrow \infty$.
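As a numerical aside (not from the article): for the symmetric simple random walk on $\mathbf Z$ one has $p _ {00} ( 2t ) = \binom{2t}{t} 4 ^ {-t} \sim ( \pi t ) ^ {-1/2}$, so the series (5) diverges and the class is recurrent (and, in fact, zero). The sketch below computes the return probabilities by the recursion $p _ {00} ( 2t ) = p _ {00} ( 2t - 2 ) \cdot ( 2t - 1 ) / ( 2t )$ and shows that partial sums roughly double when the horizon is quadrupled, consistent with $\sqrt{T}$ growth:

```python
def partial_sum(T):
    """Sum of p_00(2t), t = 1..T, for the symmetric walk on the integers."""
    s, p = 0.0, 1.0                    # p_00(0) = 1
    for t in range(1, T + 1):
        p *= (2 * t - 1) / (2 * t)     # p_00(2t) from p_00(2t - 2)
        s += p
    return s

s1, s4 = partial_sum(10_000), partial_sum(40_000)
ratio = s4 / s1   # tends to 2: the partial sums grow like sqrt(T)
```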

Let $f ( \cdot )$ be a real-valued function defined on the states of a Markov chain $\xi ( t)$. If the Markov chain is non-decomposable and its states form a positive class, then for the sum

$$\eta _ {t} = \ \sum _ {u = 0 } ^ { t } f ( \xi ( u) )$$

the central limit theorem holds:

$$\tag{6 } \lim\limits _ {t \rightarrow \infty } \ {\mathsf P} \left \{ \frac{\eta _ {t} - A t }{\sqrt {B t} } < x \right \} = \frac{1}{\sqrt {2 \pi } } \int\limits _ {- \infty } ^ { x } e ^ {- u ^ {2} /2 } \, d u$$

for some $A$ and $B > 0$. For (6) to hold it suffices to require, in addition, that $\mathop{\rm Var} \eta _ {t} \sim B t$ as $t \rightarrow \infty$ with $B > 0$.
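A Monte Carlo aside (chain and function invented for the demo): here the centering constant is $A = \sum _ {j} p _ {j} f ( j )$, the stationary mean of $f$, and $\eta _ {t} / t \rightarrow A$. A full check of the Gaussian limit in (6) would need many independent replications; the sketch below only verifies the ergodic average:

```python
import random

random.seed(1)                   # fixed seed for reproducibility
P = [[0.9, 0.1],                 # hypothetical chain; stationary law (0.8, 0.2)
     [0.4, 0.6]]
f = [1.0, 5.0]                   # function on the states
A = 0.8 * f[0] + 0.2 * f[1]      # stationary mean of f, = 1.8

state, eta, t = 0, 0.0, 200_000
for _ in range(t):
    eta += f[state]
    state = 0 if random.random() < P[state][0] else 1

mean = eta / t                   # close to A for large t
```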

If $t$ takes any value in $[ 0 , \infty )$, then the chain is called a continuous-time Markov chain, defined in a similar way using the Markov property (1). Usually, for a continuous-time Markov chain one additionally requires the existence of the finite right derivatives $q _ {ij} = \left . d p _ {ij} ( t) / d t \right | _ {t = 0}$, called the transition probability densities. For a finite continuous-time Markov chain, the Kolmogorov–Chapman equation yields the Kolmogorov differential equations

$$\tag{7 } \frac{d p _ {ij} ( t) }{d t } = \ \sum _ { k } p _ {ik} ( t) q _ {kj}$$

and

$$\tag{8 } \frac{d p _ {ij} ( t) }{d t } = \ \sum _ { k } q _ {ik} p _ {kj} ( t) ,$$

with the initial conditions $p _ {ij} ( 0) = \delta _ {ij}$, where $\delta _ {ij}$ is the Kronecker symbol. Under additional assumptions (7) and (8) also hold for countable Markov chains.
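A sketch integrating the forward system (7) by Euler's method for a hypothetical two-state chain with densities $q _ {01} = a$, $q _ {10} = b$ (so $q _ {00} = - a$, $q _ {11} = - b$). For this chain $p _ {00} ( t )$ has the closed form $b / ( a + b ) + a / ( a + b ) \, e ^ {- ( a + b ) t }$, which the numerical solution should reproduce:

```python
import math

a, b = 2.0, 3.0                    # invented transition densities
Q = [[-a, a],
     [b, -b]]                      # rows of Q sum to zero

h, T = 1e-4, 1.0
P = [[1.0, 0.0], [0.0, 1.0]]       # initial condition p_ij(0) = delta_ij
for _ in range(int(T / h)):
    # dP/dt = P Q, equation (7); one explicit Euler step
    dP = [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    P = [[P[i][j] + h * dP[i][j] for j in range(2)] for i in range(2)]

exact = b / (a + b) + a / (a + b) * math.exp(-(a + b) * T)  # p_00(T)
```

Since every row of $Q$ sums to zero, each Euler step preserves the row sums of $P ( t )$, so the rows remain probability distributions.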

If a continuous-time Markov chain has a stationary distribution ${\mathsf P} \{ \xi ( t) = i \} = p _ {i}$ (that is, the distribution of $\xi ( t)$ does not depend on the time $t$), then $\{ p _ {i} \}$ satisfies the system of linear equations

$$\tag{9 } \sum _ { i } p _ {i} = 1 , \qquad \sum _ { i } p _ {i} q _ {ij} = 0 , \quad j = 1 , 2 , \dots .$$

Markov chains have been widely used in the solution of various applied problems. For example, in queueing theory, the distribution of the number of busy lines in an $M | M | n$ system with refusals (that is, a system of $n$ lines with a Poisson flow of requests and exponentially distributed service times) is calculated by means of a finite continuous-time Markov chain with states $0 , \dots, n$ and the following transition probability densities:

$$q _ {i , i + 1} = \lambda ,\ \ 0 \leq i < n ; \qquad q _ {i , i - 1} = i \mu ,\ \ 1 \leq i \leq n ;$$

$$q _ {ii} = - ( \lambda + i \mu ) ,\ \ 0 \leq i < n ; \qquad q _ {nn} = - n \mu ;$$

$$q _ {ij} = 0 \ \textrm{ if } | i - j | > 1$$

(here, $\lambda$ is the intensity of the Poisson flow of requests and $\mu ^ {-1}$ is the mean service time). Using (9), the stationary distribution of the number of busy lines can be determined as:

$$p _ {i} = \ \frac{\left ( \frac \lambda \mu \right ) ^ {i} \frac{1}{i!} }{\sum _ {k = 0} ^ {n} \left ( \frac \lambda \mu \right ) ^ {k} \frac{1}{k!} } ,\ \ i = 0 , \dots, n ,$$

$$p _ {i} = 0 ,\ i > n ,$$

which is called the Erlang distribution.
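The Erlang formula can be verified directly against the stationary system (9); a sketch with invented parameters $\lambda = 3$, $\mu = 1$, $n = 5$:

```python
from math import factorial

lam, mu, n = 3.0, 1.0, 5           # invented parameters for the demo

# Erlang distribution: p_i proportional to (lam/mu)^i / i!
w = [(lam / mu) ** i / factorial(i) for i in range(n + 1)]
p = [wi / sum(w) for wi in w]

def q(i, j):
    """Transition probability densities of the M|M|n loss system."""
    if j == i + 1 and i < n:
        return lam
    if j == i - 1 and i >= 1:
        return i * mu
    if j == i:
        return -(lam + i * mu) if i < n else -n * mu
    return 0.0

# Stationary equations (9): sum_i p_i q_ij = 0 for every j, sum_i p_i = 1.
residuals = [sum(p[i] * q(i, j) for i in range(n + 1)) for j in range(n + 1)]
```

All residuals vanish up to floating-point rounding, confirming that the Erlang distribution is stationary for this chain.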

#### References

[M] A.A. Markov, "Issledovanie zamechatel'nogo sluchaya zavisimyh ispytanij [Investigation of a remarkable case of dependent trials]", Izv. Peterb. Akad. Nauk (6), 1:3 (1907) pp. 61–80 (In Russian) Zbl 38.0274.04

[D] J.L. Doob, "Stochastic processes", Wiley (1953) MR0058896 Zbl 0053.26802

[C] K.L. Chung, "Markov chains with stationary transition probabilities", Springer (1967) MR0217872 Zbl 0146.38401

[F] W. Feller, "An introduction to probability theory and its applications", 1, Wiley (1966)

How to Cite This Entry: Markov chain. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Markov_chain&oldid=25935

This article was adapted from an original article by B.A. Sevast'yanov (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.