Transition probabilities
The probabilities of transition of a [[Markov chain|Markov chain]] $\xi(t)$ from a state $i$ into a state $j$ in a time interval $[s, t]$:

$$
p_{ij}(s, t) = {\mathsf P} \{ \xi(t) = j \mid \xi(s) = i \}, \qquad s < t.
$$

In view of the basic property of a Markov chain, for any states $i, j \in S$ (where $S$ is the set of all states of the chain) and any $s < t < u$,

$$
p_{ij}(s, u) = \sum_{k \in S} p_{ik}(s, t) \, p_{kj}(t, u).
$$
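
Numerically, this identity says that the matrix of transition probabilities over $[s, u]$ is the product of the matrices over $[s, t]$ and $[t, u]$. A minimal sketch in Python (the two stochastic matrices are illustrative assumptions, not taken from the article):

```python
import numpy as np

# Hypothetical transition matrices over [s, t] and [t, u] for a
# three-state chain; each row sums to 1.
P_st = np.array([[0.7, 0.2, 0.1],
                 [0.3, 0.5, 0.2],
                 [0.1, 0.4, 0.5]])
P_tu = np.array([[0.6, 0.3, 0.1],
                 [0.2, 0.6, 0.2],
                 [0.3, 0.3, 0.4]])

# Chapman-Kolmogorov: p_ij(s, u) = sum_k p_ik(s, t) p_kj(t, u),
# i.e. the matrix product of the two transition matrices.
P_su = P_st @ P_tu
assert np.allclose(P_su.sum(axis=1), 1.0)  # P_su is again stochastic
print(P_su)
```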

One usually considers homogeneous Markov chains, for which the transition probabilities $p_{ij}(s, t)$ depend on the length of $[s, t]$ but not on its position on the time axis:

$$
p_{ij}(s, t) = p_{ij}(t - s).
$$

For any states $i$ and $j$ of a homogeneous Markov chain with discrete time, the sequence $p_{ij}(n)$ has a Cesàro limit, i.e.

$$
\lim_{n \rightarrow \infty} \frac{1}{n} \sum_{k=1}^{n} p_{ij}(k) \geq 0.
$$

Subject to certain additional conditions (and also for chains with continuous time), the limit exists also in the usual sense. See [[Markov chain, ergodic|Markov chain, ergodic]]; [[Markov chain, class of positive states of a|Markov chain, class of positive states of a]].
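
For instance, in a chain with period 2 the ordinary limit of $p_{ij}(n)$ fails to exist while the Cesàro limit does. A small sketch, assuming the two-state deterministic flip chain:

```python
import numpy as np

# Two-state chain that flips state at every step:
# p_00(n) alternates 1, 0, 1, 0, ..., so lim p_00(n) does not exist.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

N = 10_000
P_k = np.eye(2)
cesaro = np.zeros((2, 2))
for _ in range(N):
    P_k = P_k @ P          # k-step transition matrix P^k
    cesaro += P_k
cesaro /= N                # (1/N) * sum_{k=1}^{N} P^k

print(cesaro)              # approaches [[0.5, 0.5], [0.5, 0.5]]
```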

The transition probabilities $p_{ij}(t)$ for a Markov chain with discrete time are determined by the values of $p_{ij}(1)$, $i, j \in S$; for any $t > 0$, $i \in S$,

$$
\sum_{j \in S} p_{ij}(t) = 1.
$$

In the case of Markov chains with continuous time it is usually assumed that the transition probabilities satisfy the following additional conditions: all the $p_{ij}(t)$ are measurable as functions of $t \in (0, \infty)$,

$$
\lim_{t \downarrow 0} p_{ij}(t) = 0 \quad (i \neq j), \qquad \lim_{t \downarrow 0} p_{ii}(t) = 1, \qquad i, j \in S.
$$

Under these assumptions the following transition rates exist:

$$
\lambda_{ij} = \lim_{t \downarrow 0} \frac{1}{t} \left( p_{ij}(t) - p_{ij}(0) \right) \leq \infty, \qquad i, j \in S; \tag{1}
$$

if all the $\lambda_{ij}$ are finite and if $\sum_{j \in S} \lambda_{ij} = 0$, $i \in S$, then the $p_{ij}(t)$ satisfy the Kolmogorov–Chapman system of differential equations

$$
p_{ij}'(t) = \sum_{k \in S} \lambda_{ik} p_{kj}(t), \qquad
p_{ij}'(t) = \sum_{k \in S} \lambda_{kj} p_{ik}(t) \tag{2}
$$

with the initial conditions $p_{ii}(0) = 1$, $p_{ij}(0) = 0$, $i \neq j$, $i, j \in S$ (see also [[Kolmogorov equation|Kolmogorov equation]]; [[Kolmogorov–Chapman equation|Kolmogorov–Chapman equation]]).
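
For a finite state space, both systems in (2) with these initial conditions are solved by the matrix exponential $P(t) = e^{t \Lambda}$, where $\Lambda = (\lambda_{ij})$. A sketch under an assumed $3 \times 3$ rate matrix (off-diagonal entries non-negative, rows summing to zero):

```python
import numpy as np
from scipy.linalg import expm

# Assumed rate matrix Lambda: lambda_ij >= 0 for i != j, rows sum to 0.
L = np.array([[-2.0,  1.5,  0.5],
              [ 1.0, -3.0,  2.0],
              [ 0.5,  0.5, -1.0]])

t = 0.7
P = expm(t * L)                          # P(t) = exp(t * Lambda), P(0) = I

assert np.allclose(P.sum(axis=1), 1.0)   # proper chain: rows sum to 1
# Both equations in (2): P'(t) = Lambda P(t) = P(t) Lambda
# (Lambda commutes with exp(t * Lambda)); check by finite differences.
h = 1e-6
dP = (expm((t + h) * L) - expm((t - h) * L)) / (2 * h)
assert np.allclose(dP, L @ P, atol=1e-5)
assert np.allclose(dP, P @ L, atol=1e-5)
```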

If a Markov chain is specified by means of the transition rates (1), then the transition probabilities $p_{ij}(t)$ satisfy the conditions

$$
p_{ij}(t) \geq 0, \qquad \sum_{j \in S} p_{ij}(t) \leq 1, \qquad i, j \in S, \quad t > 0;
$$

chains for which $\sum_{j \in S} p_{ij}(t) < 1$ for certain $i \in S$ and $t > 0$ are called defective (in this case the solution to (2) is not unique); if $\sum_{j \in S} p_{ij}(t) = 1$ for all $i \in S$ and $t > 0$, the chain is called proper.

Example. The Markov chain $\xi(t)$ with set of states $\{0, 1, \dots\}$ and transition densities

$$
\lambda_{i, i+1} = -\lambda_{ii} = \lambda_i > 0, \qquad \lambda_{ij} = 0 \quad (i \neq j \neq i + 1)
$$

(i.e., a pure birth process) is defective if and only if

$$
\sum_{i=0}^{\infty} \frac{1}{\lambda_i} < \infty.
$$
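
As a hedged numerical illustration, take the assumed rates $\lambda_i = (i + 1)^2$, for which the series converges (to $\pi^2 / 6$), against $\lambda_i = i + 1$, for which it diverges:

```python
import math

def partial_sum(rate, n_terms=100_000):
    """Partial sum of sum_i 1/lambda_i for a pure birth process."""
    return sum(1.0 / rate(i) for i in range(n_terms))

# lambda_i = (i+1)^2: the series converges to pi^2/6 -> defective chain.
print(partial_sum(lambda i: (i + 1) ** 2), math.pi ** 2 / 6)

# lambda_i = i+1: the harmonic series diverges -> proper chain.
print(partial_sum(lambda i: i + 1))      # grows like log(n_terms)
```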

Let

$$
\tau_{0n} = \inf \{ t > 0 : \xi(t) = n \ (\xi(0) = 0) \},
$$

$$
\tau = \lim_{n \rightarrow \infty} \tau_{0n};
$$

then

$$
{\mathsf E} \tau = \sum_{i=0}^{\infty} \frac{1}{\lambda_i}
$$

and for ${\mathsf E} \tau < \infty$ one has ${\mathsf P} \{ \tau < \infty \} = 1$, i.e. the path of $\xi(t)$ "tends to infinity in a finite time with probability 1" (see also [[Branching processes, regularity of|Branching processes, regularity of]]).
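
Since $\tau$ is a sum of independent exponential holding times with means $1 / \lambda_i$, the identity for ${\mathsf E} \tau$ can be checked by simulation. A Monte Carlo sketch under the same assumed rates $\lambda_i = (i + 1)^2$:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_tau(n_states=2_000):
    """One realization of tau: the chain holds in state i for an
    Exp(lambda_i) time; states beyond n_states contribute negligibly."""
    rates = (np.arange(n_states) + 1.0) ** 2   # lambda_i = (i+1)^2
    return rng.exponential(1.0 / rates).sum()

samples = [sample_tau() for _ in range(5_000)]
print(np.mean(samples))        # Monte Carlo estimate of E[tau]
print(np.pi ** 2 / 6)          # sum_{i>=0} 1/lambda_i ~ 1.6449
```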

====References====

[1] K.L. Chung, "Markov chains with stationary transition probabilities", Springer (1967)

====Comments====

For additional references see also [[Markov chain|Markov chain]]; [[Markov process|Markov process]].

In (1), $\lambda_{ij} \geq 0$ if $i \neq j$ and $\lambda_{ii} \leq 0$.

====References====

[a1] M. Iosifescu, "Finite Markov processes and their applications", Wiley (1980)
[a2] D. Revuz, "Markov chains", North-Holland (1984)
This article was adapted from an original article by A.M. Zubkov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.