Markov chain, class of positive states of a

From Encyclopedia of Mathematics
Latest revision as of 16:46, 20 January 2024


A set $ K $ of states of a homogeneous Markov chain $ \xi ( t) $ with state space $ S $ such that the transition probabilities

$$ p _ {ij} ( t) = {\mathsf P} \{ \xi ( t) = j \mid \xi ( 0) = i \} $$

of $ \xi ( t) $ satisfy

$$ \sup _ { t } p _ {ij} ( t) > 0 \ \ \textrm{ for any } i , j \in K , $$

$ p _ {il} ( t) = 0 $ for any $ i \in K $, $ l \in S \setminus K $, $ t > 0 $, and

$$ {\mathsf E} \tau _ {ii} < \infty \ \textrm{ for any } i \in K , $$

where $ \tau _ {ii} $ is the return time to the state $ i $:

$$ \tau _ {ii} = \min \ \{ {t > 0 } : {\xi ( t) = i \mid \xi ( 0) = i } \} $$

for a discrete-time Markov chain, and

$$ \tau _ {ii} = \inf \ \{ {t > 0 } : {\xi ( t) = i \mid \xi ( 0) = i , \xi ( 0 + ) \neq i } \} $$

for a continuous-time Markov chain. When $ {\mathsf E} \tau _ {ii} = \infty $, $ K $ is called a zero class of states (class of zero states).
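For a finite chain whose states form a single positive class, the condition $ {\mathsf E} \tau _ {ii} < \infty $ can be checked numerically by first-step analysis. The following is a minimal sketch (the chain and the helper name `expected_return_time` are illustrative, not from the article): the mean hitting times $ h_j $ of state $ i $ solve $ h_j = 1 + \sum_k P_{jk} h_k $ for $ j \neq i $ with $ h_i = 0 $, and then $ {\mathsf E} \tau _ {ii} = 1 + \sum_k P_{ik} h_k $.

```python
# Illustrative two-state chain (not from the article); its stationary
# distribution is (1/2, 1/2), so E[tau_ii] = 1 / pi_i = 2 for both states.
P = [[0.5, 0.5],
     [0.5, 0.5]]

def expected_return_time(P, i, iters=10_000):
    """E[tau_ii]: solve h_j = 1 + sum_k P[j][k] h[k] (j != i, h_i = 0)
    by fixed-point iteration, then E[tau_ii] = 1 + sum_k P[i][k] h[k]."""
    n = len(P)
    h = [0.0] * n  # h[j] = expected time to first reach state i from state j
    for _ in range(iters):
        h = [0.0 if j == i else 1.0 + sum(P[j][k] * h[k] for k in range(n))
             for j in range(n)]
    return 1.0 + sum(P[i][k] * h[k] for k in range(n))

print(expected_return_time(P, 0))  # close to 2.0 = 1 / pi_0
```

Finiteness of this quantity for every $ i \in K $ is exactly what distinguishes a positive class from a zero class.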

States in the same positive class $ K $ have a number of common properties. For example, in the case of discrete time, for any $ i , j \in K $ the limit relation

$$ \lim\limits _ {n \rightarrow \infty } \ \frac{1}{n} \sum_{t=1}^ { n } p _ {ij} ( t) = \ p _ {j} ^ {*} > 0 $$
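The Cesàro average is needed because $ p _ {ij} ( t) $ itself need not converge when the class is periodic. A small sketch (chain and names assumed for illustration): for the period-2 chain that alternates deterministically between two states, $ p _ {01} ( t) $ oscillates between 1 and 0, yet the Cesàro mean converges to $ p _ {1} ^ {*} = 1/2 > 0 $.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

# Deterministic alternation between states 0 and 1: one positive class, period 2.
P = [[0.0, 1.0],
     [1.0, 0.0]]

def cesaro_mean(P, i, j, n):
    """(1/n) * sum over t = 1..n of p_ij(t), reading p_ij(t) off P^t."""
    total, Pt = 0.0, [row[:] for row in P]  # Pt holds P^t, starting at t = 1
    for _ in range(n):
        total += Pt[i][j]
        Pt = mat_mul(Pt, P)
    return total / n

print(cesaro_mean(P, 0, 1, 1000))  # 0.5
```

Here $ p _ {01} ( t) $ is 1 for odd $ t $ and 0 for even $ t $, so the running average settles at $ 1/2 $.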

holds; if

$$ d _ {i} = \max \ \{ {d } : { {\mathsf P} \{ \tau _ {ii} \ \textrm{ is divisible by } d \} = 1 } \} $$

is the period of state $ i $, then $ d _ {i} = d _ {j} $ for any $ i , j \in K $ and $ d $ is called the period of the class $ K $; for any $ i \in K $ the limit relation

$$ \lim\limits _ {t \rightarrow \infty } p _ {ii} ( t d ) = \ d p _ {i} ^ {*} > 0 $$
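Both the period and this limit can be checked on the same alternating chain (a sketch with assumed helper names): $ d = 2 $ and $ p _ {0} ^ {*} = 1/2 $, so $ p _ {00} ( t d ) = 1 $ for every $ t $, matching $ d p _ {0} ^ {*} = 1 $.

```python
from math import gcd

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[r][k] * B[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def mat_pow(P, t):
    n = len(P)
    R = [[float(r == c) for c in range(n)] for r in range(n)]  # identity
    for _ in range(t):
        R = mat_mul(R, P)
    return R

# Deterministic alternation between two states: period 2.
P = [[0.0, 1.0],
     [1.0, 0.0]]

def period(P, i, horizon=50):
    """d_i = gcd of the times t <= horizon with p_ii(t) > 0 (finite-horizon sketch)."""
    d = 0
    for t in range(1, horizon + 1):
        if mat_pow(P, t)[i][i] > 0:
            d = gcd(d, t)
    return d

d = period(P, 0)
print(d, mat_pow(P, 10 * d)[0][0])  # 2 1.0
```

Since return to a state can only occur at multiples of the period, sampling $ p _ {ii} $ along the subsequence $ t d $ is what makes the limit exist without Cesàro averaging.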

holds. A discrete-time Markov chain such that all its states form a single positive class of period 1 serves as an example of an ergodic Markov chain (cf. Markov chain, ergodic).

References

[1] K.L. Chung, "Markov chains with stationary transition probabilities" , Springer (1967)
[2] J.L. Doob, "Stochastic processes" , Wiley (1953)

Comments

Cf. also Markov chain, class of zero states of a for additional references.

How to Cite This Entry:
Markov chain, class of positive states of a. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Markov_chain,_class_of_positive_states_of_a&oldid=14075
This article was adapted from an original article by A.M. Zubkov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article