Semi-Markov process

From Encyclopedia of Mathematics
Jump to: navigation, search
(Importing text file)
 
m (tex encoded by computer)
 
Line 1: Line 1:
A [[Stochastic process|stochastic process]] $X(t)$ with a finite or countable set of states $N = \{ 1, 2, \dots \}$, having stepwise trajectories with jumps at times $0 < \tau_{1} < \tau_{2} < \dots$ and such that the values $X(\tau_{n})$ at its jump times form a [[Markov chain|Markov chain]] with transition probabilities

$$
p_{ij} = {\mathsf P} \{ X(\tau_{n}) = j \mid X(\tau_{n-1}) = i \} .
$$

The distributions of the jump times $\tau_{n}$ are described in terms of the distribution functions $F_{ij}(x)$ as follows:

$$
{\mathsf P} \{ \tau_{n} - \tau_{n-1} \leq x ,\ X(\tau_{n}) = j \mid X(\tau_{n-1}) = i \} = p_{ij} F_{ij}(x)
$$
  
 
(and, moreover, they are independent of the states of the process at earlier moments of time). If
 
  
$$
F_{ij}^{\prime}(x) = \lambda_{ij} e^{- \lambda_{ij} x } , \quad x \geq 0 ,
$$
  
for all $i, j \in N$, then the semi-Markov process $X(t)$ is a continuous-time Markov chain. If all the distributions degenerate to a point, the result is a discrete-time Markov chain.
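To make the mechanism concrete, the following minimal Python sketch simulates one trajectory from the data $(p_{ij}, F_{ij})$: at each jump the next state $j$ is drawn according to $p_{ij}$, and the sojourn time is then drawn from $F_{ij}$. The two-state kernel, the rates $\lambda_{ij}$ and the helper names <code>simulate_semi_markov</code> and <code>exp_sojourn</code> are hypothetical choices made only for this illustration.

<pre>
import random


def simulate_semi_markov(p, sample_sojourn, x0, t_max, seed=0):
    """Simulate one trajectory of a semi-Markov process X(t) up to time t_max.

    p[i][j]                   -- transition probabilities p_ij of the embedded chain
    sample_sojourn(i, j, rng) -- draws a sojourn time with distribution F_ij
    Returns the jump times tau_0 = 0 < tau_1 < ... and the states X(tau_n).
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, states = [0.0], [x0]
    while t < t_max:
        # pick the next state j of the embedded Markov chain with probability p_xj
        nxt = list(p[x])
        j = rng.choices(nxt, weights=[p[x][k] for k in nxt])[0]
        # conditional on jumping to j, the sojourn time has distribution F_xj
        t += sample_sojourn(x, j, rng)
        x = j
        times.append(t)
        states.append(x)
    return times, states


# Hypothetical two-state kernel with exponential sojourn times, i.e. the special
# case above in which X(t) is a continuous-time Markov chain.
p = {1: {2: 1.0}, 2: {1: 0.7, 2: 0.3}}
lam = {(1, 2): 2.0, (2, 1): 1.0, (2, 2): 1.0}      # rates lambda_ij
exp_sojourn = lambda i, j, rng: rng.expovariate(lam[(i, j)])

times, states = simulate_semi_markov(p, exp_sojourn, x0=1, t_max=10.0)
print(list(zip(times, states))[:5])
</pre>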
  
Semi-Markov processes provide a model for many processes in [[Queueing theory|queueing theory]] and [[Reliability theory|reliability theory]]. Related to semi-Markov processes are Markov renewal processes (see [[Renewal theory|Renewal theory]]), which describe the number of times the process $X(t)$ is in state $i \in N$ during the time $[0, t]$.
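The number of visits described here can be read off a simulated trajectory; the short sketch below reuses the hypothetical <code>times, states</code> output of the example above and counts the jump times up to $t$ at which the process enters a given state $i$.

<pre>
def visits(times, states, i, t):
    """Count the jump times tau_n <= t at which the trajectory enters state i."""
    return sum(1 for tau, x in zip(times[1:], states[1:]) if tau <= t and x == i)


# e.g. the number of entries into state 1 during [0, 10] for the trajectory above
print(visits(times, states, i=1, t=10.0))
</pre>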
  
 
In analytic terms, the investigation of semi-Markov processes and Markov renewal processes reduces to a system of integral equations — the renewal equations.
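For instance, in a typical formulation (added here only for orientation), writing the semi-Markov kernel as $Q_{ij}(t) = p_{ij} F_{ij}(t)$, such equations take the form

$$
f_{i}(t) = g_{i}(t) + \sum_{j \in N} \int_{0}^{t} f_{j}(t - s) \, dQ_{ij}(s) , \quad i \in N ,
$$

where the $g_{i}$ are known functions and the $f_{i}$ are the unknown characteristics of the process (cf. [a1], Chapt. 10).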
 
 
====References====

<table><TR><TD valign="top">[1]</TD> <TD valign="top">  V.S. Korolyuk,  A.F. Turbin,  "Semi-Markov processes and their applications" , Kiev  (1976)  (In Russian)</TD></TR></table>

====Comments====

====References====

<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  E. Cinlar,  "Introduction to stochastic processes" , Prentice-Hall  (1975)  pp. Chapt. 10</TD></TR></table>

How to Cite This Entry:
Semi-Markov process. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Semi-Markov_process&oldid=16357
This article was adapted from an original article by B.A. Sevast'yanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article