Markov property

From Encyclopedia of Mathematics
Revision as of 07:59, 6 June 2020


for a real-valued stochastic process $X(t)$, $t \in T \subset \mathbf{R}$

2020 Mathematics Subject Classification: Primary: 60Jxx

The property that for any set of times $t_1 < \dots < t_{n+1}$ from $T$ and any Borel set $B$,

$$ \tag{*} {\mathsf P} \{ X(t_{n+1}) \in B \mid X(t_n), \dots, X(t_1) \} = {\mathsf P} \{ X(t_{n+1}) \in B \mid X(t_n) \} $$
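For example (a standard illustration, not part of the original entry), a sequence of partial sums of independent random variables satisfies (*):

```latex
% Let X(n) = \xi_1 + \dots + \xi_n with \xi_1, \xi_2, \dots independent.
% Then X(n+1) = X(n) + \xi_{n+1}, and since \xi_{n+1} is independent of
% (X(1), \dots, X(n)),
\mathsf{P}\{ X(n+1) \in B \mid X(n), \dots, X(1) \}
  = \mathsf{P}\{ X(n) + \xi_{n+1} \in B \mid X(n), \dots, X(1) \}
  = \mathsf{P}\{ X(n) + \xi_{n+1} \in B \mid X(n) \},
% so the conditional law depends on the past only through X(n),
% which is exactly property (*).
```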

with probability 1; that is, the conditional probability distribution of $X(t_{n+1})$ given $X(t_n), \dots, X(t_1)$ coincides (almost surely) with the conditional distribution of $X(t_{n+1})$ given $X(t_n)$ alone. This can be interpreted as independence of the "future" $X(t_{n+1})$ and the "past" $(X(t_{n-1}), \dots, X(t_1))$ given the fixed "present" $X(t_n)$. Stochastic processes satisfying property (*) are called Markov processes (cf. Markov process). Under certain additional assumptions, the Markov property has a stronger version, known as the "strong Markov property".

In discrete time $T = \{1, 2, \dots\}$ the strong Markov property, which always holds for (Markov) sequences satisfying (*), means that for each stopping time $\tau$ (relative to the family of $\sigma$-algebras $(F_n, n \geq 1)$, $F_n = \sigma\{X(1), \dots, X(n)\}$), with probability one
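As an illustrative sketch (not part of the original entry), property (*) can be checked empirically for the simple random walk, a basic Markov sequence; the names and tolerances below are chosen for this demonstration only:

```python
import random

random.seed(0)

# Simple random walk on the integers: X(k+1) = X(k) +/- 1 with equal
# probability, started at X(0) = 0. It is a Markov sequence, so the
# distribution of the next value given the whole past should equal the
# distribution given the present value alone.
def walk(n_steps):
    x = [0]
    for _ in range(n_steps):
        x.append(x[-1] + random.choice((-1, 1)))
    return x

paths = [walk(3) for _ in range(200_000)]  # each path is (X(0), ..., X(3))

# Compare P{X(3)=1 | X(2)=0, X(1)=1} with P{X(3)=1 | X(2)=0}:
# the extra conditioning on X(1) should not change the answer (both ~ 1/2).
full_past = [p for p in paths if p[2] == 0 and p[1] == 1]
present_only = [p for p in paths if p[2] == 0]
p_past = sum(p[3] == 1 for p in full_past) / len(full_past)
p_present = sum(p[3] == 1 for p in present_only) / len(present_only)
print(p_past, p_present)  # both close to 0.5
```

The two empirical conditional probabilities agree up to sampling error, reflecting that conditioning on the additional past value $X(1)$ is redundant.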

$$ {\mathsf P} \{ X(\tau + 1) \in B \mid X(\tau), \dots, X(1) \} = {\mathsf P} \{ X(\tau + 1) \in B \mid X(\tau) \} . $$
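A hedged numerical sketch of the strong Markov property (again for the simple random walk, with a hypothetical stopping time chosen for illustration): take $\tau$ to be the first hitting time of level 2. Conditional on $\tau$, the next step is up with probability $1/2$ regardless of how the walk reached level 2, i.e. regardless of the value of $\tau$ itself:

```python
import random

random.seed(1)

# Simple random walk and the stopping time tau = first n with X(n) = 2.
def path(n_steps):
    x = [0]
    for _ in range(n_steps):
        x.append(x[-1] + random.choice((-1, 1)))
    return x

# For each observed value of tau (2 or 4, the two smallest possible),
# count how often X(tau + 1) = 3: [up-moves, total].
up_given_tau = {2: [0, 0], 4: [0, 0]}
for _ in range(200_000):
    x = path(8)
    tau = next((n for n, v in enumerate(x) if v == 2), None)
    if tau in up_given_tau:
        up_given_tau[tau][1] += 1
        up_given_tau[tau][0] += (x[tau + 1] == 3)

# Different values of tau correspond to different pasts, yet the
# conditional probability of stepping up from X(tau) = 2 is 1/2 in both
# cases: the post-tau behaviour depends only on X(tau).
p2 = up_given_tau[2][0] / up_given_tau[2][1]
p4 = up_given_tau[4][0] / up_given_tau[4][1]
print(p2, p4)  # both close to 0.5
```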

References

[GS] I.I. Gihman, A.V. Skorohod, "The theory of stochastic processes" , 2 , Springer (1975) (Translated from Russian) MR0375463 Zbl 0305.60027

Comments

References

[C] K.L. Chung, "Markov chains with stationary transition probabilities" , Springer (1960) MR0116388 Zbl 0092.34304
[Do] J.L. Doob, "Stochastic processes" , Wiley (1953) MR1570654 MR0058896 Zbl 0053.26802
[Dy] E.B. Dynkin, "Markov processes" , 1 , Springer (1965) (Translated from Russian) MR0193671 Zbl 0132.37901
[K] T.G. Kurtz, "Markov processes" , Wiley (1986) MR0838085 Zbl 0592.60049
[F] W. Feller, "An introduction to probability theory and its applications", 1–2 , Wiley (1966)
[Le] P. Lévy, "Processus stochastiques et mouvement Brownien" , Gauthier-Villars (1965) MR0190953 Zbl 0137.11602
[Lo] M. Loève, "Probability theory" , II , Springer (1978) MR0651017 MR0651018 Zbl 0385.60001
How to Cite This Entry:
Markov property. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Markov_property&oldid=26609
This article was adapted from an original article by A.N. Shiryaev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article