Stochastic integral

2020 Mathematics Subject Classification: Primary: 60H05 [MSN][ZBL]

An integral "$\int H \, dX$" with respect to a semi-martingale $X$ on some stochastic basis $( \Omega , \mathcal{F} , ( \mathcal{F}_t )_t , \mathsf{P} )$, defined for every locally bounded predictable process $H = ( H_t , \mathcal{F}_t )$. One of the possible constructions of the stochastic integral is as follows. First, the stochastic integral is defined for simple predictable processes $H$ of the form

$$ H_t = h( \omega ) I_{( a,b ]}(t), \qquad a < b, $$

where $h$ is $\mathcal{F}_a$-measurable. In this case, by the stochastic integral $\int_0^t H_s \, dX_s$ (or $( H \cdot X)_t$, or $\int_{(0,t]} H_s \, dX_s$) one understands the variable

$$ h( \omega ) ( X_{b \wedge t} - X_{a \wedge t} ). $$
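
For example (a standard illustration, not part of the original article), let $X = W$ be a Brownian motion, $a = 1$, $b = 2$ and $h( \omega ) = I_{\{ W_1 ( \omega ) > 0 \}}$, which is bounded and $\mathcal{F}_1$-measurable. Then

$$ ( H \cdot W)_t = I_{\{ W_1 > 0 \}} ( W_{2 \wedge t} - W_{1 \wedge t} ), $$

so $( H \cdot W)_t = 0$ for $t \leq 1$ and $( H \cdot W)_t = I_{\{ W_1 > 0 \}} ( W_2 - W_1 )$ for $t \geq 2$; since the increment $W_2 - W_1$ is independent of $\mathcal{F}_1$, the integral has mean zero.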

The mapping $H \mapsto H \cdot X$, where

$$ H \cdot X = ( H \cdot X)_t , \qquad t \geq 0, $$

permits an extension (also denoted by $H \cdot X$) onto the set of all bounded predictable functions, which possesses the following properties:

a) the process $( H \cdot X)_t$, $t \geq 0$, is continuous from the right and has limits from the left;

b) $H \mapsto H \cdot X$ is linear, i.e.

$$ ( cH_1 + H_2 ) \cdot X = c( H_1 \cdot X) + H_2 \cdot X; $$

c) if $\{ H^n \}$ is a sequence of uniformly bounded predictable functions, $H$ is a predictable function and

$$ \sup_{s \leq t} | H_s^n - H_s | \stackrel{\mathsf{P}}{\rightarrow} 0, \qquad t > 0, $$

then

$$ ( H^n \cdot X)_t \stackrel{\mathsf{P}}{\rightarrow} ( H \cdot X)_t , \qquad t > 0. $$

The extension $H \cdot X$ is therefore unique in the sense that if $H \mapsto \alpha ( H)$ is another mapping with the properties a)–c), then $H \cdot X$ and $\alpha ( H)$ are stochastically indistinguishable (cf. Stochastic indistinguishability).

The definition

$$ ( H \cdot X)_t = h( \omega )( X_{b \wedge t} - X_{a \wedge t} ), $$

given for functions $H_t = h( \omega ) I_{( a,b ]}(t)$, holds for any process $X$, not only for semi-martingales. However, an extension $H \cdot X$ with properties a)–c) to the class of bounded predictable processes is possible only when $X$ is a semi-martingale. In this sense, the class of semi-martingales is the maximal class for which a stochastic integral with the natural properties a)–c) is defined.

If $X$ is a semi-martingale and $T = T( \omega )$ is a Markov time (stopping time), then the "stopped" process $X^T = ( X_{t \wedge T} , \mathcal{F}_t )$ is also a semi-martingale and, for every predictable bounded process $H$,

$$ ( H \cdot X)^T = H \cdot X^T = ( H I_{[[ 0,T ]]} ) \cdot X . $$
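
For example (an illustration, not part of the original article), if $T \equiv c$ is a deterministic time, then $I_{[[ 0,T ]]}(t) = I_{[ 0,c ]}(t)$ and the identity states that

$$ ( H \cdot X)_{t \wedge c} = \int_0^t H_s \, d( X_{s \wedge c} ) = \int_0^t H_s I_{[ 0,c ]}(s) \, dX_s , $$

i.e. stopping the integral, integrating against the stopped integrator and cutting off the integrand at $c$ all yield the same process.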

This property enables one to extend the definition of a stochastic integral to the case of locally bounded predictable functions $H$. If $( T_n )$ is a localizing (for $H$) sequence of Markov times, then the $H^{T_n}$ are bounded. Hence the $H I_{[[ 0,T_n ]]}$ are bounded and

$$ [ ( H I_{[[ 0, T_{n+1} ]]} ) \cdot X ]^{T_n} $$

is stochastically indistinguishable from $H I_{[[ 0,T_n ]]} \cdot X$. Hence there exists a process $H \cdot X$, again called a stochastic integral, such that

$$ ( H \cdot X)^{T_n} = H I_{[[ 0,T_n ]]} \cdot X, \qquad n \geq 0. $$

The constructed stochastic integral $H \cdot X$ possesses the following properties: $H \cdot X$ is a semi-martingale; the mapping $H \mapsto H \cdot X$ is linear; if $X$ is a process of locally bounded variation, then so is the integral $H \cdot X$, and $H \cdot X$ then coincides with the Stieltjes integral of $H$ with respect to $dX$; $\Delta ( H \cdot X) = H \Delta X$; $K \cdot ( H \cdot X) = ( KH) \cdot X$.
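
For example (an illustration of the bounded-variation case, not part of the original article), let $X = N$ be a Poisson process with jump times $\tau_1 < \tau_2 < \dots$. Since $N$ has locally bounded variation, $H \cdot N$ is the pathwise Stieltjes integral

$$ ( H \cdot N)_t = \int_0^t H_s \, dN_s = \sum_{k : \tau_k \leq t} H_{\tau_k} , $$

and the relation $\Delta ( H \cdot N) = H \Delta N$ is seen directly: the integral jumps by $H_{\tau_k}$ precisely at the jump times of $N$.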

Depending on extra assumptions concerning $X$, the stochastic integral $H \cdot X$ can also be defined for broader classes of functions $H$. For example, if $X$ is a locally square-integrable martingale, then a stochastic integral $H \cdot X$ (with the properties a)–c)) can be defined for any predictable process $H$ that possesses the property that the process

$$ \left( \int_0^t H_s^2 \, d\langle X \rangle_s \right)_{t \geq 0} $$

is locally integrable (here $\langle X \rangle$ is the quadratic variation of $X$, i.e. the predictable increasing process such that $X^2 - \langle X \rangle$ is a local martingale).
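
A standard special case (added here as an illustration, not part of the original article): if $X = W$ is a Brownian motion, then $\langle W \rangle_t = t$ and the condition becomes the local integrability of $\left( \int_0^t H_s^2 \, ds \right)_{t \geq 0}$. If, moreover, $\mathsf{E} \int_0^t H_s^2 \, ds < \infty$ for all $t$, then $H \cdot W$ is a square-integrable martingale and the Itô isometry holds:

$$ \mathsf{E} ( H \cdot W)_t^2 = \mathsf{E} \int_0^t H_s^2 \, ds . $$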

References

[J] J. Jacod, "Calcul stochastique et problèmes de martingales" , Lect. notes in math. , 714 , Springer (1979) MR0542115 Zbl 0414.60053
[DM] C. Dellacherie, P.A. Meyer, "Probabilities and potential" , A-C , North-Holland (1978–1988) (Translated from French) MR0939365 MR0898005 MR0727641 MR0745449 MR0566768 MR0521810 Zbl 0716.60001 Zbl 0494.60002 Zbl 0494.60001
[LS] R.Sh. Liptser, A.N. Shiryayev, "Theory of martingales" , Kluwer (1989) (Translated from Russian) MR1022664 Zbl 0728.60048

Comments

The result alluded to above, that semi-martingales constitute the widest viable class of stochastic integrators, is the Bichteler–Dellacherie theorem [B]–[D], and can be formulated as follows ([P], Thm. III.22). Call a process elementary predictable if it has a representation

$$ H_t = H_0 I_{\{ 0 \}}(t) + \sum_{i=1}^{n} H_i I_{( T_i , T_{i+1} ]}(t) , $$

where $0 = T_0 \leq T_1 \leq \dots \leq T_{n+1} < \infty$ are stopping times and $H_i$ is $\mathcal{F}_{T_i}$-measurable with $| H_i | < \infty$ a.s., $0 < i < n$. Let $E$ be the set of elementary predictable processes, topologized by uniform convergence in $( t, \omega )$. Let $L^0$ be the set of finite-valued random variables, topologized by convergence in probability. Fix a stochastic process $X$ and for each stopping time $T$ define a mapping $I_X^T : E \rightarrow L^0$ by

$$ I_X^T ( H) = H_0 X_0^T + \sum_{i=1}^{n} H_i ( X_{T_{i+1}}^T - X_{T_i}^T ), $$

where $X^T$ denotes the process $X_t^T = X_{t \wedge T}$. Say that "$X$ has the property (C)" if $I_X^T$ is continuous for all stopping times $T$.
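
Since $I_X^T ( H)$ is a finite pathwise sum, it is easy to evaluate numerically. The following minimal Python sketch (not part of the original article; the function names are illustrative) computes $I_X ( H)$ for an elementary predictable $H$ against a simulated Brownian path and, taking left-endpoint values $H_i = W_{T_i}$, approximates $\int_0^1 W_s \, dW_s = ( W_1^2 - 1)/2$.

<pre>
import numpy as np

def brownian_path(n_steps, t_max, rng):
    """One simulated Brownian path W on a uniform grid of [0, t_max]."""
    t = np.linspace(0.0, t_max, n_steps + 1)
    dW = rng.normal(0.0, np.sqrt(t_max / n_steps), size=n_steps)
    return t, np.concatenate(([0.0], np.cumsum(dW)))

def elementary_integral(H, T, t_grid, X):
    """I_X(H) = sum_i H[i] * (X_{T[i+1]} - X_{T[i]}) for the elementary
    predictable process H_t = sum_i H[i] * 1_{(T[i], T[i+1]]}(t).
    (The H_0 X_0 term of the formula above vanishes here, since W_0 = 0.)"""
    X_at_T = np.interp(T, t_grid, X)            # X evaluated at the times T_i
    return float(np.sum(H * np.diff(X_at_T)))   # sum_i H_i (X_{T_{i+1}} - X_{T_i})

rng = np.random.default_rng(0)
t_grid, W = brownian_path(100_000, 1.0, rng)

# Left-endpoint values H_i = W_{T_i} are known at time T_i ("predictable"),
# so the sum approximates the Ito integral int_0^1 W_s dW_s = (W_1^2 - 1)/2.
T = np.linspace(0.0, 1.0, 1001)
H = np.interp(T[:-1], t_grid, W)
approx = elementary_integral(H, T, t_grid, W)
exact = 0.5 * (W[-1] ** 2 - 1.0)
print("left-endpoint sum:", round(approx, 4), "  (W_1^2 - 1)/2:", round(exact, 4))
</pre>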

The Bichteler–Dellacherie theorem: $X$ has property (C) if and only if $X$ is a semi-martingale.
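
For instance (an illustration not in the original article), let $X_t = f(t)$ be a deterministic continuous function of unbounded variation on $[0,1]$, hence not a semi-martingale. Choose partitions $0 = t_0 < \dots < t_{n+1} = 1$ whose variation sums $V_n = \sum_i | f( t_{i+1} ) - f( t_i ) |$ tend to infinity, and set

$$ H_t^n = V_n^{-1/2} \sum_i \operatorname{sign} ( f( t_{i+1} ) - f( t_i ) ) I_{( t_i , t_{i+1} ]}(t) . $$

These are elementary predictable, $\sup_{t, \omega} | H_t^n | = V_n^{-1/2} \rightarrow 0$, yet $I_X^T ( H^n ) = V_n^{1/2} \rightarrow \infty$ (take $T \equiv 1$), so $X$ fails property (C).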

Since the topology on $E$ is very strong and that on $L^0$ very weak, property (C) is a minimal requirement if the definition of $I_X^T$ is to be extended beyond $E$.

It is possible to use property (C) as the definition of a semi-martingale, and to develop the theory of stochastic integration from this point of view [P]. There are many excellent textbook expositions of stochastic integration from the conventional point of view; see, e.g., [CW]–[RW].

References

[B] K. Bichteler, "Stochastic integrators" Bull. Amer. Math. Soc. , 1 (1979) pp. 761–765 MR0537627 Zbl 0416.60066
[B2] K. Bichteler, "Stochastic integrators and the theory of semimartingales" Ann. Probab. , 9 (1981) pp. 49–89
[D] C. Dellacherie, "Un survol de la théorie de l'intégrale stochastique" Stoch. Proc. & Appl. , 10 (1980) pp. 115–144 MR0587420 MR0562680 MR0577985 Zbl 0436.60043 Zbl 0429.60053 Zbl 0427.60055
[P] P. Protter, "Stochastic integration and differential equations" , Springer (1990) MR1037262 Zbl 0694.60047
[CW] K.L. Chung, R.J. Williams, "Introduction to stochastic integration" , Birkhäuser (1990) MR1102676 Zbl 0725.60050
[E] R.J. Elliott, "Stochastic calculus and applications" , Springer (1982) MR0678919 Zbl 0503.60062
[KS] I. Karatzas, S.E. Shreve, "Brownian motion and stochastic calculus" , Springer (1988) MR0917065 Zbl 0638.60065
[RW] L.C.G. Rogers, D. Williams, "Diffusions, Markov processes and martingales" , II. Ito calculus , Wiley (1987) MR0921238 Zbl 0627.60001
[McK] H.P. McKean jr., "Stochastic integrals" , Acad. Press (1969)
[MP] M. Metivier, J. Pellaumail, "Stochastic integration" , Acad. Press (1980) MR0578177 Zbl 0463.60004
[McSh] E.J. McShane, "Stochastic calculus and stochastic models" , Acad. Press (1974)
[R] M.M. Rao, "Stochastic processes and integration" , Sijthoff & Noordhoff (1979) MR0546709 Zbl 0429.60001
[SV] D.W. Stroock, S.R.S. Varadhan, "Multidimensional diffusion processes" , Springer (1979) MR0532498 Zbl 0426.60069
[K] P.E. Kopp, "Martingales and stochastic integrals" , Cambridge Univ. Press (1984) MR0774050 Zbl 0537.60047
[F] M. Fukushima, "Dirichlet forms and Markov processes" , North-Holland (1980) MR0569058 Zbl 0422.31007
[AFHL] S. Albeverio, J.E. Fenstad, R. Høegh-Krohn, T. Lindstrøm, "Nonstandard methods in stochastic analysis and mathematical physics" , Acad. Press (1986) MR0859372 Zbl 0605.60005
How to Cite This Entry:
Stochastic integral. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Stochastic_integral&oldid=48851
This article was adapted from an original article by A.N. Shiryaev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article