Stochastic differential

From Encyclopedia of Mathematics

Latest revision as of 08:23, 6 June 2020


A random interval function $ dX $ defined by the formula

$$ ( dX) I = X _ {t} - X _ {s} ,\ \ I = ( s, t], $$

for every process $ X = ( X _ {t} , {\mathcal F} _ {t} , {\mathsf P}) $ in the class of semi-martingales $ S $, with respect to a stochastic basis $ ( \Omega , {\mathcal F} , ( {\mathcal F} _ {t} ) _ {t \geq 0 } , {\mathsf P}) $. In the family of stochastic differentials $ dS = \{ {dX } : {X \in S } \} $ one introduces addition $ ( A) $, multiplication by a process $ ( M) $ and the product operation $ ( P) $ according to the following formulas:

$ ( A) $ $ dX + dY = d( X+ Y) $;

$ ( M) $ $ ( \Phi \, dX)( s, t] = \int _ {s} ^ {t} \Phi \, dX $ (a stochastic integral, $ \Phi $ being a locally bounded predictable process which is adapted to the filtration $ ( {\mathcal F} _ {t} ) _ {t \geq 0 } $);

$ ( P) $ $ dX \cdot dY = d( XY) - X _ {-} dY - Y _ {-} dX $, where $ X _ {-} $ and $ Y _ {-} $ are the left-continuous versions of $ X $ and $ Y $.

It then turns out that

$$ ( dX \cdot dY)( s, t] = \mathop{\rm l.i.p.} _ {| \Delta | \rightarrow 0 } \sum _ {i=1} ^ {n} ( X _ {t _ {i} } - X _ {t _ {i-1} } )( Y _ {t _ {i} } - Y _ {t _ {i-1} } ), $$

where $ \Delta = ( s = t _ {0} < t _ {1} < \dots < t _ {n} = t) $ is an arbitrary decomposition of the interval $ ( s, t] $, l.i.p. is the limit in probability, and $ | \Delta | = \max _ {i} | t _ {i} - t _ {i-1} | $.
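A numerical sketch of this limit (an added illustration, not part of the article; it assumes NumPy and takes $ X = Y = W $, a simulated Brownian path): the sum of squared increments over a fine partition of $ ( 0, t] $ concentrates near $ t $, in line with the Brownian-motion remark in the comments below.

```python
import numpy as np

# Numerical sketch of (dX . dY)(s, t] as a limit in probability of sums
# of increment products, in the special case X = Y = W, a Brownian path.
rng = np.random.default_rng(0)

t = 1.0          # right endpoint of the interval (0, t]
n = 200_000      # number of partition points; |Delta| = t / n
dt = t / n

# Brownian increments W_{t_i} - W_{t_{i-1}} are independent N(0, dt).
dW = rng.normal(0.0, np.sqrt(dt), size=n)

# Sum of squared increments over the partition: approximates (dW . dW)(0, t].
qv = float(np.sum(dW ** 2))

print(qv)  # concentrates near t = 1 as the partition is refined
```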

In stochastic analysis, the principle of "differentiation" of random functions, or Itô formula, is of importance: If $ X ^ {1} , \dots, X ^ {n} \in S $ and the function $ f = f( x _ {1} , \dots, x _ {n} ) \in C ^ {2} $, then

$$ Y = f( X ^ {1} , \dots, X ^ {n} ) \in S , $$

and

$$ \tag{1 } df( X ^ {1} , \dots, X ^ {n} ) = \sum _ {i=1} ^ {n} \partial _ {i} f \cdot dX ^ {i} + \frac{1}{2} \sum _ {i,j=1} ^ {n} \partial _ {i} \partial _ {j} f \cdot dX ^ {i} \cdot dX ^ {j} , $$

where $ \partial _ {i} $ is the partial derivative with respect to the $ i $-th coordinate. In particular, it can be inferred from (1) that if $ X \in S $, then

$$ \tag{2 } f( X _ {t} ) = f( X _ {0} ) + \int\limits _ { 0 } ^ { t } f ^ { \prime } ( X _ {s-} ) \, dX _ {s} + \frac{1}{2} \int\limits _ { 0 } ^ { t } f ^ { \prime\prime } ( X _ {s-} ) \, d\langle X ^ {c} \rangle _ {s} + \sum _ {0 < s \leq t } [ f( X _ {s} ) - f( X _ {s-} ) - f ^ { \prime } ( X _ {s-} ) \Delta X _ {s} ] , $$

where $ X ^ {c} $ is the continuous martingale part of $ X $, $ \Delta X _ {s} = X _ {s} - X _ {s - } $.
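A standard worked instance (added here for illustration, not part of the original article): take $ n = 1 $, $ f( x) = x ^ {2} $ and $ X = W $ a standard Brownian motion. Then $ W $ is continuous with no jumps, $ dW \cdot dW = d[ W, W] = dt $ (cf. the comments on Brownian motion below), and (1) reads

```latex
% Ito formula (1) for f(x) = x^2 applied to a standard Brownian motion W:
% the second-order term contributes (1/2) * f''(W) * dW.dW = dt.
d( W ^ {2} )
  = 2 W \, dW + \frac{1}{2} \cdot 2 \, ( dW \cdot dW)
  = 2 W \, dW + dt .
```

Integrated over $ ( 0, t] $ this gives $ W _ {t} ^ {2} = 2 \int _ {0} ^ {t} W _ {s} \, dW _ {s} + t $, which also exhibits $ [ W, W] _ {t} = t $.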

Formula (2) can be given the following form:

$$ f( X _ {t} ) = f( X _ {0} ) + \int\limits _ { 0 } ^ { t } f ^ { \prime } ( X _ {s-} ) \, dX _ {s} + \frac{1}{2} \int\limits _ { 0 } ^ { t } f ^ { \prime\prime } ( X _ {s-} ) \, d[ X, X] _ {s} + \sum _ {0 < s \leq t } \left [ f( X _ {s} ) - f( X _ {s-} ) - f ^ { \prime } ( X _ {s-} ) \Delta X _ {s} - \frac{1}{2} f ^ { \prime\prime } ( X _ {s-} ) ( \Delta X _ {s} ) ^ {2} \right ] , $$

where $ [ X, X] $ is the quadratic variation of $ X $.

References

[1] K. Itô, S. Watanabe, "Introduction to stochastic differential equations" K. Itô (ed.) , Proc. Int. Symp. Stochastic Differential Equations Kyoto, 1976 , Wiley (1978) pp. I-XXX
[2] I.I. Gikhman, A.V. Skorokhod, "Stochastic differential equations and their applications" , Kiev (1982) (In Russian)

Comments

The product $ dX \cdot dY $ is more often written as $ d[ X, Y] $, where the so-called "square bracket" $ [ X, Y] $ is the process with finite variation such that $ [ X, Y] _ {t} = X _ {0} Y _ {0} + ( dX \cdot dY)( 0, t] $. When $ X = Y $, one obtains the quadratic variation $ [ X, X] $ used at the end of the main article. Actually, it is a probabilistic quadratic variation: when $ X $ is a standard Brownian motion, $ d[ X, X] $ is the Lebesgue measure, but the true quadratic variation of the paths is almost surely infinite. See also Semi-martingale; Stochastic integral; Stochastic differential equation.
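A numerical sketch of the square bracket (an added illustration, not from the article; variable names are ad hoc): for two Brownian motions with instantaneous correlation $ \rho $ one has $ d[ X, Y] = \rho \, dt $, and the sum of increment products over a fine partition reproduces this.

```python
import numpy as np

# Numerical sketch of the square bracket [X, Y]: for two Brownian motions
# with correlation rho, the sum of increment products over a fine partition
# of (0, t] concentrates near rho * t.
rng = np.random.default_rng(1)

t = 1.0
n = 200_000
rho = 0.6
dt = t / n

dZ1 = rng.normal(0.0, np.sqrt(dt), size=n)  # independent N(0, dt) increments
dZ2 = rng.normal(0.0, np.sqrt(dt), size=n)

dX = dZ1
dY = rho * dZ1 + np.sqrt(1.0 - rho ** 2) * dZ2  # correlated with dX

# Approximates (dX . dY)(0, t] = [X, Y]_t (here X_0 = Y_0 = 0).
bracket = float(np.sum(dX * dY))

print(bracket)  # concentrates near rho * t = 0.6
```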

For the study of continuous-path processes evolving on non-flat manifolds the Itô stochastic differential is inconvenient, because the Itô formula (2) is incompatible with the ordinary rules of calculus relating different coordinate systems. A coordinate-free description can be obtained using the Stratonovich differential; see [a1], [a2], Chapt. 5, [a3], and Stratonovich integral.

References

[a1] K.D. Elworthy, "Stochastic differential equations on manifolds" , Cambridge Univ. Press (1982)
[a2] N. Ikeda, S. Watanabe, "Stochastic differential equations and diffusion processes" , North-Holland (1989) pp. 97ff
[a3] P.A. Meyer, "Géométrie stochastique sans larmes" J. Azéma (ed.) M. Yor (ed.) , Sém. Probab. Strasbourg XV , Lect. notes in math. , 850 , Springer (1981) pp. 44–102
[a4] I. Karatzas, S.E. Shreve, "Brownian motion and stochastic calculus" , Springer (1988)
[a5] L.C.G. Rogers, D. Williams, "Diffusions, Markov processes, and martingales" , Vol. 2: Itô calculus , Wiley (1987)
[a6] S. Albeverio, J.E. Fenstad, R. Høegh-Krohn, T. Lindstrøm, "Nonstandard methods in stochastic analysis and mathematical physics" , Acad. Press (1986)
How to Cite This Entry:
Stochastic differential. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Stochastic_differential&oldid=48846
This article was adapted from an original article by A.N. Shiryaev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article