Duncan-Mortensen-Zakai equation

From Encyclopedia of Mathematics

Revision as of 16:56, 1 July 2020

DMZ equation

An equation whose solution is the unnormalized conditional probability density function for a non-linear filtering problem. The non-linear filtering problem was motivated by the solution of the linear filtering problem, especially in [a3], where the signal or state process is modeled by the solution of a linear differential equation with a Gaussian white noise input (the formal derivative of a Brownian motion or Wiener process), so the signal process is a Gauss–Markov process. The differential of the observation process, the process from which an estimate of the state process is made, is a linear transformation of the signal plus Gaussian white noise. This linear filtering problem was in turn motivated by one that requires the infinite past of the observations [a5], [a7]. The non-linear filtering problem is described by a signal process that is the solution of a non-linear differential equation with a Gaussian white noise input and an observation process whose differential is a non-linear function of the signal process plus a Gaussian white noise. The precise description of such a filtering problem requires the theory of stochastic differential equations (e.g., [a4]; see also Stochastic differential equation).

Before introducing the Duncan–Mortensen–Zakai equation [a1], [a6], [a8], it is necessary to describe precisely a non-linear filtering problem. A basic filtering problem is described by two stochastic processes (cf. Stochastic process), $( X ( t ) , t \geq 0 )$, which is called the signal or state process and $( Y ( t ) , t \geq 0 )$, which is called the observation process. These two processes satisfy the following stochastic differential equations:

\begin{equation} \tag{a1} d X ( t ) = a ( t , X ( t ) ) d t + b ( t , X ( t ) ) d B ( t ), \end{equation}

\begin{equation} \tag{a2} d Y ( t ) = h ( t , X ( t ) , Y ( t ) ) d t + g ( t , Y ( t ) ) d \widetilde { B } ( t ), \end{equation}

where $t \geq 0$, $X ( 0 ) = x _ { 0 }$, $Y ( 0 ) = 0$, $X ( t ) \in \mathbf{R} ^ { n }$, $Y ( t ) \in \mathbf{R} ^ { m }$, $a : \mathbf{R}_{ +} \times \mathbf{R} ^ { n } \rightarrow \mathbf{R} ^ { n }$, $b : \mathbf{R} _ { + } \times \mathbf{R} ^ { n } \rightarrow \mathcal{L} ( \mathbf{R} ^ { n } , \mathbf{R} ^ { n } )$, $h : \mathbf{R} _ { + } \times \mathbf{R} ^ { n } \times \mathbf{R} ^ { m } \rightarrow \mathbf{R} ^ { m }$, $g : \mathbf{R} _ { + } \times \mathbf{R} ^ { m } \rightarrow \mathcal{L} ( \mathbf{R} ^ { m } , \mathbf{R}^ { m } )$, and $( B ( t ) , t \geq 0 )$ and $( \widetilde { B } ( t ) , t \geq 0 )$ are independent standard Brownian motions in $\mathbf{R} ^ { n }$ and $\mathbf{R} ^ { m }$, respectively. The stochastic processes are defined on a fixed probability space $( \Omega , \mathcal F , \mathsf P )$ with a filtration $( \mathcal{F} _ { t } , t \geq 0 )$ (cf. also Stochastic processes, filtering of). Some smoothness and non-degeneracy assumptions are made on the coefficients $a$, $b$, $h$, and $g$ to ensure existence and uniqueness of the solutions of the stochastic differential equations (a1), (a2) and to ensure that the partial differential operators in the Duncan–Mortensen–Zakai equation are well defined.
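As a concrete illustration (not part of the original article), the pair (a1)–(a2) can be simulated with the Euler–Maruyama scheme. The particular coefficient functions `a`, `b`, `h`, `g` below are illustrative choices, not dictated by the text; only the structure of the two equations comes from (a1)–(a2).

```python
import math
import random

def simulate(T=1.0, n_steps=1000, x0=0.5, seed=1):
    """Euler-Maruyama simulation of a scalar instance of (a1)-(a2):
         dX = a(t,X) dt + b(t,X) dB,        X(0) = x0,
         dY = h(t,X,Y) dt + g(t,Y) dB~,     Y(0) = 0,
       with illustrative coefficients a, b, h, g."""
    rng = random.Random(seed)
    a = lambda t, x: -x              # illustrative signal drift
    b = lambda t, x: 0.3             # illustrative signal diffusion
    h = lambda t, x, y: math.sin(x)  # illustrative non-linear observation drift
    g = lambda t, y: 1.0             # illustrative observation noise coefficient
    dt = T / n_steps
    x, y = x0, 0.0
    xs, ys = [x], [y]
    for k in range(n_steps):
        t = k * dt
        dB = rng.gauss(0.0, math.sqrt(dt))    # signal noise increment
        dBt = rng.gauss(0.0, math.sqrt(dt))   # independent observation noise increment
        x = x + a(t, x) * dt + b(t, x) * dB
        y = y + h(t, x, y) * dt + g(t, y) * dBt
        xs.append(x)
        ys.append(y)
    return xs, ys
```

The independence of the two Gaussian increments mirrors the assumption that $( B ( t ) )$ and $( \widetilde { B } ( t ) )$ are independent Brownian motions.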

A filtering problem is to estimate $\gamma ( X ( t ) )$, where $\gamma : \mathbf{R} ^ { n } \rightarrow \mathbf{R} ^ { k }$, based on the observations of (a2) up to time $t$, that is, on $\sigma ( Y ( u ) , u \leq t )$, the sigma-algebra generated by the random variables $( Y ( u ) , u \leq t )$ (cf. also Optional sigma-algebra).

The conditional probability density of $X ( t )$ given $\sigma ( Y ( u ) , u \leq t )$ represents all of the probabilistic information about $X ( t )$ that can be obtained from the observation process. A stochastic partial differential equation can be given for this conditional probability density function, but that equation is non-linear.

Let $( Z ( t ) , t \geq 0 )$ be the process that satisfies the stochastic differential equation

\begin{equation*} d Z ( t ) = g ( t , Z ( t ) ) d \widetilde { B } ( t ) \end{equation*}

and $Z ( 0 ) = 0$, which is obtained from (a2) by letting $h \equiv 0$. The measures $\mu_Y$ and $\mu_Z$ for the processes $( Y ( t ) , t \in [ 0 , T ] )$ and $( Z ( t ) , t \in [ 0 , T ] )$ are mutually absolutely continuous (cf. also Absolute continuity) and

\begin{equation*} \frac { d \mu _ { Y } } { d \mu _ { Z } } = \mathsf{E} _ { \mu _ { X } } [ \psi ( T ) ], \end{equation*}

where

\begin{equation*} \psi ( T ) = \operatorname { exp } \left[ \int _ { 0 } ^ { T } \langle f ^ { - 1 } ( s , Y ( s ) ) h ( s , X ( s ) , Y ( s ) ) , d Y ( s ) \rangle - \frac { 1 } { 2 } \int _ { 0 } ^ { T } \langle f ^ { - 1 } ( s , Y ( s ) ) h ( s , X ( s ) , Y ( s ) ) , h ( s , X ( s ) , Y ( s ) ) \rangle d s \right], \end{equation*}

$f = g ^ { T } g$, and $\mathsf{E} _ { \mu _ { X } }$ is integration on the measure for the process $( X ( t ) , t \in [ 0 , T ] )$. Using some elementary properties of conditional mathematical expectation and absolute continuity of measures it follows that

\begin{equation*} \mathsf{E} [ \gamma ( X ( t ) ) | \sigma ( Y ( u ) , u \leq t ) ] = \frac { \mathsf{E} _ { \mu _ { X } } [ \gamma ( X ( t ) ) \psi ( t ) ] } { \mathsf{E} _ { \mu _ { X } } [ \psi ( t ) ] }. \end{equation*}
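This ratio (the Kallianpur–Striebel formula) suggests a direct Monte-Carlo approximation, sketched below (not part of the original article): simulate independent copies of the signal, weight each copy by $\psi ( t )$ evaluated along a fixed observation path, and form the weighted average. The scalar model, the choice $h ( x ) = x$, and $g \equiv 1$ (so $f = g ^ { T } g \equiv 1$) are illustrative assumptions.

```python
import math
import random

def ks_estimate(y_increments, dt, gamma, n_particles=2000, x0=0.0, seed=2):
    """Monte-Carlo form of the Kallianpur-Striebel ratio
         E[gamma(X(t)) | Y] ~ sum_i gamma(X_i(t)) psi_i / sum_i psi_i
       for the illustrative scalar model dX = -X dt + 0.3 dB with
       observation drift h(x) = x and g = 1 (hence f = 1), so that
       log psi_i = sum_k h(X_i) dY_k - 0.5 * sum_k h(X_i)^2 dt."""
    rng = random.Random(seed)
    num = 0.0
    den = 0.0
    for _ in range(n_particles):
        x = x0
        logpsi = 0.0
        for dy in y_increments:
            # accumulate the Girsanov exponent along the path, h(x) = x
            logpsi += x * dy - 0.5 * x * x * dt
            # propagate the signal copy independently of Y
            x += -x * dt + 0.3 * rng.gauss(0.0, math.sqrt(dt))
        w = math.exp(logpsi)
        num += gamma(x) * w
        den += w
    return num / den
```

The denominator is the Monte-Carlo counterpart of the normalization $\mathsf{E} _ { \mu _ { X } } [ \psi ( t ) ]$; taking $\gamma \equiv 1$ makes the estimator exactly $1$, as the formula requires.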

Thus, the unnormalized conditional probability density of $X ( t )$ given $\sigma ( Y ( u ) , u \leq t )$ is

\begin{equation*} r ( x , t | x _ { 0 } , \sigma ( Y ( u ) , u \leq t ) ) = \mathsf{E} _ { \mu _ { X } } [ \psi ( t ) | X ( t ) = x ] p _ { X } ( 0 , x _ { 0 } ; t , x ), \end{equation*}

where $p_{X} $ is the transition probability density (cf. also Transition probabilities) for the Markov process $( X ( t ) , t \geq 0 )$.

The function $r$ satisfies a linear stochastic partial differential equation that is called the Duncan–Mortensen–Zakai equation [a1], [a6], [a8] and is given by

\begin{equation*} d r = L ^ { * } r \, d t + \langle f ^ { - 1 } ( t , Y ( t ) ) h ( t , x , Y ( t ) ) , d Y ( t ) \rangle r, \end{equation*}

where $L ^ { * }$ is the forward differential operator for the Markov process $( X ( t ) , t \geq 0 )$. The normalization factor for $r$ to obtain the conditional probability density is $\mathsf{E} _ { \mu _ { X } } [ \psi ( t ) ]$. An extensive description of the solution of a non-linear filtering problem can be found in [a2].
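As a rough numerical illustration (not from the article), the DMZ equation for a scalar signal can be advanced by an explicit finite-difference step: $L ^ { * }$ becomes the discretized forward (Fokker–Planck) operator, and the observation enters through the multiplication term $h ( x ) r \, d Y$. The coefficients and the choice $g \equiv 1$ (hence $f \equiv 1$) are illustrative assumptions.

```python
import math

def dmz_step(r, xs, dx, dt, dy, a, b, h):
    """One explicit Euler step of a scalar DMZ equation
         dr = L* r dt + h(x) r dY        (g = 1, so f = 1),
       where L* r = -d/dx(a r) + 0.5 d^2/dx^2(b^2 r) is the forward
       (Fokker-Planck) operator, discretized by central differences
       on the grid xs with zero boundary values."""
    n = len(r)
    new = [0.0] * n
    for i in range(1, n - 1):
        # central difference of d/dx (a r)
        ar_x = (a(xs[i + 1]) * r[i + 1] - a(xs[i - 1]) * r[i - 1]) / (2 * dx)
        # central difference of d^2/dx^2 (b^2 r)
        d2 = (b(xs[i + 1]) ** 2 * r[i + 1] - 2 * b(xs[i]) ** 2 * r[i]
              + b(xs[i - 1]) ** 2 * r[i - 1]) / (dx * dx)
        Lstar = -ar_x + 0.5 * d2
        # linear update: forward operator plus observation term
        new[i] = r[i] + Lstar * dt + h(xs[i]) * r[i] * dy
    return new
```

Normalizing the grid values by their sum times $d x$ recovers the conditional density, in line with the normalization factor $\mathsf{E} _ { \mu _ { X } } [ \psi ( t ) ]$; the linearity of the update in $r$ is exactly what distinguishes the DMZ equation from the non-linear equation for the normalized density.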

References

[a1] T.E. Duncan, "Probability densities for diffusion processes with applications to nonlinear filtering theory and detection theory" PhD Diss. Stanford Univ. (1967)
[a2] G. Kallianpur, "Stochastic filtering theory" , Springer (1980)
[a3] R.E. Kalman, R.S. Bucy, "New results in linear filtering and prediction" Trans. ASME Ser. D , 83 (1961) pp. 95–107
[a4] I. Karatzas, S.E. Shreve, "Brownian motion and stochastic calculus" , Springer (1991) (Edition: Second)
[a5] A.N. Kolmogorov, "Sur l'interpolation et extrapolation des suites stationnaires" C.R. Acad. Sci. Paris , 208 (1939) pp. 2043
[a6] R.E. Mortensen, "Optimal control of continuous-time stochastic systems" PhD Diss. Univ. California, Berkeley (1966)
[a7] N. Wiener, "Extrapolation, interpolation and smoothing of stationary time series with engineering applications" , Technol. Press & Wiley (1949)
[a8] M. Zakai, "On the optimal filtering of diffusion processes" Z. Wahrscheinlichkeitstheorie verw. Gebiete , 11 (1969) pp. 230–243
How to Cite This Entry:
Duncan-Mortensen-Zakai equation. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Duncan-Mortensen-Zakai_equation&oldid=50170
This article was adapted from an original article by T.E. Duncan (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article