Duncan–Mortensen–Zakai equation

''DMZ equation''
An equation whose solution is the unnormalized conditional probability density function for a non-linear filtering problem. The non-linear filtering problem was motivated by the solution of the linear filtering problem, especially in [a3], where the signal or state process is modeled by the solution of a linear differential equation with a Gaussian white noise input (the formal derivative of a Brownian motion or Wiener process), so the signal process is a Gauss–Markov process. The differential of the observation process, the process from which an estimate is made of the state process, is a linear transformation of the signal plus Gaussian white noise. This linear filtering problem was itself motivated by earlier problems [a5], [a7] that require the infinite past of the observations. The non-linear filtering problem is described by a signal process that is the solution of a non-linear differential equation with a Gaussian white noise input and an observation process whose differential is a non-linear function of the signal process plus a Gaussian white noise. The precise description of such a filtering problem requires the theory of stochastic differential equations (e.g., [a4]; see also Stochastic differential equation).
Before introducing the Duncan–Mortensen–Zakai equation [a1], [a6], [a8], it is necessary to describe precisely a non-linear filtering problem. A basic filtering problem is described by two stochastic processes (cf. Stochastic process), $( X ( t ) , t \geq 0 )$, which is called the signal or state process and $( Y ( t ) , t \geq 0 )$, which is called the observation process. These two processes satisfy the following stochastic differential equations:
\begin{equation} \tag{a1} d X ( t ) = a ( t , X ( t ) ) d t + b ( t , X ( t ) ) d B ( t ), \end{equation}
\begin{equation} \tag{a2} d Y ( t ) = h ( t , X ( t ) , Y ( t ) ) d t + g ( t , Y ( t ) ) d \widetilde { B } ( t ), \end{equation}
where $t \geq 0$, $X ( 0 ) = x _ { 0 }$, $Y ( 0 ) = 0$, $X ( t ) \in \mathbf{R} ^ { n }$, $Y ( t ) \in \mathbf{R} ^ { m }$, $a : \mathbf{R} _ { + } \times \mathbf{R} ^ { n } \rightarrow \mathbf{R} ^ { n }$, $b : \mathbf{R} _ { + } \times \mathbf{R} ^ { n } \rightarrow \mathcal{L} ( \mathbf{R} ^ { n } , \mathbf{R} ^ { n } )$, $h : \mathbf{R} _ { + } \times \mathbf{R} ^ { n } \times \mathbf{R} ^ { m } \rightarrow \mathbf{R} ^ { m }$, $g : \mathbf{R} _ { + } \times \mathbf{R} ^ { m } \rightarrow \mathcal{L} ( \mathbf{R} ^ { m } , \mathbf{R} ^ { m } )$, and $( B ( t ) , t \geq 0 )$ and $( \widetilde { B } ( t ) , t \geq 0 )$ are independent standard Brownian motions in $\mathbf{R} ^ { n }$ and $\mathbf{R} ^ { m }$, respectively. The stochastic processes are defined on a fixed probability space $( \Omega , \mathcal{F} , \mathsf{P} )$ with a filtration $( \mathcal{F} _ { t } ; t \geq 0 )$ (cf. also Stochastic processes, filtering of). Some smoothness and non-degeneracy assumptions are made on the coefficients $a$, $b$, $h$, and $g$ to ensure existence and uniqueness of the solutions of the stochastic differential equations (a1), (a2) and to ensure that the partial differential operators in the Duncan–Mortensen–Zakai equation are well defined.
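For orientation, a simple special case of (a1), (a2) (stated here only as an illustration; the constant $\alpha$ is not part of the general formulation) is the scalar linear-Gaussian model of [a3],

\begin{equation*} d X ( t ) = \alpha X ( t ) d t + d B ( t ) , \quad d Y ( t ) = X ( t ) d t + d \widetilde { B } ( t ) , \end{equation*}

that is, $n = m = 1$, $a ( t , x ) = \alpha x$, $b \equiv 1$, $h ( t , x , y ) = x$ and $g \equiv 1$; this is the Kalman–Bucy setting mentioned above.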
A filtering problem is to estimate $\gamma ( X ( t ) )$, where $\gamma : \mathbf{R} ^ { n } \rightarrow \mathbf{R} ^ { k }$, based on the observations of (a2) until time $t$, that is, $\sigma ( Y ( u ) , u \leq t )$ which is the sigma-algebra generated by the random variables $( Y ( u ) , u \leq t )$ (cf. also Optional sigma-algebra).
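The estimate that is usually sought is the one that minimizes the mean-square error. Assuming $\gamma ( X ( t ) )$ has a finite second moment, this optimal estimate is the conditional expectation

\begin{equation*} \hat { \gamma } ( t ) = \mathsf{E} [ \gamma ( X ( t ) ) | \sigma ( Y ( u ) , u \leq t ) ] , \end{equation*}

since the conditional expectation minimizes $\mathsf{E} | \gamma ( X ( t ) ) - Z | ^ { 2 }$ over all random variables $Z$ that are measurable with respect to $\sigma ( Y ( u ) , u \leq t )$ and have a finite second moment.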
The conditional probability density of $X ( t )$ given $\sigma ( Y ( u ) , u \leq t )$ represents all of the probabilistic information about $X ( t )$ that can be obtained from $\sigma ( Y ( u ) , u \leq t )$, that is, from the observation process. A stochastic partial differential equation can be given for this conditional probability density function, but that equation is non-linear.
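For comparison only (this equation is not used in what follows), a standard form of the non-linear equation satisfied by the normalized conditional density $p ( x , t )$ is the Kushner–Stratonovich equation,

\begin{equation*} d p = L ^ { * } p d t + \langle f ^ { - 1 } ( t , Y ( t ) ) ( h ( t , x , Y ( t ) ) - \hat { h } ( t ) ) , d Y ( t ) - \hat { h } ( t ) d t \rangle p , \end{equation*}

where $f = g ^ { T } g$ as below, $L ^ { * }$ is the forward differential operator for $( X ( t ) , t \geq 0 )$ and $\hat { h } ( t ) = \int _ { \mathbf{R} ^ { n } } h ( t , x , Y ( t ) ) p ( x , t ) d x$; the dependence on $p$ through $\hat { h }$ makes the equation non-linear.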
Let $( Z ( t ) , t \geq 0 )$ be the process that satisfies the stochastic differential equation
\begin{equation*} d Z ( t ) = g ( t , Z ( t ) ) d \tilde { B } ( t ) \end{equation*}
and $Z ( 0 ) = 0$, which is obtained from (a2) by letting $h \equiv 0$. The measures $\mu_Y$ and $\mu_Z$ for the processes $( Y ( t ) , t \in [ 0 , T ] )$ and $( Z ( t ) , t \in [ 0 , T ] )$ are mutually absolutely continuous (cf. also Absolute continuity) and
\begin{equation*} \frac { d \mu _ { Y } } { d \mu _ { Z } } = \mathsf{E} _ { \mu _ { X } } [ \psi ( T ) ], \end{equation*}
where
\begin{equation*} \psi ( T ) = \operatorname { exp } \left( \int _ { 0 } ^ { T } \langle f ^ { - 1 } ( s , Y ( s ) ) h ( s , X ( s ) , Y ( s ) ) , d Y ( s ) \rangle - \frac { 1 } { 2 } \int _ { 0 } ^ { T } \langle f ^ { - 1 } ( s , Y ( s ) ) h ( s , X ( s ) , Y ( s ) ) , h ( s , X ( s ) , Y ( s ) ) \rangle d s \right) , \end{equation*}
$f = g ^ { T } g$, and $\mathsf{E} _ { \mu _ { X } }$ denotes integration with respect to the measure for the process $( X ( t ) , t \in [ 0 , T ] )$. Using some elementary properties of conditional mathematical expectation and the absolute continuity of measures, it follows that
\begin{equation*} \mathsf{E} [ \gamma ( X ( t ) ) | \sigma ( Y ( u ) , u \leq t ) ] = \frac { \mathsf{E} _ { \mu _ { X } } [ \gamma ( X ( t ) ) \psi ( t ) ] } { \mathsf{E} _ { \mu _ { X } } [ \psi ( t ) ] }. \end{equation*}
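For example, if $g$ is the identity matrix, then $f \equiv I$ and the exponent in $\psi$ simplifies, so that

\begin{equation*} \psi ( T ) = \operatorname { exp } \left( \int _ { 0 } ^ { T } \langle h ( s , X ( s ) , Y ( s ) ) , d Y ( s ) \rangle - \frac { 1 } { 2 } \int _ { 0 } ^ { T } | h ( s , X ( s ) , Y ( s ) ) | ^ { 2 } d s \right) , \end{equation*}

which is the familiar likelihood ratio for a signal observed in additive Gaussian white noise.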
Thus, the unnormalized conditional probability density of $X ( t )$ given $\sigma ( Y ( u ) , u \leq t )$ is
\begin{equation*} r ( x , t | x _ { 0 } , \sigma ( Y ( u ) , u \leq t ) ) = \mathsf{E} _ { \mu _ { X } } [ \psi ( t ) | X ( t ) = x ] p _ { X } ( 0 , x _ { 0 } ; t , x ) , \end{equation*}
where $p_{X} $ is the transition probability density (cf. also Transition probabilities) for the Markov process $( X ( t ) , t \geq 0 )$.
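In terms of $r$, the conditional expectation above is computed by integration in the $x$ variable:

\begin{equation*} \mathsf{E} [ \gamma ( X ( t ) ) | \sigma ( Y ( u ) , u \leq t ) ] = \frac { \int _ { \mathbf{R} ^ { n } } \gamma ( x ) r ( x , t | x _ { 0 } , \sigma ( Y ( u ) , u \leq t ) ) d x } { \int _ { \mathbf{R} ^ { n } } r ( x , t | x _ { 0 } , \sigma ( Y ( u ) , u \leq t ) ) d x } , \end{equation*}

so $r$ determines the conditional distribution of $X ( t )$ up to a normalization; this is why $r$ is called an unnormalized conditional probability density.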
The function $r$ satisfies a linear stochastic partial differential equation that is called the Duncan–Mortensen–Zakai equation [a1], [a6], [a8] and is given by
\begin{equation*} d r = L ^ { * } r d t + \langle f ^ { - 1 } ( t , Y ( t ) ) h ( t , x , Y ( t ) ) , d Y ( t ) \rangle r , \end{equation*}
where $L ^ { * }$ is the forward differential operator for the Markov process $( X ( t ) , t \geq 0 )$. The normalization factor for $r$ to obtain the conditional probability density is $\mathsf{E} _ { \mu _ { X } } [ \psi ( t ) ]$. An extensive description of the solution of a non-linear filtering problem can be found in [a2].
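As an illustration (the coefficients are chosen only for simplicity), consider the scalar case $a \equiv 0$, $b \equiv 1$, $h ( t , x , y ) = x$, $g \equiv 1$, so that $L ^ { * } r = \frac { 1 } { 2 } \partial ^ { 2 } r / \partial x ^ { 2 }$ and the Duncan–Mortensen–Zakai equation reduces to

\begin{equation*} d r ( x , t ) = \frac { 1 } { 2 } \frac { \partial ^ { 2 } r } { \partial x ^ { 2 } } ( x , t ) d t + x r ( x , t ) d Y ( t ) . \end{equation*}

In the linear-Gaussian case of [a3], the normalized solution is a Gaussian density whose conditional mean and variance satisfy the Kalman–Bucy filtering equations.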
References
[a1] T.E. Duncan, "Probability densities for diffusion processes with applications to nonlinear filtering theory and detection theory", PhD Diss., Stanford Univ. (1967)
[a2] G. Kallianpur, "Stochastic filtering theory", Springer (1980)
[a3] R.E. Kalman, R.S. Bucy, "New results in linear filtering and prediction", Trans. ASME Ser. D, 83 (1961) pp. 95–107
[a4] I. Karatzas, S.E. Shreve, "Brownian motion and stochastic calculus", Springer (1991) (Edition: Second)
[a5] A.N. Kolmogorov, "Sur l'interpolation et extrapolation des suites stationnaires", C.R. Acad. Sci. Paris, 208 (1939) p. 2043
[a6] R.E. Mortensen, "Optimal control of continuous-time stochastic systems", PhD Diss., Univ. California, Berkeley (1966)
[a7] N. Wiener, "Extrapolation, interpolation and smoothing of stationary time series with engineering applications", Technol. Press & Wiley (1949)
[a8] M. Zakai, "On the optimal filtering of diffusion processes", Z. Wahrscheinlichkeitstheor. Verw. Gebiete, 11 (1969) pp. 230–243