# Accessible random variable


A key concept in one of the mathematical models of the heuristic idea of white noise.

White noise should be a family $( X _ {t} ) _ {t \in T }$ of independent identically distributed random variables. If the parameter set $T$ is the positive integers, there is no problem with this; indeed, sequences of independent identically distributed random variables appear repeatedly in probability and mathematical statistics. If $T = [ 0, \infty )$, it would seem natural to try the same thing, that is, to take $( X _ {t} ) _ {t \geq 0 }$ to be a family of independent identically distributed random variables. However, the independence means that $X _ {t}$ gives no information about $X _ {s}$ no matter how close $s$ is to $t$. This suggests that the process will be extremely irregular. Indeed, it can be shown that there is no measurable stochastic process of the type just described, see [a5]. Thus, one must look for other methods of modelling white noise in the case of a continuous parameter.
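The unproblematic discrete-parameter case can be simulated directly. The following minimal Python sketch (variable names and sample sizes are illustrative choices, not from the article) draws an i.i.d. standard normal sequence and checks the empirical mean, variance, and a lag-1 autocovariance as a crude proxy for independence:

```python
import random

random.seed(0)

# Discrete-parameter white noise: i.i.d. standard normal random
# variables X_1, ..., X_n (the case where T is the positive integers).
n = 100_000
noise = [random.gauss(0.0, 1.0) for _ in range(n)]

# Empirical checks: mean near 0, variance near 1, and lag-1 sample
# autocovariance near 0 (a proxy for independence of neighbours).
mean = sum(noise) / n
var = sum((x - mean) ** 2 for x in noise) / n
acov1 = sum(noise[i] * noise[i + 1] for i in range(n - 1)) / (n - 1)
```

No analogous simulation exists for $T = [ 0, \infty )$, which is precisely the difficulty the article describes.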

The best known method is instead to work with Brownian motion, regarding it as the integral of white noise. This is a beautiful, but highly technical, subject involving Itô stochastic integrals (cf. also Stochastic integral) and stochastic differential equations (cf. Stochastic differential equation) as well as the theory of continuous parameter semi-martingales (cf. Martingale), see [a3]. Another approach is to define white noise as the derivative of Brownian motion. Since Brownian paths are almost surely nowhere differentiable, the derivatives are interpreted in the distributional sense (cf. also Generalized function, derivative of a); see [a8].
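The heuristic "Brownian motion is the integral of white noise" can be illustrated numerically: summing independent mean-zero Gaussian increments of variance $dt$ produces an approximate Brownian path. A short Python sketch (grid sizes and names are illustrative assumptions):

```python
import random

random.seed(1)

# Brownian motion viewed as the integral of white noise: increments
# over a grid of mesh dt are sqrt(dt) times i.i.d. standard normals.
n_steps, n_paths = 200, 5_000
dt = 1.0 / n_steps

terminals = []              # terminal values W_1 across simulated paths
for _ in range(n_paths):
    w = 0.0
    for _ in range(n_steps):
        w += (dt ** 0.5) * random.gauss(0.0, 1.0)
    terminals.append(w)

# W_1 should be N(0, 1): empirical mean near 0, variance near 1.
mean_w1 = sum(terminals) / n_paths
var_w1 = sum((w - mean_w1) ** 2 for w in terminals) / n_paths
```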

The starting point of the approach to white noise involving accessible random variables is the closest to what is observed physically (at least in filtering problems in engineering).

Let $H$ be a separable infinite-dimensional Hilbert space over $\mathbf R$ and let ${\mathcal P}$ be the class of orthogonal projections on $H$ with finite-dimensional range. For $\pi \in {\mathcal P}$, let

$${\mathcal C} _ \pi = \left \{ {\pi ^ {- 1 } B } : {B \in {\mathcal B} ( \pi H ) } \right \} ,$$

where ${\mathcal B} ( \pi H )$ denotes the Borel subsets of $\pi H$. Sets in ${\mathcal C} _ \pi$ are called cylinder sets with base in $\pi H$. Let ${\mathcal C}$ be the class of all cylinder sets in $H$, that is,

$${\mathcal C} = \cup _ {\pi \in {\mathcal P} } {\mathcal C} _ \pi .$$

Each ${\mathcal C} _ \pi$ is a $\sigma$-field, but ${\mathcal C}$ is only a field. The canonical Gaussian measure $m$ on ${\mathcal C}$ can be described as follows: Let $C \in {\mathcal C}$ be given by $C = \{ {h \in H } : {( ( h,e _ {1} ) \dots ( h,e _ {n} ) ) \in B } \}$, where $n \geq 1$, $\{ e _ {1} \dots e _ {n} \}$ is an orthonormal set in $H$ and $B \in {\mathcal B} ( \mathbf R ^ {n} )$. Then

$$m ( C ) = ( 2 \pi ) ^ {- {n / 2 } } \int\limits _ { B } { { \mathop{\rm exp} } \left ( - { \frac{1}{2} } \sum _ {i = 1 } ^ { n } x _ {i} ^ {2} \right ) } {d x _ {1} \dots d x _ {n} } .$$

Note that the integrand above is the density function of $n$ independent random variables, each normally distributed with mean $0$ and variance $1$. Thus, the canonical Gaussian measure is a straightforward infinite-dimensional analogue of the measure on $\mathbf R ^ {n}$ obtained by taking the product of $n$ standard normal distributions on $\mathbf R$. In fact, $m$ is not actually a measure, since it is only finitely additive on ${\mathcal C}$; it is, however, countably additive (cf. Countably-additive set function) on each ${\mathcal C} _ \pi$.
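When the base $B$ is a product of half-lines, the Gaussian integral above factors and can be evaluated in closed form via the standard normal distribution function. A small Python sketch (function names are illustrative, not from the article):

```python
import math

def std_normal_cdf(x):
    # Phi(x), the standard normal distribution function, via erf.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def m_of_box_cylinder(bounds):
    """m(C) for C = {h : (h, e_i) <= a_i, i = 1..n}, {e_i} orthonormal.

    The base B is a product of half-lines, so the n-dimensional
    Gaussian integral factors into a product of one-dimensional ones.
    """
    p = 1.0
    for a in bounds:
        p *= std_normal_cdf(a)
    return p

# The quadrant {x_1 <= 0, x_2 <= 0} in R^2 has measure (1/2)^2 = 1/4,
# and m depends only on the base B, not on the orthonormal set chosen.
p = m_of_box_cylinder([0.0, 0.0])
```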

The "measure" $m$ provides a simple and appealing starting point for an approach to Gaussian white noise, but the lack of countable additivity raises questions about the mathematical effectiveness of the model. This issue can be circumvented to a large extent by associating a true probability space $( \Omega, {\mathcal A}, {\mathsf P} )$ with the space $( H, {\mathcal C},m )$ and making use of the countable additivity of ${\mathsf P}$. It is in this context that "accessible random variables" arise.

A representation of $m$ is a pair $( L, {\mathsf P} )$, where ${\mathsf P}$ is a (countably additive) probability measure on a measurable space $( \Omega, {\mathcal A} )$ and $L$ is a mapping (actually, an equivalence class of mappings, see [a6]) from $H$ into a space of $\mathbf R$-valued random variables on $( \Omega, {\mathcal A}, {\mathsf P} )$ such that $L$ is linear and such that, for every cylinder set $C = \{ {h \in H } : {( ( h,h _ {1} ) \dots ( h,h _ {j} ) ) \in B } \}$ with $h _ {1} \dots h _ {j}$ in $H$ and $B \in {\mathcal B} ( \mathbf R ^ {j} )$,

$$m ( C ) = {\mathsf P} \left \{ {w \in \Omega } : {( L ( h _ {1} ) ( w ) \dots L ( h _ {j} ) ( w ) ) \in B } \right \} .$$

The mapping $L$ is linear in the sense that for any $h _ {1} ,h _ {2}$ in $H$ and $a _ {1} ,a _ {2} \in \mathbf R$, $L ( a _ {1} h _ {1} + a _ {2} h _ {2} ) = a _ {1} L ( h _ {1} ) + a _ {2} L ( h _ {2} )$ ${\mathsf P}$-almost surely. A representation of $m$ always exists, see [a6].

The following is an example of a representation: Take $H = L _ {2} [ 0, \infty )$, $\Omega = C _ {0} [ 0, \infty )$ (the continuous functions on $[ 0, \infty )$ that vanish at $0$), ${\mathcal A} = {\mathcal B} ( C _ {0} [ 0, \infty ) )$, and let ${\mathsf P}$ be Wiener measure on ${\mathcal A}$. Finally, given $\phi \in H$, let $L ( \phi ) ( w ) = \int _ {0} ^ \infty {\phi ( t ) } {d w _ {t} }$ be the stochastic integral of $\phi$ with respect to the Wiener path $w = ( w _ {t} ) _ {t \geq 0 }$.
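In this Wiener representation, $L ( \phi )$ is a mean-zero Gaussian random variable with variance $\| \phi \| ^ {2}$ (the Itô isometry). The Python sketch below approximates the stochastic integral by a Riemann sum on a truncated grid and checks this numerically; the choice $\phi ( t ) = e ^ {-t}$, the truncation point, and all names are illustrative assumptions:

```python
import math
import random

random.seed(2)

# Sketch of the Wiener representation: L(phi) = int_0^infty phi dW,
# approximated by a Riemann sum on a grid over [0, T].  Truncating at
# T is harmless here because phi decays exponentially.
T, n_steps = 5.0, 500
dt = T / n_steps

def phi(t):
    return math.exp(-t)      # ||phi||^2 = int_0^infty e^{-2t} dt = 1/2

n_samples = 5_000
vals = []
for _ in range(n_samples):
    s = 0.0
    for k in range(n_steps):
        # dW over [k*dt, (k+1)*dt] is sqrt(dt) times a standard normal.
        s += phi(k * dt) * (dt ** 0.5) * random.gauss(0.0, 1.0)
    vals.append(s)

# L(phi) should have mean 0 and variance ||phi||^2 = 1/2.
mean_v = sum(vals) / n_samples
var_v = sum((v - mean_v) ** 2 for v in vals) / n_samples
```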

An accessible random variable will be defined in terms of Borel cylinder functions, a special class of accessible random variables. A function $f : H \rightarrow \mathbf R$ is called a Borel cylinder function if it can be written as $f ( h ) = g ( ( h,h _ {1} ) \dots ( h,h _ {j} ) )$ for some $j \geq 1$ and $h _ {1} \dots h _ {j}$ in $H$, where $g : {\mathbf R ^ {j} } \rightarrow \mathbf R$ is Borel measurable. One defines $R ( f )$, the lifting of $f$, by the formula

$$R ( f ) ( \cdot ) = g ( L ( h _ {1} ) ( \cdot ) \dots L ( h _ {j} ) ( \cdot ) ) .$$

The space ${\mathcal L} ^ {0} ( H,m )$ consists of the functions $f : H \rightarrow \mathbf R$ satisfying the following condition: for all $\pi \in {\mathcal P}$ the function $f \circ \pi$ is ${\mathcal C} _ \pi$-measurable, and for all sequences $\{ \pi _ {n} \}$ from ${\mathcal P}$ converging strongly to the identity, the sequence $\{ R ( f \circ \pi _ {n} ) \}$ is Cauchy in ${\mathsf P}$-probability. One can show that all such sequences converge in ${\mathsf P}$-probability to the same limit, denoted $R ( f )$ and again called the lifting of $f$; $R ( f )$ is defined ${\mathsf P}$-almost surely. Any $f \in {\mathcal L} ^ {0} ( H,m )$ is called an accessible random variable.
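The Cauchy-in-probability condition can be made concrete in a toy case. Suppose (an illustrative assumption, not from the article) that $\pi _ {n}$ projects onto the span of the first $n$ vectors of an orthonormal basis and $f ( h ) = \sum _ {i} c _ {i} ( h,e _ {i} )$ with $\sum c _ {i} ^ {2} < \infty$; then $R ( f \circ \pi _ {n} ) = \sum _ {i \leq n } c _ {i} Z _ {i}$ for i.i.d. standard normals $Z _ {i}$, and the mean-square gap between partial sums is exactly the tail sum of the $c _ {i} ^ {2}$:

```python
import random

random.seed(4)

# Toy accessibility check: f(h) = sum_i c_i (h, e_i), c_i = 2^{-i}.
# R(f o pi_n) = sum_{i<=n} c_i Z_i with Z_i i.i.d. standard normal;
# the sequence is Cauchy in L^2, hence in P-probability.
c = [1.0 / (2 ** i) for i in range(1, 30)]    # c[0] = 2^{-1}, etc.

def lifted_partial_sum(z, n):
    return sum(c[i] * z[i] for i in range(n))

# Estimate E|R(f o pi_20) - R(f o pi_10)|^2 by Monte Carlo; it should
# match the exact tail sum of squared coefficients, which is tiny.
n_samples = 20_000
gap_sq = 0.0
for _ in range(n_samples):
    z = [random.gauss(0.0, 1.0) for _ in range(20)]
    gap_sq += (lifted_partial_sum(z, 20) - lifted_partial_sum(z, 10)) ** 2
gap_sq /= n_samples

exact = sum(ci ** 2 for ci in c[10:20])
```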

It is often desirable to put further restrictions on the function $f$. ${\mathcal L} ^ {1} ( H,m )$ is defined as an analogue of the usual $L ^ {1}$ space. Given two representations and corresponding liftings, say $( L _ {1} , {\mathsf P} _ {1} )$ with $R _ {1}$ and $( L _ {2} , {\mathsf P} _ {2} )$ with $R _ {2}$, and $f \in {\mathcal L} ^ {1} ( H,m )$, one has

$$\int\limits _ { H } f {dm } = \int\limits _ \Omega {R _ {1} f } {d {\mathsf P} _ {1} } = \int\limits _ \Omega {R _ {2} f } {d {\mathsf P} _ {2} } .$$

Thus, the integral of $f \in {\mathcal L} ^ {1} ( H,m )$ is independent of the representation and the lifting; $H,m$ and $f$ are the essential objects.
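For a Borel cylinder function built from an orthonormal set, this representation-independence is easy to see numerically: any representation sends $( L ( h _ {1} ) \dots L ( h _ {j} ) )$ to $j$ independent standard normals, so $\int _ {H} f {dm }$ reduces to an expectation against a standard Gaussian on $\mathbf R ^ {j}$. A Monte Carlo sketch (function names and the sample size are illustrative assumptions):

```python
import random

random.seed(3)

# For f(h) = g((h, h_1), ..., (h, h_j)) with {h_1, ..., h_j} orthonormal,
# the integral of f over (H, C, m) equals E[g(Z_1, ..., Z_j)] for i.i.d.
# standard normals Z_i, whatever representation (L, P) is used.
def integral_dm(g, j, n_samples=100_000):
    total = 0.0
    for _ in range(n_samples):
        z = [random.gauss(0.0, 1.0) for _ in range(j)]
        total += g(z)
    return total / n_samples

# Example: f(h) = (h, h_1)^2 + (h, h_2)^2 integrates to 1 + 1 = 2.
val = integral_dm(lambda z: z[0] ** 2 + z[1] ** 2, 2)
```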

The situation just described is typical of much of the theory. The straightforward connection with observation is maintained through the role of $( H, {\mathcal C},m )$, but one also has the advantages of a countably additive probability space. It should be said, however, that the theory has its own technical difficulties and some frustrating open questions. For example, it is not always easy to tell whether a function is accessible, and it is unknown whether ${\mathcal L} ^ {1} ( H,m )$ is complete (cf. Complete topological space).

For a detailed discussion of white noise theory via the canonical Gaussian measure and accessible random variables, and for applications of that theory to non-linear filtering, see [a6]. It contains many references to the earlier literature, including references to the seminal work of I.E. Segal and L. Gross. Some more recent papers making use of the theory are [a1], [a4], [a7].

#### References

[a1] A. Budhiraja, G. Kallianpur, "Multiple Ogawa integrals, multiple Stratonovich integrals and the generalized Hu–Meyer formula", Techn. Report Dept. Stat. Univ. North Carolina, 442 (1994)
[a2] T. Hida, H.H. Kuo, J. Potthoff, L. Streit, "White noise analysis", World Sci. (1990)
[a3] N. Ikeda, S. Watanabe, "Stochastic differential equations and diffusion processes", 2nd ed., North-Holland (1989)
[a4] G.W. Johnson, G. Kallianpur, "Homogeneous chaos, $p$-forms, scaling and the Feynman integral", Trans. Amer. Math. Soc., 340 (1993), pp. 503–548
[a5] G. Kallianpur, "Stochastic filtering theory", Springer (1980)
[a6] G. Kallianpur, R.L. Karandikar, "White noise theory of prediction, filtering and smoothing", Gordon & Breach (1988)
[a7] G. Kallianpur, R.L. Karandikar, "Nonlinear transformations of the canonical Gauss measure on Hilbert space and absolute continuity", Acta Appl. Math., 35 (1994), pp. 63–102
[a8] T. Hida, H.H. Kuo, J. Potthoff, L. Streit, "White noise. An infinite dimensional calculus", Kluwer Acad. Publ. (1993)
This article was adapted from an original article by G.W. Johnson (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article