
Stochastic point process



point process

A stochastic process corresponding to a sequence of random variables $ \{ t _ {i} \} $, $ \dots < t _ {-1} < t _ {0} \leq 0 < t _ {1} < t _ {2} < \dots $, on the real line $ \mathbf R ^ {1} $. To each point $ t _ {i} $ there corresponds a random variable $ \Phi \{ t _ {i} \} = 1, 2 ,\dots $ called its multiplicity. In queueing theory a stochastic point process is generated by the moments of arrivals for service, in biology by the moments of impulses in nerve fibres, etc.

The number $ C ( t) $ of all points $ t _ {i} \in [ 0 , t ] $ is called the counting process, $ C ( t) = M ( t) + A ( t) $, where $ M ( t) $ is a martingale and $ A ( t) $ is the compensator with respect to the $ \sigma $- fields $ {\mathcal F} _ {t} $ generated by the random points $ t _ {i} \in [ 0 , t ] $. Many important problems can be solved in terms of properties of the compensator $ A ( t) $.
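
For orientation, the following minimal simulation sketch (not part of the original article) assumes the simplest case of a homogeneous Poisson flow of rate $ \lambda $, for which the compensator is deterministic, $ A ( t) = \lambda t $, so that $ M ( t) = C ( t) - \lambda t $ is a martingale; the rate and time horizon are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, horizon = 2.0, 10.0                 # hypothetical rate and time horizon

# One path of a homogeneous Poisson flow on [0, horizon]
gaps = rng.exponential(1.0 / lam, size=1000)
arrivals = np.cumsum(gaps)
arrivals = arrivals[arrivals <= horizon]

t = np.linspace(0.0, horizon, 501)
C = np.searchsorted(arrivals, t, side="right")   # counting process C(t)
A = lam * t                                      # compensator A(t) = lambda * t
M = C - A                                        # martingale part M(t) = C(t) - A(t)

print("M(horizon) for this path:", M[-1])        # fluctuates around 0 across paths
```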

Let $ X $ be a complete separable metric space, $ \mathfrak B _ {0} $ the class of bounded Borel sets $ B \subset X $, $ N = \{ \phi \} $ the set of all integer-valued measures $ \phi $ with $ \phi ( B) = l < \infty $ for $ B \in \mathfrak B _ {0} $, and $ \mathfrak N $ the minimal $ \sigma $- field generated by the subsets of measures $ \{ \phi : {\phi ( B) = l } \} $ for $ B \in \mathfrak B _ {0} $ and $ l = 0 , 1 , 2 ,\dots $. Specifying a probability measure $ {\mathsf P} $ on the measurable space $ ( N , \mathfrak N ) $ determines a stochastic point process $ \Phi $ with state space $ X $ whose realizations are the integer-valued measures in $ N $. The values $ x \in X $ for which $ \Phi \{ x \} > 0 $ are called the points of $ \Phi $. The quantity $ \Phi ( B) $ is equal to the sum of the multiplicities of the points of $ \Phi $ that lie in $ B $. $ \Phi $ is called simple if $ \Phi \{ x \} \leq 1 $ for all $ x \in X $ and ordinary if, for all $ B \in \mathfrak B _ {0} $ and $ \epsilon > 0 $, there is a partition $ \zeta = ( Z _ {1} ,\dots, Z _ {n} ) $ of $ B $ such that

$$ \sum_{k=1}^ { n } {\mathsf P} \{ \Phi ( Z _ {k} ) > 1 \} < \epsilon . $$

Ordinary stochastic point processes are simple. An important role is played by the factorial moment measures

$$ \Lambda _ {k} ( B) = {\mathsf E} _ {p} \Phi ( B) [ \Phi ( B) - 1 ] \dots [ \Phi ( B) - k + 1 ] $$

and their extensions ( $ {\mathsf E} _ {p} $ is the mathematical expectation and $ \Lambda _ {1} ( B) $ is called the measure of intensity). If $ \Lambda _ {2n} ( B) < \infty $, then

$$ \sum_{k=0}^ { 2n-1} \frac{( - 1 ) ^ {k} }{k!} \Lambda _ {k} ( B) \leq {\mathsf P} \{ \Phi ( B) = 0 \} \leq \ \sum_{k=0}^ { 2n } \frac{( - 1 ) ^ {k} }{k!} \Lambda _ {k} ( B) , $$

$$ \Lambda _ {0} ( B) = 1 . $$
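
As a numerical illustration of these bounds (a sketch, not part of the original article), assume a Poisson stochastic point process: then the $ k $-th factorial moment of $ \Phi ( B) $ equals $ [ \Lambda _ {1} ( B) ] ^ {k} $, so the two sums are partial sums of the Taylor series of $ \mathop{\rm exp} \{ - \Lambda _ {1} ( B) \} = {\mathsf P} \{ \Phi ( B) = 0 \} $ and the inequalities can be checked directly; the value of $ \Lambda _ {1} ( B) $ below is arbitrary.

```python
import math

lam1 = 1.7   # hypothetical value of Lambda_1(B) for a Poisson process
n = 2        # use factorial moments up to order 2n = 4

# For Poisson counts the k-th factorial moment of Phi(B) equals lam1 ** k,
# so the bounds are partial sums of the Taylor series of exp(-lam1).
lower = sum((-1) ** k * lam1 ** k / math.factorial(k) for k in range(2 * n))
upper = sum((-1) ** k * lam1 ** k / math.factorial(k) for k in range(2 * n + 1))
void = math.exp(-lam1)   # P{Phi(B) = 0}

print(lower <= void <= upper, (round(lower, 4), round(void, 4), round(upper, 4)))
```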

A special role in the theory of stochastic point processes is played by Poisson stochastic point processes $ \Phi $, for which: a) the values of $ \Phi $ on disjoint $ B _ {i} \in \mathfrak B _ {0} $ are mutually independent random variables (the property of absence of after-effect); and b)

$$ {\mathsf P} \{ \Phi ( B _ {i} ) = l \} = \ \frac{[ \Lambda _ {1} ( B _ {i} ) ] ^ {l} }{l!} \mathop{\rm exp} \{ - \Lambda _ {1} ( B _ {i} ) \} . $$
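
A minimal simulation sketch of a homogeneous Poisson stochastic point process on the unit square (the intensity, the window and the test set $ B $ are arbitrary choices): given the total count, which is Poisson with mean $ \Lambda _ {1} ( X) $, the points are placed independently and uniformly, which is the standard construction in the homogeneous case.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 50.0   # hypothetical intensity per unit area on the unit square

def sample_poisson_pp(lam, rng):
    """One realization of a homogeneous Poisson point process on [0, 1]^2."""
    n = rng.poisson(lam)              # total count, Poisson with mean Lambda_1([0,1]^2)
    return rng.uniform(size=(n, 2))   # given the count, points are i.i.d. uniform

# Empirical check of the Poisson law of Phi(B) for B = [0, 0.5] x [0, 0.5]
counts = [np.sum((p[:, 0] < 0.5) & (p[:, 1] < 0.5))
          for p in (sample_poisson_pp(lam, rng) for _ in range(5000))]
print("empirical mean of Phi(B):", np.mean(counts), " Lambda_1(B):", lam * 0.25)
```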

For a simple stochastic point process,

$$ \tag{* } \Lambda _ {1} ( B) = \sup_{\zeta} \sum_{k=1}^ { n } {\mathsf P} \{ \Phi ( Z _ {k} ) > 0 \} , $$

where the supremum is taken over all partitions $ \zeta = \{ Z _ {1} ,\dots, Z _ {n} \} $ of $ B $. The relation (*) makes it possible to find explicit expressions for the measure of intensity for many classes of stochastic point processes generated by stochastic processes or random fields.
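
A numerical illustration of (*) under the assumption of a homogeneous Poisson process of rate $ \lambda $ on $ B = [ 0, 1] $ (the rate and the equidistant partitions are arbitrary choices): for the partition of $ B $ into $ n $ equal intervals the sum equals $ n ( 1 - e ^ {- \lambda / n } ) $, which increases to $ \Lambda _ {1} ( B) = \lambda $ as the partition is refined.

```python
import math

lam = 3.0   # hypothetical rate of a homogeneous Poisson process on B = [0, 1]

# For the partition of B into n equal intervals Z_k one has
#   P{Phi(Z_k) > 0} = 1 - exp(-lam / n),
# and the partition sums increase to Lambda_1(B) = lam under refinement.
for n in (1, 10, 100, 1000):
    print(n, round(n * (1.0 - math.exp(-lam / n)), 6))
```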

A generalization of stochastic point processes is given by the so-called marked stochastic point processes, in which marks $ k ( x) $ from some measurable space $ [ K , \mathfrak K ] $ are assigned to the points $ x $ with $ \Phi \{ x \} > 0 $. The service times in a queueing system can be regarded as marks.
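
A minimal sketch of a marked stochastic point process in the queueing interpretation (the arrival rate, the service-time distribution and the horizon are hypothetical modelling assumptions): each arrival moment carries a service time as its mark.

```python
import numpy as np

rng = np.random.default_rng(2)
lam, mu, horizon = 1.0, 2.0, 20.0   # hypothetical arrival rate, service rate, horizon

# A marked point process stored as pairs (arrival moment, mark); here the mark
# attached to each point is an (assumed i.i.d. exponential) service time.
arrivals = np.cumsum(rng.exponential(1.0 / lam, size=200))
arrivals = arrivals[arrivals <= horizon]
marks = rng.exponential(1.0 / mu, size=arrivals.size)
marked_process = list(zip(arrivals, marks))
print(marked_process[:3])
```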

In the theory of stochastic point processes an important role is played by relations connecting, in a special way, the conditional probabilities of distinct events (Palm probabilities). Limit theorems have been obtained for superposition (summation), thinning and other operations on sequences of stochastic point processes. Various generalizations of Poisson stochastic point processes are widely used in applications.
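
As a small illustration of one such operation (a sketch under the added assumption of independent thinning of a Poisson input, a standard special case not derived here): each point of a realization is retained independently with probability $ p $, and the empirical mean count agrees with the intensity $ p \Lambda _ {1} ( B) $ of the thinned process.

```python
import numpy as np

rng = np.random.default_rng(3)
lam, p, n_rep = 10.0, 0.3, 5000   # hypothetical intensity on B and retention probability

# Independent thinning: each point of a realization is kept with probability p.
# For Poisson input the thinned process is again Poisson with intensity p * lam
# (a standard fact, used here only as a sanity check of the mean count).
kept = [rng.binomial(rng.poisson(lam), p) for _ in range(n_rep)]
print("empirical mean of thinned count:", np.mean(kept), " expected:", p * lam)
```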

References

[1] A.Ya. Khinchin, "Mathematical methods in the theory of queueing" , Griffin (1960) (Translated from Russian)
[2] D.R. Cox, V. Isham, "Point processes" , Chapman & Hall (1980)
[3] J. Kerstan, K. Matthes, J. Mecke, "Infinitely divisible point processes" , Wiley (1978) (Translated from German)
[4] Yu.K. Belyaev, "Elements of the general theory of point processes" (Appendix to Russian translation of: H. Cramér, M. Leadbetter, Stationary and related stochastic processes, Wiley, 1967)
[5] R.S. Liptser, A.N. Shiryaev, "Statistics of random processes" , II. Applications , Springer (1978) (Translated from Russian)
[6] M. Jacobsen, "Statistical analysis of counting processes" , Lect. notes in statistics , 12 , Springer (1982)

Comments

Let $ X $, $ \mathfrak B _ {0} $ be as above; let $ \mathfrak X \supset \mathfrak B _ {0} $ be the Borel field of $ X $. Let $ M $ be the collection of all Borel measures on $ ( X, \mathfrak X ) $. For each $ B \in \mathfrak B _ {0} $, $ \mu \mapsto \mu ( B) $ defines a mapping $ M \rightarrow \mathbf R _ {\geq 0 } $, and $ \mathfrak M $ is the $ \sigma $- field generated by those mappings, i.e. the smallest $ \sigma $- field making all these mappings measurable. The integer-valued elements of $ M $ form the subspace $ N $, and $ \mathfrak N $ is the induced $ \sigma $- field on $ N \subset M $.

A random measure on $ X $ is simply a probability measure on $ ( M, \mathfrak M ) $ or, equivalently, a measurable mapping $ \zeta $ of some abstract probability space $ ( \Omega , {\mathcal A} , {\mathsf P} ) $ into $ ( M, \mathfrak M ) $. A point process is the special case that $ \zeta $ takes its values in $ N $.

An element $ \nu \in N $ is simple if $ \nu \{ x \} = 0 $ or $ 1 $ for all $ x \in X $. A simple point process is one that takes its values in the subspace of $ N $ consisting of the simple measures.

Each $ B \in \mathfrak B _ {0} $ defines a function $ M \rightarrow \mathbf R _ {\geq 0 } $, $ \mu \mapsto \mu ( B) $, and hence, given a random measure $ \xi $, a random variable, which will be denoted by $ \xi B $. One can think of a random measure in two ways: as a collection of measures (on $ X $) $ \xi ( \omega ) $ parametrized by a probability space $ ( \Omega , {\mathcal A} , {\mathsf P} ) $, or as a collection of random variables $ \xi B $ (on $ \Omega $ or on $ M $) indexed by $ \mathfrak B _ {0} $, depending on which part of the mapping $ ( \omega , B ) \mapsto \xi ( \omega )( B ) $ one focuses on.

More generally, for each bounded continuous function $ f $ on $ X $ one has the random variable $ \xi f $ defined by

$$ \xi f ( \mu ) = \int\limits _ { X } f( x) \mu ( dx) . $$
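
For a purely atomic realization $ \mu $ with points $ x _ {i} $ and multiplicities $ \mu \{ x _ {i} \} $, this integral reduces to the sum $ \sum _ {i} \mu \{ x _ {i} \} f ( x _ {i} ) $. A minimal sketch (the realization and the test function below are hypothetical):

```python
import math

# A realization mu of a point process on R^1, stored as {point: multiplicity},
# and xi_f(mu) = integral of f d(mu) = sum over atoms of multiplicity * f(point).
mu = {0.3: 1, 1.2: 2, 2.5: 1}        # hypothetical realization
f = lambda x: math.exp(-x)           # a bounded continuous test function

xi_f = sum(mult * f(x) for x, mult in mu.items())
print(xi_f)
```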

For each random measure $ \xi $ one defines the Palm distributions of $ \xi $. For a simple point process $ \xi $ the Palm distribution $ Q _ {x} $ can be thought of as the conditional distribution of $ \xi $ given that $ \xi $ has an atom at $ x \in X $. Palm distributions are of great importance in random measure theory and have applications to queueing theory, branching processes, regenerative sets, stochastic geometry, statistical mechanics, and insurance mathematics (the last, via doubly stochastic Poisson processes, also called Cox processes, which are Poisson processes with stochastic variation in the intensity).

The Palm distribution of a random measure is obtained by disintegrating its Campbell measure on $ X \times M $, which is given by

$$ C( B \times {\mathcal M} ) = \ {\mathsf E} [( \xi B ) 1 _ {\mathcal M} ] $$

for $ B \in \mathfrak B _ {0} $, $ {\mathcal M} \in \mathfrak M $, where $ 1 _ {\mathcal M} $ is the indicator function of $ {\mathcal M} \subset M $, the function $ ( \xi B ) 1 _ {\mathcal M} $ is the (pointwise) product of the two functions $ \xi B $ and $ 1 _ {\mathcal M} : M \rightarrow \mathbf R $, and $ {\mathsf E} $ stands for expectation.
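
A Monte Carlo sketch of the Campbell measure for an assumed Poisson point process on $ X = [ 0, 1] $ (the rate, the set $ B $ and the event $ {\mathcal M} $ are arbitrary choices): the value $ C ( B \times {\mathcal M} ) $ is estimated by averaging $ ( \xi B ) 1 _ {\mathcal M} ( \xi ) $ over simulated realizations and compared with the exact value available in this special case.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(4)
lam, n_rep = 4.0, 20000      # hypothetical rate of a Poisson process on X = [0, 1]
B = (0.0, 0.5)               # a bounded Borel set
cap = 3                      # M = { mu : mu([0, 1]) <= cap }, a measurable set of measures

vals = []
for _ in range(n_rep):
    pts = rng.uniform(size=rng.poisson(lam))        # one realization xi
    xi_B = np.sum((pts >= B[0]) & (pts < B[1]))     # xi(B)
    vals.append(xi_B if pts.size <= cap else 0.0)   # (xi B) * 1_M(xi)

# Exact value in this Poisson case: sum over n <= cap of (n / 2) * P{ xi([0,1]) = n }
exact = sum(0.5 * n * exp(-lam) * lam ** n / factorial(n) for n in range(cap + 1))
print("Monte Carlo estimate of C(B x M):", np.mean(vals), " exact:", exact)
```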

Disintegration of a measure is closely related to conditional distributions (cf. Conditional distribution). Given two measurable spaces $ ( X, \mathfrak X ) $ and $ ( T, {\mathcal T} ) $, a kernel, also called a Markov kernel, from $ X $ to $ T $ is a mapping $ \rho : X \times {\mathcal T} \rightarrow \mathbf R _ {\geq 0 } $ such that $ \rho ( \cdot , A ) : x \mapsto \rho ( x, A ) $ is measurable on $ X $ for all $ A \in {\mathcal T} $ and such that $ \rho _ {x} = \rho ( x, \cdot ): A \mapsto \rho ( x, A ) $ is a $ \sigma $- finite measure on $ ( T, {\mathcal T} ) $ for all $ x \in X $.

Given a $ \sigma $- finite measure $ \mu $ on the product space $ X \times T $, a disintegration of $ \mu $ consists of a $ \sigma $- finite measure $ \nu $ on $ X $ and a kernel $ \rho $ from $ X $ to $ T $ such that $ \rho _ {x} ( T ) \neq 0 $ $ \nu $- almost everywhere and such that for all $ ( B, A) \in \mathfrak X \times {\mathcal T} $,

$$ \tag{a1 } \mu ( B \times A ) = \ \int\limits _ { B } \rho _ {x} ( A) \nu ( dx) . $$

It follows that for every measurable function $ f: X \times T \rightarrow \mathbf R _ {\geq 0 } $,

$$ \tag{a2 } \int\limits \int\limits f( x, t) \mu ( dx dt) = \ \int\limits \nu ( dx) \int\limits f( x, t) \rho _ {x} ( dt) . $$
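
A finite discrete sketch of a disintegration (a hypothetical example with $ X = \{ 0, 1, 2 \} $, $ T = \{ 0, 1 \} $ and arbitrary weights): taking $ \nu $ to be the marginal of $ \mu $ on $ X $ and $ \rho _ {x} $ the normalized slice, the identity (a1) can be verified directly.

```python
import numpy as np

# A finite measure mu on X x T with X = {0, 1, 2}, T = {0, 1}, given by arbitrary weights.
mu = np.array([[0.10, 0.20],
               [0.05, 0.25],
               [0.30, 0.10]])

nu = mu.sum(axis=1)        # a choice of nu: the marginal of mu on X
rho = mu / nu[:, None]     # kernel rho_x = mu(x, .) / nu(x); here probability measures on T

# Check the disintegration identity (a1): mu(B x A) = sum over x in B of rho_x(A) nu(x)
B, A = [0, 2], [1]
lhs = mu[np.ix_(B, A)].sum()
rhs = sum(rho[x, a] * nu[x] for x in B for a in A)
print(lhs, rhs)
```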

The inverse operation is called mixing. Given $ \nu $ and $ \rho $, the measure (a1) is called the mixture of the $ \rho _ {x} $ with respect to $ \nu $ (and (a2) could be called the Fubini formula for mixture measures).

A disintegration exists for a $ \sigma $- finite $ \mu $ if $ ( T, {\mathcal T} ) $ is Polish Borel; this is essentially a matter of conditional distributions. The measure $ \nu $ is unique up to equivalence, and $ \rho $ is unique up to a measurable renormalization $ \nu $- almost everywhere. More generally one studies the disintegration (or decomposition into slices) of a measure $ \mu $ on a space $ Y $ relative to any mapping $ \pi : Y \rightarrow X $ (instead of the projection $ Y = X \times T \rightarrow X $, cf. [a11], [a12]).

For each bounded continuous function $ f $, let $ {\mathsf E} ( \xi f ) $ be the expectation of the random variable $ \xi f $ and let $ {\mathsf E} \xi $ be the measure $ {\mathsf E} \xi ( B) = {\mathsf E} ( \xi B ) $ on $ X $. Then, using (a2), the disintegration of the Campbell measure $ C $ on $ X \times M $ yields the measure $ {\mathsf E} \xi $ on $ X $ and, if $ {\mathsf E} \xi $ is $ \sigma $- finite, the $ \rho _ {x} $ can be normalized $ {\mathsf E} \xi $- almost everywhere to probability measures $ Q _ {x} $ on $ M $ to give

$$ {\mathsf E} ( \xi f 1 _ {\mathcal M} ) = \int\limits Q _ {x} ( {\mathcal M} ) f( x) {\mathsf E} \xi ( dx) . $$

The $ Q _ {x} $ are the Palm distributions (Palm probabilities) of $ \xi $. Equivalently, as a function of $ x $, $ Q _ {x} ( {\mathcal M} ) $ for $ {\mathcal M} \in \mathfrak M $ is $ {\mathsf E} \xi $- almost everywhere the Radon–Nikodým derivative (cf. Radon–Nikodým theorem) of the measure $ {\mathsf E} ( 1 _ {\mathcal M} ( \xi ) \xi ) $ on $ X $ with respect to $ {\mathsf E} \xi $. Here $ 1 _ {\mathcal M} ( \xi ) \xi $ is the random measure $ \Omega \rightarrow M $,

$$ ( 1 _ {\mathcal M} ( \xi ) \xi )( \omega ) = \ \left \{ \begin{array}{ll} 0 & \textrm{ if } \xi ( \omega ) \notin {\mathcal M} , \\ \xi ( \omega ) & \textrm{ if } \xi ( \omega ) \in {\mathcal M} , \\ \end{array} \right .$$

i.e. the trace of $ \xi $ on $ {\mathcal M} $.
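
A Monte Carlo check of the defining identity for the $ Q _ {x} $ in an assumed special case (not derived above): for a homogeneous Poisson process on $ X = [ 0, 1] $ of rate $ \lambda $ it is known that $ Q _ {x} $ is the distribution of $ \xi + \delta _ {x} $ (Slivnyak's theorem), so for $ {\mathcal M} = \{ \mu : \mu ( [ 0, 1] ) \leq m \} $ and $ f ( x) = x $ the right-hand side can be computed in closed form and compared with a simulation of the left-hand side; the rate and $ m $ below are arbitrary choices.

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(5)
lam, m, n_rep = 4.0, 5, 100000   # hypothetical rate on X = [0, 1] and cap defining M

# M = { mu : mu([0, 1]) <= m },  f(x) = x,  E xi (dx) = lam dx.
# Left-hand side  E( xi f * 1_M )  by Monte Carlo:
lhs_samples = []
for _ in range(n_rep):
    pts = rng.uniform(size=rng.poisson(lam))           # one Poisson realization xi
    lhs_samples.append(pts.sum() if pts.size <= m else 0.0)
lhs = np.mean(lhs_samples)

# Right-hand side, using Q_x(M) = P( xi + delta_x in M ) = P( N <= m - 1 ),  N ~ Poisson(lam)
# (the assumed Poisson form of the Palm distribution):
q = sum(exp(-lam) * lam ** k / factorial(k) for k in range(m))   # P(N <= m - 1)
rhs = q * lam * 0.5                                # integral of f(x) = x over [0, 1] is 1/2
print(lhs, rhs)
```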

References

[a1] A.A. Borovkov, "Stochastic processes in queueing theory" , Springer (1976) (Translated from Russian)
[a2] P.A.W. Lewis (ed.) , Stochastic point processes: statistical analysis theory and applications , Wiley (Interscience) (1972)
[a3] V.K. Murthy, "The general point process" , Addison-Wesley (1974)
[a4] D.L. Snyder, "Random point processes" , Wiley (1975)
[a5] D.J. Daley, D. Vere-Jones, "An introduction to the theory of point processes" , Springer (1978)
[a6] F. Baccelli, P. Brémaud, "Palm probabilities and stationary queues" , Lect. notes in statistics , 41 , Springer (1987)
[a7] P. Brémaud, "Point processes and queues - Martingale dynamics" , Springer (1981)
[a8] J. Neveu, "Processus ponctuels" J. Hoffmann-Jørgensen (ed.) T.M. Liggett (ed.) J. Neveu (ed.) , Ecole d'été de St. Flour VI 1976 , Lect. notes in math. , 598 , Springer (1977) pp. 250–448
[a9] O. Kallenberg, "Random measures" , Akademie Verlag & Acad. Press (1986)
[a10] J. Grandell, "Doubly stochastic Poisson processes" , Springer (1976)
[a11] H. Bauer, "Probability theory and elements of measure theory" , Holt, Rinehart & Winston (1972) (Translated from German)
[a12] N. Bourbaki, "Intégration" , Eléments de mathématiques , Hermann (1967) pp. Chapt. 5: Intégration des mesures, §6.6
[a13] N. Bourbaki, "Intégration" , Eléments de mathématiques , Hermann (1959) pp. Chapt. 6: Intégration vectorielle, §3
This article was adapted from an original article by Yu.K. Belyaev (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098. URL: http://encyclopediaofmath.org/index.php?title=Stochastic_point_process&oldid=54988