''point process''
 
A [[Stochastic process|stochastic process]] corresponding to a sequence of random variables $\{ t_i \}$, $\dots < t_{-1} < t_0 \leq 0 < t_1 < t_2 < \dots$, on the real line $\mathbf R^1$. Each value $t_i$ corresponds to a random variable $\Phi \{ t_i \} = 1, 2 ,\dots$ called its multiplicity. In [[Queueing theory|queueing theory]] a stochastic point process is generated by the moments of arrivals for service, in biology by the moments of impulses in nerve fibres, etc.
  
The number $C(t)$ of all points $t_i \in [0, t]$ is called the counting process, $C(t) = M(t) + A(t)$, where $M(t)$ is a [[Martingale|martingale]] and $A(t)$ is the compensator with respect to the $\sigma$-fields ${\mathcal F}_t$ generated by the random points $t_i \in [0, t]$. Many important problems can be solved in terms of properties of the compensator $A(t)$.
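For the homogeneous Poisson process the compensator is deterministic, $A(t) = \lambda t$, so $M(t) = C(t) - \lambda t$ has mean zero. A minimal simulation sketch of this martingale property (the function name and parameters are illustrative, not from the article):

```python
import random

def poisson_arrival_times(rate, horizon, rng):
    """Arrival times t_i of a homogeneous Poisson process on [0, horizon],
    built from i.i.d. exponential inter-arrival times."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t > horizon:
            return times
        times.append(t)

rng = random.Random(0)
rate, horizon, n_paths = 2.0, 10.0, 5000

# Compensator A(t) = rate * t, so M(t) = C(t) - rate * t is a martingale
# with mean zero; the empirical mean over many paths should be near 0.
mean_M = sum(len(poisson_arrival_times(rate, horizon, rng)) - rate * horizon
             for _ in range(n_paths)) / n_paths
print(abs(mean_M) < 0.3)
```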
  
Let $X$ be a complete separable metric space, $\mathfrak B_0$ the class of bounded Borel sets $B \subset X$, $N = \{ \phi \}$ the set of all measures that take integral values, $\phi (B) = l < \infty$, and $\mathfrak N$ the minimal $\sigma$-field generated by the subsets of measures $\{ \phi : \phi (B) = l \}$ for $B \in \mathfrak B_0$ and $l = 0, 1, 2 ,\dots$. Specifying a probability measure ${\mathsf P}$ on the measurable space $( N , \mathfrak N )$ determines a stochastic point process $\Phi$ with state space $X$ whose realizations are integer-valued measures on $X$. The values $x \in X$ for which $\Phi \{ x \} > 0$ are called the points of $\Phi$. The quantity $\Phi (B)$ is equal to the sum of the multiplicities of the points of $\Phi$ that lie in $B$. $\Phi$ is called simple if $\Phi \{ x \} \leq 1$ for all $x \in X$, and ordinary if, for all $B \in \mathfrak B_0$ and $\epsilon > 0$, there is a partition $\zeta = ( Z_1 ,\dots, Z_n )$ of $B$ such that
  
$$
\sum_{k=1}^{n} {\mathsf P} \{ \Phi ( Z_k ) > 1 \} < \epsilon .
$$
  
 
Ordinary stochastic point processes are simple. An important role is played by the factorial moment measures
  
$$
\Lambda_k (B) = {\mathsf E}_p \Phi (B) [ \Phi (B) - 1 ] \dots [ \Phi (B) - k + 1 ]
$$
  
and their extensions ( ${\mathsf E}_p$ is the mathematical expectation and $\Lambda_1 (B)$ is called the measure of intensity). If $\Lambda_{2n} (B) < \infty$, then
  
$$
\sum_{k=0}^{2n-1} \frac{( - 1 )^k}{k!} \Lambda_k (B) \leq {\mathsf P} \{ \Phi (B) = 0 \} \leq \sum_{k=0}^{2n} \frac{( - 1 )^k}{k!} \Lambda_k (B) ,
$$

$$
\Lambda_0 (B) = 1 .
$$

A special role in the theory of stochastic point processes is played by Poisson stochastic point processes $\Phi$, for which: a) the values of $\Phi$ on disjoint $B_i \in \mathfrak B_0$ are mutually-independent random variables (the property of absence of after-effect); and b)

$$
{\mathsf P} \{ \Phi ( B_i ) = l \} = \frac{[ \Lambda_1 ( B_i ) ]^l}{l!} \mathop{\rm exp} \{ - \Lambda_1 ( B_i ) \} .
$$
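Property b) can be checked empirically for a homogeneous process on an interval, where the intensity measure is a multiple of Lebesgue measure, $\Lambda_1 ( B_i ) = \lambda | B_i |$. A simulation sketch under that assumption (function names are illustrative):

```python
import math
import random

def sample_poisson_points(rate, a, b, rng):
    """One realization of a homogeneous Poisson point process on [a, b]:
    draw the total count N (Poisson with mean rate*(b-a)) by cdf inversion,
    then scatter N independent uniform points."""
    lam = rate * (b - a)
    u, k, p, cdf = rng.random(), 0, math.exp(-lam), math.exp(-lam)
    while u > cdf:
        k += 1
        p *= lam / k
        cdf += p
    return [rng.uniform(a, b) for _ in range(k)]

rng = random.Random(1)
rate = 3.0

# Counts on B_1 = [0, 1): Poisson distributed with mean Lambda_1(B_1) = rate.
counts = [sum(1 for x in sample_poisson_points(rate, 0.0, 2.0, rng) if x < 1.0)
          for _ in range(4000)]
mean_count = sum(counts) / len(counts)
print(abs(mean_count - rate) < 0.2)
```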
  
 
For a simple stochastic point process,
  
$$ \tag{* }
\Lambda_1 (B) = \inf \sum_{k=1}^{n} {\mathsf P} \{ \Phi ( Z_k ) > 0 \} ,
$$
  
where the infimum is taken over all partitions $\zeta = \{ Z_1 ,\dots, Z_n \}$ of $B$. The relation (*) makes it possible to find explicit expressions for the measure of intensity for many classes of stochastic point processes generated by stochastic processes or random fields.
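For a homogeneous Poisson process on $B = [0, 1]$ the partition sums in (*) are explicit, since ${\mathsf P} \{ \Phi ( Z_k ) > 0 \} = 1 - e^{- \lambda | Z_k |}$, and under refinement of an equal partition they approach $\Lambda_1 (B) = \lambda$. A small numerical sketch (names illustrative):

```python
import math

rate, length = 3.0, 1.0  # homogeneous Poisson process on B = [0, 1]

def partition_sum(n):
    """Sum of P{Phi(Z_k) > 0} over the equal partition of B into n cells;
    for a Poisson process each term is 1 - exp(-rate * |Z_k|)."""
    cell = length / n
    return n * (1.0 - math.exp(-rate * cell))

# The sums approach Lambda_1(B) = rate * length = 3.0 as the partition refines.
for n in (1, 10, 1000):
    print(n, round(partition_sum(n), 4))  # 0.9502, 2.5918, 2.9955
```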
  
A generalization of stochastic point processes are the so-called marked stochastic point processes, in which marks $k (x)$ from some measurable space $[ K , \mathfrak N ]$ are assigned to points $x$ with $\Phi \{ x \} > 0$. The service times in a queueing system can be regarded as marks.
  
 
In the theory of stochastic point processes, an important role is played by relations connecting, in a special way, given conditional probabilities of distinct events (Palm probabilities). Limit theorems have been obtained for superposition (summation), thinning out and other operations on sequences of stochastic point processes. Various generalizations of Poisson stochastic point processes are widely used in applications.
 
====References====
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  A.Ya. Khinchin,  "Mathematical methods in the theory of queueing" , Griffin  (1960)  (Translated from Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  D.R. Cox,  V. Isham,  "Point processes" , Chapman &amp; Hall  (1980)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  J. Kerstan,  K. Matthes,  J. Mecke,  "Infinitely divisible point processes" , Wiley  (1978)  (Translated from German)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  Yu.K. Belyaev,  "Elements of the general theory of point processes"  (Appendix to Russian translation of: H. Cramér, M. Leadbetter, Stationary and related stochastic processes, Wiley, 1967)</TD></TR><TR><TD valign="top">[5]</TD> <TD valign="top">  R.S. Liptser,  A.N. Shiryaev,  "Statistics of random processes" , '''II. Applications''' , Springer  (1978)  (Translated from Russian)</TD></TR><TR><TD valign="top">[6]</TD> <TD valign="top">  M. Jacobson,  "Statistical analysis of counting processes" , ''Lect. notes in statistics'' , '''12''' , Springer  (1982)</TD></TR></table>
 
 
 
  
 
====Comments====
 
Let $X$, $\mathfrak B_0$ be as above; let $\mathfrak X \supset \mathfrak B_0$ be the Borel field of $X$. Let $M$ be the collection of all Borel measures on $( X, \mathfrak X )$. For each $B \in \mathfrak B_0$, $\mu \mapsto \mu (B)$ defines a mapping $M \rightarrow \mathbf R_{\geq 0}$, and $\mathfrak M$ is the $\sigma$-field generated by those mappings, i.e. the smallest $\sigma$-field making all these mappings measurable. The integer-valued elements of $M$ form the subspace $N$, and $\mathfrak N$ is the induced $\sigma$-field on $N \subset M$.
  
A random measure on $X$ is simply a probability measure on $( M, \mathfrak M )$ or, equivalently, a measurable mapping $\zeta$ of some abstract probability space $( \Omega , {\mathcal A} , {\mathsf P} )$ into $( M, \mathfrak M )$. A point process is the special case in which $\zeta$ takes its values in $N$.
  
An element $\nu \in N$ is simple if $\nu \{ x \} = 0$ or $1$ for all $x \in X$. A simple point process is one that takes its values in the subspace of $N$ consisting of the simple measures.
  
Each $B \in \mathfrak B_0$ defines a function $M \rightarrow \mathbf R_{\geq 0}$, $\mu \mapsto \mu (B)$, and hence, given a random measure $\xi$, a random variable, which will be denoted by $\xi B$. One can think of a random measure in two ways: as a collection of measures (on $X$) $\xi ( \omega )$ parametrized by a probability space $( \Omega , {\mathcal A} , {\mathsf P} )$, or as a collection of random variables $\xi B$ (on $\Omega$ or on $M$) indexed by $\mathfrak B_0$, depending on which part of the mapping $( \omega , B ) \mapsto \xi ( \omega )( B )$ one focuses on.
  
More generally, for each bounded continuous function $f$ on $X$ one has the random variable $\xi f$ defined by

$$
\xi f ( \mu ) = \int\limits_{X} f (x) \mu ( dx ) .
$$
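When $\mu$ is a simple integer-valued measure with unit atoms at points $x_i$, the integral reduces to the sum $\sum_i f ( x_i )$. A small sketch (the helper name is ours, not standard notation):

```python
import math

def xi_f(points, f):
    """xi f (mu) = integral of f d(mu), for a simple realization
    mu given as a sum of unit point masses at the listed points."""
    return sum(f(x) for x in points)

# Realization with atoms at 0.0, 0.5, 2.0 and f(x) = exp(-x):
val = xi_f([0.0, 0.5, 2.0], lambda x: math.exp(-x))
print(round(val, 4))  # 1 + e^{-0.5} + e^{-2} = 1.7419
```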
  
For each random measure $\xi$ one defines the Palm distributions of $\xi$. For a simple point process $\xi$, the Palm distribution $Q_x$ can be thought of as the conditional distribution of $\xi$ given that $\xi$ has an atom at $x \in X$. Palm distributions are of great importance in random measure theory and have applications to queueing theory, branching processes, regenerative sets, stochastic geometry, statistical mechanics, and insurance mathematics (the last via doubly stochastic Poisson processes, also called Cox processes, which are Poisson processes with stochastic variation in the intensity).
  
The Palm distribution of a random measure is obtained by disintegrating its Campbell measure on $X \times M$, which is given by
  
$$
C ( B \times {\mathcal M} ) = {\mathsf E} [ ( \xi B ) 1_{\mathcal M} ]
$$
  
for $B \in \mathfrak B_0$, ${\mathcal M} \in \mathfrak M$, where $1_{\mathcal M}$ is the indicator function of ${\mathcal M} \subset M$, the function $( \xi B ) 1_{\mathcal M}$ is the (pointwise) product of the two functions $\xi B$ and $1_{\mathcal M} : M \rightarrow \mathbf R$, and ${\mathsf E}$ stands for expectation.
  
Disintegration of a measure is closely related to conditional distributions (cf. [[Conditional distribution|Conditional distribution]]). Given two measurable spaces $( X, \mathfrak X )$ and $( T, {\mathcal T} )$, a kernel, also called a Markov kernel, from $X$ to $T$ is a mapping $\rho : X \times {\mathcal T} \rightarrow \mathbf R_{\geq 0}$ such that $\rho ( \cdot , A ) : x \mapsto \rho ( x, A )$ is measurable on $X$ for all $A \in {\mathcal T}$, and such that $\rho_x = \rho ( x, \cdot ) : A \mapsto \rho ( x, A )$ is a $\sigma$-finite measure on $( T, {\mathcal T} )$ for all $x \in X$.
  
Given a <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170136.png" />-finite measure <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170137.png" /> on the product space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170138.png" />, a disintegration of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170139.png" /> consists of a <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170140.png" />-finite measure <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170141.png" /> on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170142.png" /> and a kernel <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170143.png" /> from <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170144.png" /> to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170145.png" /> such that <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170146.png" /> <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170147.png" />-almost everywhere and such that for all <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170148.png" />,
+
Given a $  \sigma $-
 +
finite measure $  \mu $
 +
on the product space $  X \times T $,  
 +
a disintegration of $  \mu $
 +
consists of a $  \sigma $-
 +
finite measure $  \nu $
 +
on $  X $
 +
and a kernel $  \rho $
 +
from $  X $
 +
to $  T $
 +
such that $  \rho _ {x} ( T ) \neq 0 $
 +
$  \nu $-
 +
almost everywhere and such that for all $  ( B, A) \in \mathfrak X \times {\mathcal T} $,
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170149.png" /></td> <td valign="top" style="width:5%;text-align:right;">(a1)</td></tr></table>
+
$$ \tag{a1 }
 +
\mu ( B \times A )  = \
 +
\int\limits _ { B } \rho _ {x} ( A) \nu ( dx) .
 +
$$
  
It follows that for every measurable function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170150.png" />,
+
It follows that for every measurable function $  f:  X \times T \rightarrow \mathbf R _  \geq  0 $,
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170151.png" /></td> <td valign="top" style="width:5%;text-align:right;">(a2)</td></tr></table>
+
$$ \tag{a2 }
 +
\int\limits \int\limits f( x, t) \mu ( dx  dt)  = \
 +
\int\limits \nu ( dx) \int\limits f( x, t) \rho _ {x} ( dt) .
 +
$$
  
The inverse operation is called mixing. Given <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170152.png" /> and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170153.png" />, the measure (a1) is called the mixture of the <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170154.png" /> with respect to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170155.png" /> (and (a2) could be called the Fubini formula for mixture measures).
+
The inverse operation is called mixing. Given $  \nu $
 +
and $  \rho $,  
 +
the measure (a1) is called the mixture of the $  \rho _ {x} $
 +
with respect to $  \nu $(
 +
and (a2) could be called the Fubini formula for mixture measures).
  
A disintegration exists for a <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170156.png" />-finite <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170157.png" /> if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170158.png" /> is Polish Borel. This reduces to a matter of conditional distributions. The measure <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170159.png" /> is unique up to equivalence, and <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170160.png" /> is unique up to a measurable renormalization <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170161.png" />-almost everywhere. More generally one studies disintegration (or decomposition into slices) of a measure <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170162.png" /> on a space <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170163.png" /> relative to any mapping <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170164.png" /> (instead of the projection <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170165.png" />, cf. [[#References|[a11]]], [[#References|[a12]]]).
+
A disintegration exists for a $  \sigma $-
 +
finite $  \mu $
 +
if $  ( T, {\mathcal T} ) $
 +
is Polish Borel. This reduces to a matter of conditional distributions. The measure $  \nu $
 +
is unique up to equivalence, and $  \rho $
 +
is unique up to a measurable renormalization $  \nu $-
 +
almost everywhere. More generally one studies disintegration (or decomposition into slices) of a measure $  \mu $
 +
on a space $  Y $
 +
relative to any mapping $  \pi : Y \rightarrow X $(
 +
instead of the projection $  Y = X \times T \rightarrow X $,  
 +
cf. [[#References|[a11]]], [[#References|[a12]]]).
  
For each bounded continuous function <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170166.png" />, let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170167.png" /> be the expectation of the random variable <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170168.png" /> and let <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170169.png" /> be the measure <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170170.png" /> on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170171.png" />. Then, using (a2), the disintegration of the Campbell measure <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170172.png" /> on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170173.png" /> yields the measure <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170174.png" /> on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170175.png" /> and, if <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170176.png" /> is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170177.png" />-finite, the <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170178.png" /> can be normalized <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170179.png" />-almost everywhere to probability measures <img align="absmiddle" 
border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170180.png" /> on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170181.png" /> to give
+
For each bounded continuous function $  f $,  
 +
let $  {\mathsf E} ( \xi f  ) $
 +
be the expectation of the random variable $  \xi f $
 +
and let $  {\mathsf E} \xi $
 +
be the measure $  {\mathsf E} \xi ( B) = {\mathsf E} ( \xi B ) $
 +
on $  X $.  
 +
Then, using (a2), the disintegration of the Campbell measure $  C $
 +
on $  X \times M $
 +
yields the measure $  {\mathsf E} \xi $
 +
on $  X $
 +
and, if $  {\mathsf E} \xi $
 +
is $  \sigma $-
 +
finite, the $  \rho _ {x} $
 +
can be normalized $  {\mathsf E} \xi $-
 +
almost everywhere to probability measures $  Q _ {x} $
 +
on $  M $
 +
to give
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170182.png" /></td> </tr></table>
+
$$
 +
{\mathsf E} ( \xi f 1 _  {\mathcal M}  )  = \int\limits Q _ {x} ( {\mathcal M} )
 +
f( x) {\mathsf E} \xi ( dx) .
 +
$$
  
The <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170183.png" /> are the Palm distributions (Palm probabilities) of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170184.png" />. Equivalently, as a function of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170185.png" />, <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170186.png" /> for <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170187.png" /> is <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170188.png" />-almost everywhere the Radon–Nikodým derivative (cf. [[Radon–Nikodým theorem|Radon–Nikodým theorem]]) of the measure <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170189.png" /> on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170190.png" /> with respect to <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170191.png" />. Here <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170192.png" /> is the random measure <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170193.png" />,
+
The $  Q _ {x} $
 +
are the Palm distributions (Palm probabilities) of $  \xi $.  
 +
Equivalently, as a function of $  x $,  
 +
$  Q _ {x} ( {\mathcal M} ) $
 +
for $  {\mathcal M} \in \mathfrak M $
 +
is $  {\mathsf E} \xi $-
 +
almost everywhere the Radon–Nikodým derivative (cf. [[Radon–Nikodým theorem|Radon–Nikodým theorem]]) of the measure $  {\mathsf E} ( 1 _  {\mathcal M}  ( \xi ) \xi ) $
 +
on $  X $
 +
with respect to $  {\mathsf E} \xi $.  
 +
Here $  1 _  {\mathcal M}  ( \xi ) \xi $
 +
is the random measure $  \Omega \rightarrow M $,
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170194.png" /></td> </tr></table>
+
$$
 +
( 1 _  {\mathcal M}  ( \xi ) \xi )( \omega )  = \
 +
\left \{
  
i.e. the trace of <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170195.png" /> on <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s090/s090170/s090170196.png" />.
+
i.e. the trace of $  \xi $
 +
on $  {\mathcal M} $.
  
 
====References====
 
====References====
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  A.A. Borovkov,  "Stochastic processes in queueing theory" , Springer  (1976)  (Translated from Russian)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  P.A.W. Lewis (ed.) , ''Stochastic point processes: statistical analysis theory and applications'' , Wiley (Interscience)  (1972)</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top">  V.K. Murthy,  "The general point process" , Addison-Wesley  (1974)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top">  D.C. Snyder,  "Random point processes" , Wiley  (1975)</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top">  D.J. Daley,  D. Vere-Jones,  "An introduction to the theory of point processes" , Springer  (1978)</TD></TR><TR><TD valign="top">[a6]</TD> <TD valign="top">  F. Baccelli,  P. Brémaud,  "Palm probabilities and stationary queues" , ''Lect. notes in statistics'' , '''41''' , Springer  (1987)</TD></TR><TR><TD valign="top">[a7]</TD> <TD valign="top">  P. Brémaud,  "Point processes and queues - Martingale dynamics" , Springer  (1981)</TD></TR><TR><TD valign="top">[a8]</TD> <TD valign="top">  J. Neveu,  "Processus ponctuels"  J. Hoffmann-Jørgensen (ed.)  T.M. Liggett (ed.)  J. Neveu (ed.) , ''Ecole d'été de St. Flour VI 1976'' , ''Lect. notes in math.'' , '''598''' , Springer  (1977)  pp. 250–448</TD></TR><TR><TD valign="top">[a9]</TD> <TD valign="top">  O. Kallenberg,  "Random measures" , Akademie Verlag &amp; Acad. Press  (1986)</TD></TR><TR><TD valign="top">[a10]</TD> <TD valign="top">  J. Grandell,  "Doubly stochastic Poisson processes" , Springer  (1976)</TD></TR><TR><TD valign="top">[a11]</TD> <TD valign="top">  H. Bauer,  "Probability theory and elements of measure theory" , Holt, Rinehart &amp; Winston  (1972)  (Translated from German)</TD></TR><TR><TD valign="top">[a12]</TD> <TD valign="top">  N. Bourbaki,  "Intégration" , ''Eléments de mathématiques'' , Hermann  (1967)  pp. Chapt. 
5: Intégration des mesures, §6.6</TD></TR><TR><TD valign="top">[a13]</TD> <TD valign="top">  N. Bourbaki,  "Intégration" , ''Eléments de mathématiques'' , Hermann  (1959)  pp. Chapt. 6: Intégration vectorielle, §3</TD></TR></table>
 
<table><TR><TD valign="top">[a1]</TD> <TD valign="top">  A.A. Borovkov,  "Stochastic processes in queueing theory" , Springer  (1976)  (Translated from Russian)</TD></TR><TR><TD valign="top">[a2]</TD> <TD valign="top">  P.A.W. Lewis (ed.) , ''Stochastic point processes: statistical analysis theory and applications'' , Wiley (Interscience)  (1972)</TD></TR><TR><TD valign="top">[a3]</TD> <TD valign="top">  V.K. Murthy,  "The general point process" , Addison-Wesley  (1974)</TD></TR><TR><TD valign="top">[a4]</TD> <TD valign="top">  D.C. Snyder,  "Random point processes" , Wiley  (1975)</TD></TR><TR><TD valign="top">[a5]</TD> <TD valign="top">  D.J. Daley,  D. Vere-Jones,  "An introduction to the theory of point processes" , Springer  (1978)</TD></TR><TR><TD valign="top">[a6]</TD> <TD valign="top">  F. Baccelli,  P. Brémaud,  "Palm probabilities and stationary queues" , ''Lect. notes in statistics'' , '''41''' , Springer  (1987)</TD></TR><TR><TD valign="top">[a7]</TD> <TD valign="top">  P. Brémaud,  "Point processes and queues - Martingale dynamics" , Springer  (1981)</TD></TR><TR><TD valign="top">[a8]</TD> <TD valign="top">  J. Neveu,  "Processus ponctuels"  J. Hoffmann-Jørgensen (ed.)  T.M. Liggett (ed.)  J. Neveu (ed.) , ''Ecole d'été de St. Flour VI 1976'' , ''Lect. notes in math.'' , '''598''' , Springer  (1977)  pp. 250–448</TD></TR><TR><TD valign="top">[a9]</TD> <TD valign="top">  O. Kallenberg,  "Random measures" , Akademie Verlag &amp; Acad. Press  (1986)</TD></TR><TR><TD valign="top">[a10]</TD> <TD valign="top">  J. Grandell,  "Doubly stochastic Poisson processes" , Springer  (1976)</TD></TR><TR><TD valign="top">[a11]</TD> <TD valign="top">  H. Bauer,  "Probability theory and elements of measure theory" , Holt, Rinehart &amp; Winston  (1972)  (Translated from German)</TD></TR><TR><TD valign="top">[a12]</TD> <TD valign="top">  N. Bourbaki,  "Intégration" , ''Eléments de mathématiques'' , Hermann  (1967)  pp. Chapt. 
5: Intégration des mesures, §6.6</TD></TR><TR><TD valign="top">[a13]</TD> <TD valign="top">  N. Bourbaki,  "Intégration" , ''Eléments de mathématiques'' , Hermann  (1959)  pp. Chapt. 6: Intégration vectorielle, §3</TD></TR></table>

Revision as of 08:23, 6 June 2020


point process

A stochastic process corresponding to a sequence of random variables $ \{ t _ {i} \} $, $ \dots < t _ {-1} < t _ {0} \leq 0 < t _ {1} < t _ {2} < \dots $, on the real line $ \mathbf R ^ {1} $. Each value $ t _ {i} $ corresponds to a random variable $ \Phi \{ t _ {i} \} = 1, 2, \dots $ called its multiplicity. In queueing theory a stochastic point process is generated by the moments of arrivals for service, in biology by the moments of impulses in nerve fibres, etc.

The number $ C ( t) $ of points $ t _ {i} \in [ 0, t ] $ defines the counting process $ C ( t) = M ( t) + A ( t) $, where $ M ( t) $ is a martingale and $ A ( t) $ is the compensator with respect to the $ \sigma $-fields $ {\mathcal F} _ {t} $ generated by the random points $ t _ {i} \in [ 0, t ] $. Many important problems can be solved in terms of properties of the compensator $ A ( t) $.
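The decomposition $ C(t) = M(t) + A(t) $ can be illustrated numerically. The sketch below assumes a homogeneous Poisson process with rate `lam`, for which the compensator is $ A(t) = \lambda t $, so $ M(t) = C(t) - \lambda t $ should have mean zero; all parameter values are illustrative choices, not taken from the article.

```python
import random

def poisson_counting_path(lam, horizon, rng):
    """Arrival times t_i in (0, horizon] generated via i.i.d. exponential gaps."""
    arrivals, t = [], 0.0
    while True:
        t += rng.expovariate(lam)
        if t > horizon:
            return arrivals
        arrivals.append(t)

def mean_martingale_part(lam, t, horizon, n_runs, seed=0):
    """Monte Carlo estimate of E[C(t) - lam * t] over n_runs sample paths."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_runs):
        c_t = sum(1 for s in poisson_counting_path(lam, horizon, rng) if s <= t)
        total += c_t - lam * t          # M(t) = C(t) - A(t)
    return total / n_runs

print(mean_martingale_part(lam=2.0, t=3.0, horizon=5.0, n_runs=20000))
```

The printed estimate should be close to zero, reflecting the martingale property of $ M(t) $.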

Let $ X $ be a complete separable metric space, $ \mathfrak B _ {0} $ the class of bounded Borel sets $ B \subset X $, $ N = \{ \phi \} $ the set of all integer-valued measures with $ \phi ( B) = l < \infty $ for $ B \in \mathfrak B _ {0} $, and $ \mathfrak N $ the minimal $ \sigma $-field generated by the subsets of measures $ \{ \phi : {\phi ( B) = l } \} $ for $ B \in \mathfrak B _ {0} $ and $ l = 0, 1, 2, \dots $. Specifying a probability measure $ {\mathsf P} $ on the measurable space $ ( N , \mathfrak N ) $ determines a stochastic point process $ \Phi $ with state space $ X $ whose realizations are integer-valued measures on $ X $. The values $ x \in X $ for which $ \Phi \{ x \} > 0 $ are called the points of $ \Phi $. The quantity $ \Phi ( B) $ is equal to the sum of the multiplicities of the points of $ \Phi $ that lie in $ B $. $ \Phi $ is called simple if $ \Phi \{ x \} \leq 1 $ for all $ x \in X $, and ordinary if, for all $ B \in \mathfrak B _ {0} $ and $ \epsilon > 0 $, there is a partition $ \zeta = ( Z _ {1}, \dots, Z _ {n} ) $ of $ B $ such that

$$ \sum _ {k=1} ^ {n} {\mathsf P} \{ \Phi ( Z _ {k} ) > 1 \} < \epsilon . $$

Ordinary stochastic point processes are simple. An important role is played by the factorial moment measures

$$ \Lambda _ {k} ( B) = {\mathsf E} _ {\mathsf P} \Phi ( B) [ \Phi ( B) - 1 ] \cdots [ \Phi ( B) - k + 1 ] $$

and their extensions ($ {\mathsf E} _ {\mathsf P} $ is the mathematical expectation and $ \Lambda _ {1} ( B) $ is called the measure of intensity). If $ \Lambda _ {2n} ( B) < \infty $, then

$$ \sum _ {k=0} ^ {2n-1} \frac{( - 1 ) ^ {k} }{k!} \Lambda _ {k} ( B) \leq {\mathsf P} \{ \Phi ( B ) = 0 \} \leq \ \sum _ {k=0} ^ {2n} \frac{( - 1 ) ^ {k} }{k!} \Lambda _ {k} ( B) , $$

$$ \Lambda _ {0} ( B) = 1 . $$

A special role in the theory of stochastic point processes is played by Poisson stochastic point processes $ \Phi $, for which: a) the values of $ \Phi $ on disjoint $ B _ {i} \in \mathfrak B _ {0} $ are mutually-independent random variables (the property of absence of after-effect); and b)

$$ {\mathsf P} \{ \Phi ( B _ {i} ) = l \} = \ \frac{[ \Lambda _ {1} ( B _ {i} ) ] ^ {l} }{l!} \mathop{\rm exp} \{ - \Lambda _ {1} ( B _ {i} ) \} , \quad l = 0, 1, 2, \dots . $$
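For a Poisson stochastic point process the alternating bounds above can be checked in closed form: the factorial moments of a Poisson law with mean $ \Lambda _ {1} ( B) $ give $ \Lambda _ {k} ( B) = [ \Lambda _ {1} ( B) ] ^ {k} $, and $ {\mathsf P} \{ \Phi ( B) = 0 \} = e ^ {- \Lambda _ {1} ( B) } $. A minimal sketch (the value of `lam` is an illustrative choice):

```python
import math

def partial_sum(lam, last_k):
    """sum_{k=0}^{last_k} (-1)^k Lambda_k(B) / k!  with Lambda_k(B) = lam**k."""
    return sum((-1) ** k * lam ** k / math.factorial(k) for k in range(last_k + 1))

def check_bounds(lam, n):
    """Verify  S_{2n-1} <= P{Phi(B) = 0} <= S_{2n}  for the Poisson case."""
    p_empty = math.exp(-lam)            # P{Phi(B) = 0}
    lower = partial_sum(lam, 2 * n - 1)
    upper = partial_sum(lam, 2 * n)
    return lower <= p_empty <= upper

print(all(check_bounds(lam=1.7, n=n) for n in range(1, 6)))
```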

For a simple stochastic point process,

$$ \tag{* } \Lambda _ {1} ( B) = \sup _ \zeta \ \sum _ {k=1} ^ {n} {\mathsf P} \{ \Phi ( Z _ {k} ) > 0 \} , $$

where the supremum is taken over all partitions $ \zeta = ( Z _ {1}, \dots, Z _ {n} ) $ of $ B $. The relation (*) makes it possible to find explicit expressions for the measure of intensity for many classes of stochastic point processes generated by stochastic processes or random fields.
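The partition sums in (*) can be evaluated in closed form for a homogeneous Poisson process on $ B = [ 0, 1 ] $ with rate $ \lambda $ (an illustrative choice): for a partition of $ B $ into $ n $ equal cells $ Z _ {k} $, $ {\mathsf P} \{ \Phi ( Z _ {k} ) > 0 \} = 1 - e ^ {- \lambda / n } $, and the sum tends to $ \Lambda _ {1} ( B) = \lambda $ as the partition is refined:

```python
import math

def partition_sum(lam, n):
    """sum_k P{Phi(Z_k) > 0} over n equal cells of [0, 1], Poisson with rate lam."""
    return n * (1.0 - math.exp(-lam / n))

lam = 2.3
sums = [partition_sum(lam, n) for n in (1, 10, 100, 10000)]
print(sums)  # approaches lam from below as the partition is refined
```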

A generalization of stochastic point processes is given by the so-called marked stochastic point processes, in which marks $ k ( x) $ from some measurable space $ ( K , \mathfrak K ) $ are assigned to the points $ x $ with $ \Phi \{ x \} > 0 $. The service times in a queueing system can be regarded as marks.
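A marked process can be represented concretely as a list of (point, mark) pairs. The hypothetical sketch below takes Poisson arrival moments on $ \mathbf R ^ {1} $ with exponential service times as marks; the names and distributions are illustrative choices, not prescribed by the article.

```python
import random

def marked_arrivals(lam, mu, horizon, seed=0):
    """Poisson arrival moments on (0, horizon] with exponential service-time marks."""
    rng = random.Random(seed)
    points, t = [], 0.0
    while True:
        t += rng.expovariate(lam)                # next arrival moment
        if t > horizon:
            return points
        points.append((t, rng.expovariate(mu)))  # (point, mark)

pts = marked_arrivals(lam=1.0, mu=2.0, horizon=10.0)
print(len(pts), all(m > 0 for _, m in pts))
```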

In the theory of stochastic point processes, an important role is played by relations connecting conditional probabilities of distinct events in a special way (Palm probabilities). Limit theorems have been obtained for superposition (summation), thinning and other operations on sequences of stochastic point processes. Various generalizations of Poisson stochastic point processes are widely used in applications.

References

[1] A.Ya. Khinchin, "Mathematical methods in the theory of queueing" , Griffin (1960) (Translated from Russian)
[2] D.R. Cox, V. Isham, "Point processes" , Chapman & Hall (1980)
[3] J. Kerstan, K. Matthes, J. Mecke, "Infinitely divisible point processes" , Wiley (1978) (Translated from German)
[4] Yu.K. Belyaev, "Elements of the general theory of point processes" (Appendix to Russian translation of: H. Cramér, M. Leadbetter, Stationary and related stochastic processes, Wiley, 1967)
[5] R.S. Liptser, A.N. Shiryaev, "Statistics of random processes" , II. Applications , Springer (1978) (Translated from Russian)
[6] M. Jacobson, "Statistical analysis of counting processes" , Lect. notes in statistics , 12 , Springer (1982)

Comments

Let $ X $, $ \mathfrak B _ {0} $ be as above; let $ \mathfrak X \supset \mathfrak B _ {0} $ be the Borel field of $ X $. Let $ M $ be the collection of all Borel measures on $ ( X, \mathfrak X ) $. For each $ B \in \mathfrak B _ {0} $, $ \mu \mapsto \mu ( B) $ defines a mapping $ M \rightarrow \mathbf R _ {\geq 0} $, and $ \mathfrak M $ is the $ \sigma $-field generated by these mappings, i.e. the smallest $ \sigma $-field making all of them measurable. The integer-valued elements of $ M $ form the subspace $ N $, and $ \mathfrak N $ is the induced $ \sigma $-field on $ N \subset M $.

A random measure on $ X $ is simply a probability measure on $ ( M, \mathfrak M ) $ or, equivalently, a measurable mapping $ \xi $ of some abstract probability space $ ( \Omega , {\mathcal A} , {\mathsf P} ) $ into $ ( M, \mathfrak M ) $. A point process is the special case in which $ \xi $ takes its values in $ N $.

An element $ \nu \in N $ is simple if $ \nu \{ x \} = 0 $ or $ 1 $ for all $ x \in X $. A simple point process is one that takes its values in the subspace of $ N $ consisting of the simple measures.

Each $ B \in \mathfrak B _ {0} $ defines a function $ M \rightarrow \mathbf R _ {\geq 0} $, $ \mu \mapsto \mu ( B) $, and hence gives, for a random measure $ \xi $, a random variable, which will be denoted by $ \xi B $. One can think of a random measure in two ways: as a collection of measures $ \xi ( \omega ) $ on $ X $, parametrized by a probability space $ ( \Omega , {\mathcal A} , {\mathsf P} ) $, or as a collection of random variables $ \xi B $ (on $ \Omega $ or on $ M $) indexed by $ \mathfrak B _ {0} $, depending on which part of the mapping $ ( \omega , B ) \mapsto \xi ( \omega )( B ) $ one focuses on.

More generally, for each bounded continuous function $ f $ on $ X $ one has the random variable $ \xi f $ defined by

$$ \xi f ( \mu ) = \int\limits _ { X } f( x) \mu ( dx) . $$

For each random measure $ \xi $ one defines the Palm distributions of $ \xi $. For a simple point process $ \xi $ the Palm distribution $ Q _ {x} $ can be thought of as the conditional distribution of $ \xi $ given that $ \xi $ has an atom at $ x \in X $. Palm distributions are of great importance in random measure theory and have applications to queueing theory, branching processes, regenerative sets, stochastic geometry, statistical mechanics, and insurance mathematics (the last, via doubly stochastic Poisson processes, also called Cox processes, which are Poisson processes with stochastic variation in the intensity).

The Palm distribution of a random measure is obtained by disintegrating its Campbell measure on $ X \times M $, which is given by

$$ C( B \times {\mathcal M} ) = \ {\mathsf E} [( \xi B ) 1 _ {\mathcal M} ] $$

for $ B \in \mathfrak B _ {0} $, $ {\mathcal M} \in \mathfrak M $, where $ 1 _ {\mathcal M} $ is the indicator function of $ {\mathcal M} \subset M $, the function $ ( \xi B ) 1 _ {\mathcal M} $ is the (pointwise) product of the two functions $ \xi B $ and $ 1 _ {\mathcal M} : M \rightarrow \mathbf R $, and $ {\mathsf E} $ stands for expectation.

Disintegration of a measure is much related to conditional distributions (cf. Conditional distribution). Given two measurable spaces $ ( X, \mathfrak X ) $ and $ ( T, {\mathcal T} ) $, a kernel, also called a Markov kernel, from $ X $ to $ T $ is a mapping $ \rho : X \times {\mathcal T} \rightarrow \mathbf R _ {\geq 0} $ such that $ \rho ( \cdot , A ) : x \mapsto \rho ( x, A ) $ is measurable on $ X $ for all $ A \in {\mathcal T} $ and such that $ \rho _ {x} = \rho ( x, \cdot ): A \mapsto \rho ( x, A ) $ is a $ \sigma $-finite measure on $ ( T, {\mathcal T} ) $ for all $ x \in X $.

Given a $ \sigma $-finite measure $ \mu $ on the product space $ X \times T $, a disintegration of $ \mu $ consists of a $ \sigma $-finite measure $ \nu $ on $ X $ and a kernel $ \rho $ from $ X $ to $ T $ such that $ \rho _ {x} ( T ) \neq 0 $ $ \nu $-almost everywhere and such that for all $ ( B, A) \in \mathfrak X \times {\mathcal T} $,

$$ \tag{a1 } \mu ( B \times A ) = \ \int\limits _ { B } \rho _ {x} ( A) \nu ( dx) . $$

It follows that for every measurable function $ f: X \times T \rightarrow \mathbf R _ \geq 0 $,

$$ \tag{a2 } \int\limits \int\limits f( x, t) \mu ( dx dt) = \ \int\limits \nu ( dx) \int\limits f( x, t) \rho _ {x} ( dt) . $$

The inverse operation is called mixing. Given $ \nu $ and $ \rho $, the measure (a1) is called the mixture of the $ \rho _ {x} $ with respect to $ \nu $ (and (a2) could be called the Fubini formula for mixture measures).
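In the finite discrete case the mixture formula (a1) and the Fubini formula (a2) reduce to plain sums and can be verified directly. A sketch with $ X = \{ 0, 1 \} $, $ T = \{ 0, 1, 2 \} $, and illustrative numerical values for $ \nu $ and the kernel $ \rho $:

```python
nu = {0: 0.4, 1: 0.6}                       # sigma-finite measure on X
rho = {0: {0: 0.5, 1: 0.3, 2: 0.2},         # kernel: rho[x] is a measure on T
       1: {0: 0.1, 1: 0.1, 2: 0.8}}

def mixture(B, A):
    """mu(B x A) = integral over B of rho_x(A) nu(dx)   -- formula (a1)."""
    return sum(sum(rho[x][t] for t in A) * nu[x] for x in B)

def fubini_lhs(f):
    """Double integral of f against the mixture measure mu."""
    return sum(f(x, t) * rho[x][t] * nu[x] for x in nu for t in rho[x])

def fubini_rhs(f):
    """Iterated integral: first over t with rho_x, then over x with nu -- (a2)."""
    return sum(nu[x] * sum(f(x, t) * rho[x][t] for t in rho[x]) for x in nu)

f = lambda x, t: (x + 1) * (t + 2)
print(mixture({0, 1}, {0, 1, 2}), fubini_lhs(f), fubini_rhs(f))
```

Here each $ \rho _ {x} $ and $ \nu $ are probability measures, so the total mass of the mixture is $ 1 $.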

A disintegration exists for a $ \sigma $-finite $ \mu $ if $ ( T, {\mathcal T} ) $ is a Polish space with its Borel structure. This reduces to a matter of conditional distributions. The measure $ \nu $ is unique up to equivalence, and $ \rho $ is unique up to a measurable renormalization, $ \nu $-almost everywhere. More generally, one studies the disintegration (or decomposition into slices) of a measure $ \mu $ on a space $ Y $ relative to an arbitrary mapping $ \pi : Y \rightarrow X $ (instead of the projection $ Y = X \times T \rightarrow X $; cf. [a11], [a12]).

For each bounded continuous function $ f $, let $ {\mathsf E} ( \xi f ) $ be the expectation of the random variable $ \xi f $ and let $ {\mathsf E} \xi $ be the measure $ {\mathsf E} \xi ( B) = {\mathsf E} ( \xi B ) $ on $ X $. Then, using (a2), the disintegration of the Campbell measure $ C $ on $ X \times M $ yields the measure $ {\mathsf E} \xi $ on $ X $ and, if $ {\mathsf E} \xi $ is $ \sigma $-finite, the $ \rho _ {x} $ can be normalized $ {\mathsf E} \xi $-almost everywhere to probability measures $ Q _ {x} $ on $ M $ to give

$$ {\mathsf E} ( \xi f 1 _ {\mathcal M} ) = \int\limits Q _ {x} ( {\mathcal M} ) f( x) {\mathsf E} \xi ( dx) . $$

The $ Q _ {x} $ are the Palm distributions (Palm probabilities) of $ \xi $. Equivalently, as a function of $ x $, $ Q _ {x} ( {\mathcal M} ) $ for $ {\mathcal M} \in \mathfrak M $ is $ {\mathsf E} \xi $-almost everywhere the Radon–Nikodým derivative (cf. Radon–Nikodým theorem) of the measure $ {\mathsf E} ( 1 _ {\mathcal M} ( \xi ) \xi ) $ on $ X $ with respect to $ {\mathsf E} \xi $. Here $ 1 _ {\mathcal M} ( \xi ) \xi $ is the random measure $ \Omega \rightarrow M $,
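The defining identity for the $ Q _ {x} $ can be checked by exhaustive enumeration in a hypothetical finite example: a point process on $ X = \{ 0, 1 \} $ whose realizations $ \xi = ( n _ {0} , n _ {1} ) $ (counts at the two points) and their probabilities are chosen below purely for illustration.

```python
P = {(0, 0): 0.2, (1, 0): 0.3, (0, 1): 0.1, (1, 1): 0.4}
X = (0, 1)

def E_xi(x):
    """Intensity measure E xi({x})."""
    return sum(p * cfg[x] for cfg, p in P.items())

def Q(x, M):
    """Palm distribution Q_x(M) = E(1_M(xi) xi{x}) / E xi({x})."""
    num = sum(p * cfg[x] for cfg, p in P.items() if cfg in M)
    return num / E_xi(x)

def lhs(f, M):
    """E(xi f . 1_M)  with  xi f = sum_x f(x) xi{x}."""
    return sum(p * sum(f(x) * cfg[x] for x in X)
               for cfg, p in P.items() if cfg in M)

def rhs(f, M):
    """Integral of Q_x(M) f(x) against E xi(dx)."""
    return sum(Q(x, M) * f(x) * E_xi(x) for x in X)

M = {cfg for cfg in P if sum(cfg) == 2}    # the event "xi has two points"
f = lambda x: 3.0 if x == 0 else 5.0
print(lhs(f, M), rhs(f, M))
```

Both sides of the displayed identity agree, as (a2) guarantees.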

$$ ( 1 _ {\mathcal M} ( \xi ) \xi )( \omega ) = \ \left \{ \begin{array}{ll} \xi ( \omega ) & \textrm{ if } \xi ( \omega ) \in {\mathcal M} , \\ 0 & \textrm{ otherwise, } \end{array} \right . $$

i.e. the trace of $ \xi $ on $ {\mathcal M} $.

References

[a1] A.A. Borovkov, "Stochastic processes in queueing theory" , Springer (1976) (Translated from Russian)
[a2] P.A.W. Lewis (ed.) , Stochastic point processes: statistical analysis theory and applications , Wiley (Interscience) (1972)
[a3] V.K. Murthy, "The general point process" , Addison-Wesley (1974)
[a4] D.C. Snyder, "Random point processes" , Wiley (1975)
[a5] D.J. Daley, D. Vere-Jones, "An introduction to the theory of point processes" , Springer (1978)
[a6] F. Baccelli, P. Brémaud, "Palm probabilities and stationary queues" , Lect. notes in statistics , 41 , Springer (1987)
[a7] P. Brémaud, "Point processes and queues - Martingale dynamics" , Springer (1981)
[a8] J. Neveu, "Processus ponctuels" J. Hoffmann-Jørgensen (ed.) T.M. Liggett (ed.) J. Neveu (ed.) , Ecole d'été de St. Flour VI 1976 , Lect. notes in math. , 598 , Springer (1977) pp. 250–448
[a9] O. Kallenberg, "Random measures" , Akademie Verlag & Acad. Press (1986)
[a10] J. Grandell, "Doubly stochastic Poisson processes" , Springer (1976)
[a11] H. Bauer, "Probability theory and elements of measure theory" , Holt, Rinehart & Winston (1972) (Translated from German)
[a12] N. Bourbaki, "Intégration" , Eléments de mathématiques , Hermann (1967) pp. Chapt. 5: Intégration des mesures, §6.6
[a13] N. Bourbaki, "Intégration" , Eléments de mathématiques , Hermann (1959) pp. Chapt. 6: Intégration vectorielle, §3
How to Cite This Entry:
Stochastic point process. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Stochastic_point_process&oldid=48854
This article was adapted from an original article by Yu.K. Belyaev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article