
Jump process

From Encyclopedia of Mathematics


A stochastic process that changes its state only at random moments of time forming an increasing sequence. The term "jump process" is sometimes applied to any process with piecewise-constant trajectories.

An important class of jump processes is formed by Markov jump processes. A Markov process is a jump process if its transition function $ P ( s , x , t , B) $ is such that

$$ \tag{1 } \lim\limits _ {t \downarrow s } \ \frac{P ( s , x , t , B ) - I _ {B} ( x) }{t - s } = \ q ( s , x , B ) , $$

where $ I _ {B} ( x) $ is the indicator of the set $ B $ in the phase space $ ( E , {\mathcal E} ) $, and if the regularity condition holds, i.e. the convergence in (1) is uniform and the kernel $ q ( s , x , B ) $ satisfies certain boundedness and continuity conditions.

Let

$$ a ( t , x ) = - q ( t , x , \{ x \} ) ,\ \ a ( t , x , B ) = q ( t , x , B \setminus \{ x \} ) , $$

$$ \Phi ( t , x , B ) = \left \{ \begin{array}{ll} \frac{a ( t , x , B ) }{a ( t , x ) } & \textrm{ if } a ( t , x ) > 0 , \\ 0 & \textrm{ otherwise } . \\ \end{array} \right .$$

These quantities admit the following interpretation: up to $ o ( \Delta t ) $ (as $ \Delta t \rightarrow 0 $), $ a ( t , x ) \Delta t $ is the probability that in the time interval $ ( t , t + \Delta t ) $ the process leaves the state $ x $, and $ \Phi ( t , x , B ) $ (for $ a ( t , x ) > 0 $) is the conditional probability that the process hits the set $ B $, provided that it leaves the state $ x $ at the time $ t $.
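On a finite phase space with time-homogeneous rates, $ a $ and $ \Phi $ can be read off a generator ($Q$-) matrix with $ q ( x , B ) = \sum _ {y \in B } Q _ {xy} $. A minimal sketch, assuming a hypothetical three-state generator:

```python
import numpy as np

# Hypothetical generator (Q-matrix) of a time-homogeneous jump process
# on the finite phase space E = {0, 1, 2}; row x encodes q(x, .).
Q = np.array([
    [-1.0,  0.6,  0.4],
    [ 0.5, -0.5,  0.0],
    [ 0.2,  0.3, -0.5],
])

# a(x) = -q(x, {x}): total rate of leaving state x.
a = -np.diag(Q)

# Phi(x, {y}) = q(x, {y} \ {x}) / a(x): jump distribution given a jump from x.
Phi = np.where(a[:, None] > 0, Q / a[:, None], 0.0)
np.fill_diagonal(Phi, 0.0)

print(a)                 # leaving rates a(x)
print(Phi.sum(axis=1))   # each row of Phi is a probability distribution
```

Each row of $ \Phi $ sums to one, since the off-diagonal rates of a generator row sum to $ a ( x ) $.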

When the regularity conditions hold, the transition function of a jump process is differentiable with respect to $ t $ for $ t > s $ and with respect to $ s $ for $ s < t $, and satisfies the forward and backward Kolmogorov equations with the corresponding boundary conditions:

$$ \frac{\partial P ( s , x , t , B ) }{\partial t } = - \int\limits _ { B } a ( t , y ) P ( s , x , t , d y ) + \int\limits _ { E } a ( t , y , B ) P ( s , x , t , d y ) , $$

$$ \lim\limits _ {t \downarrow s } P ( s , x , t , B ) = I _ {B} ( x) ; $$

$$ \frac{\partial P ( s , x , t , B ) }{\partial s } = \ a ( s , x ) \left [ P ( s , x , t , B ) - \int\limits _ { E } P ( s , y , t , B ) \Phi ( s , x , d y ) \right ] , $$

$$ \lim\limits _ {s \uparrow t } P ( s , x , t , B ) = I _ {B} ( x) . $$
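In the time-homogeneous finite-state case the forward equation reduces to the matrix ODE $ d P / d t = P Q $, $ P ( 0 ) = I $, solved by $ P ( t ) = e ^ {tQ} $. A numerical sanity check of this reduction, assuming a hypothetical generator $ Q $:

```python
import numpy as np

# Hypothetical generator of a three-state, time-homogeneous jump process.
Q = np.array([
    [-1.0,  0.6,  0.4],
    [ 0.5, -0.5,  0.0],
    [ 0.2,  0.3, -0.5],
])

def expm(A, terms=60):
    """Matrix exponential via its power series (adequate for small matrices)."""
    out, term = np.eye(len(A)), np.eye(len(A))
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

t, h = 1.0, 1e-6
P = expm(t * Q)
# Central finite difference of P(t) should match the forward equation P Q.
dP = (expm((t + h) * Q) - expm((t - h) * Q)) / (2 * h)
print(np.max(np.abs(dP - P @ Q)))  # residual of the forward equation: ~ 0
print(P.sum(axis=1))               # transition probabilities: rows sum to 1
```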

Let $ X = ( X _ {t} ) _ {t \geq 0 } $ be a right-continuous strong Markov jump process, let $ T _ {n} $ be the moment of the $ n $-th jump of the process, $ T _ {0} = 0 $, let $ Y _ {n} = X _ {T _ {n} } $, let $ S _ {n} $ be the time spent in the $ n $-th state, let $ T _ \infty = \lim\limits T _ {n} $ be the moment of cut-off, and let $ X _ {T _ \infty } = \delta $, where $ \delta $ is a point outside $ E $. Then the sequence $ ( T _ {n} , Y _ {n} ) $ forms a homogeneous Markov chain. Note that if $ X $ is a homogeneous Markov process, then given $ Y _ {n} = x $, the sojourn time $ S _ {n} $ is exponentially distributed with parameter $ \lambda ( x) $.
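In the homogeneous case this description yields a direct simulation scheme: given $ Y _ {n} = x $, draw $ S _ {n} \sim \operatorname{Exp} ( \lambda ( x ) ) $ and then draw $ Y _ {n+1} $ from $ \Phi ( x , \cdot ) $. A sketch with hypothetical rates and jump probabilities:

```python
import numpy as np

# Hypothetical leaving rates lambda(x) and jump distribution Phi(x, .)
# for a homogeneous Markov jump process on three states.
rng = np.random.default_rng(0)
lam = np.array([1.0, 0.5, 0.5])
Phi = np.array([[0.0, 0.6, 0.4],
                [1.0, 0.0, 0.0],
                [0.4, 0.6, 0.0]])

def sample_path(x0, n_jumps):
    """Return the embedded chain (T_n, Y_n) with n_jumps jumps."""
    T, Y = [0.0], [x0]
    for _ in range(n_jumps):
        x = Y[-1]
        T.append(T[-1] + rng.exponential(1.0 / lam[x]))  # S_n ~ Exp(lam(x))
        Y.append(rng.choice(3, p=Phi[x]))                # Y_{n+1} ~ Phi(x, .)
    return np.array(T), np.array(Y)

T, Y = sample_path(0, 20000)
S = np.diff(T)
# Empirical mean sojourn in state 0 should be close to 1 / lam(0) = 1.
print(S[Y[:-1] == 0].mean())
```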

A natural generalization of Markov jump processes is the class of semi-Markov jump processes, for which the sequence $ ( Y _ {n} ) $ is a Markov chain but the time spent in the $ n $-th state depends on both $ Y _ {n} $ and $ Y _ {n+1} $ and has an arbitrary distribution.
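A minimal simulation sketch of such a process, assuming a hypothetical two-state chain with Gamma-distributed sojourn times whose shape parameter depends on the transition $ ( Y _ {n} , Y _ {n+1} ) $:

```python
import numpy as np

# Semi-Markov sketch: (Y_n) is a Markov chain, but the sojourn time is
# Gamma-distributed (scale 1) with a hypothetical shape per transition,
# so it is neither exponential nor a function of Y_n alone.
rng = np.random.default_rng(1)
P = np.array([[0.0, 1.0],
              [0.7, 0.3]])            # transition matrix of (Y_n)
shape = np.array([[1.0, 2.0],
                  [3.0, 4.0]])        # Gamma shape for transition (x, y)

T, Y = [0.0], [0]
for _ in range(5000):
    y_next = rng.choice(2, p=P[Y[-1]])
    T.append(T[-1] + rng.gamma(shape[Y[-1], y_next]))  # sojourn time S_n
    Y.append(y_next)
T, Y = np.array(T), np.array(Y)

S = np.diff(T)
# Sojourns along the transition 0 -> 1 have mean shape[0, 1] = 2.
m01 = S[(Y[:-1] == 0) & (Y[1:] == 1)].mean()
print(m01)
```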

In the investigation of general jump processes, the so-called martingale approach has proved fruitful. Within this approach one can obtain meaningful results without additional assumptions about the probabilistic structure of the process. In the martingale approach one assumes that on the probability space $ ( \Omega , {\mathcal F} , {\mathsf P} ) $ of a given jump process $ X $ a non-decreasing right-continuous family of $ \sigma $-algebras $ {\mathcal F} = ( {\mathcal F} _ {t} ) _ {t \geq 0 } $, $ {\mathcal F} _ {t} \subset {\mathcal F} $, is fixed such that the random variable $ X _ {t} $ is $ {\mathcal F} _ {t} $-measurable for every $ t $, so that the $ T _ {n} $ are Markov moments.

Let $ {\mathcal P} $ be the predictable $ \sigma $-algebra on $ \Omega \times \mathbf R _ {+} $, and put $ \widetilde {\mathcal P} = {\mathcal P} \times {\mathcal E} $. A random measure $ \eta $ on $ ( \mathbf R _ {+} \times E , {\mathcal B} ( \mathbf R _ {+} ) \otimes {\mathcal E} ) $ is said to be predictable if for any non-negative $ \widetilde {\mathcal P} $-measurable function $ f $ the process $ ( f \star \eta _ {t} ) _ {t \geq 0 } $, where

$$ f \star \eta _ {t} = \ \int\limits _ {( 0 , t ] \times E } f ( s , x ) \eta ( d s , d x ) , $$

is predictable.
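When $ \eta $ is a counting measure with atoms at points $ ( T _ {n} , Y _ {n} ) $, the integral $ f \star \eta _ {t} $ reduces to the finite sum $ \sum _ {n : T _ {n} \leq t } f ( T _ {n} , Y _ {n} ) $. A sketch with hypothetical jump data and a hypothetical test function:

```python
import numpy as np

# Hypothetical realized jump moments T_n and post-jump states Y_n.
T = np.array([0.4, 1.1, 2.5, 3.0, 4.7])
Y = np.array([2.0, 0.0, 1.0, 0.0, 2.0])

def f(s, x):
    return x ** 2          # hypothetical non-negative test function

def f_star(t):
    """f * eta_t for the counting measure with atoms at (T_n, Y_n)."""
    return float(np.sum(f(T[T <= t], Y[T <= t])))

print(f_star(3.0))   # 4 + 0 + 1 + 0 = 5
```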

Let $ \mu = \mu ( d t , d x ) $ be the jump measure of $ X $, that is, the integer-valued random measure on $ ( \mathbf R _ {+} \times E , {\mathcal B} ( \mathbf R _ {+} ) \otimes {\mathcal E} ) $ given by

$$ \mu ( [ 0 , t ] \times B ) = \ \sum _ {n \geq 1 } I _ {[ 0 , t ] \times B } ( T _ {n} , Y _ {n} ) ,\ \ t \in \mathbf R _ {+} ,\ \ B \in {\mathcal E} . $$
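On a realized path, evaluating $ \mu ( [ 0 , t ] \times B ) $ is just counting the jumps up to time $ t $ that land in $ B $. A sketch with hypothetical jump moments and states:

```python
import numpy as np

# Hypothetical realized jump moments T_n and post-jump states Y_n
# on the phase space E = {0, 1, 2}.
T = np.array([0.4, 1.1, 2.5, 3.0, 4.7])
Y = np.array([2,   0,   1,   0,   2  ])

def mu(t, B):
    """mu([0, t] x B) = number of n with T_n <= t and Y_n in B."""
    return int(np.sum((T <= t) & np.isin(Y, list(B))))

print(mu(3.0, {0}))         # jumps into state 0 up to time 3
print(mu(10.0, {0, 1, 2}))  # all jumps on [0, 10]: the total count
```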

Under very general conditions on $ ( E , {\mathcal E} ) $ (that hold, for example, when $ E $ is a complete separable metric space with its Borel $ \sigma $-algebra $ {\mathcal E} $), there is a predictable random measure $ \nu = \nu ( d t , d x ) $ such that either of the following two equivalent conditions holds:

1) $ {\mathsf E} f \star \mu _ \infty = {\mathsf E} f \star \nu _ \infty $ for any non-negative $ \widetilde {\mathcal P} $-measurable function $ f $;

2) for all $ n \geq 1 $ and $ B \in {\mathcal E} $ the process

$$ ( \mu ( [ 0 , t \wedge T _ {n} ] \times B ) - \nu ( [ 0 , t \wedge T _ {n} ] \times B ) ) _ {t \geq 0 } $$

is a martingale starting from zero.

The predictable random measure $ \nu $ is uniquely defined up to a set of $ {\mathsf P} $- measure zero and is called the compensator (or dual predictable projection) of $ \mu $. One can choose a variant of $ \nu $ such that

$$ \tag{2 } \nu ( ( T _ \infty , \infty ) \times E ) = 0 ,\ \ \nu ( \{ t \} \times E) \leq 1 \ \textrm{ for all } t . $$

Let $ \Omega $ be the space of trajectories of a jump process $ X $ taking values in $ ( E , {\mathcal E} ) $, let $ {\mathcal F} _ {t} = \sigma ( X _ {s} , s \leq t ) $, $ {\mathcal F} = \cup _ {t > 0 } {\mathcal F} _ {t} $, let $ \nu $ be a predictable random measure satisfying (2), and let $ {\mathsf P} _ {0} $ be a probability measure on $ {\mathcal F} _ {0} $. Then there is a unique probability measure $ {\mathsf P} $ on $ ( \Omega , {\mathcal F} ) $ such that $ \nu $ is the compensator of $ \mu $ with respect to $ {\mathsf P} $ and the restriction of $ {\mathsf P} $ to $ {\mathcal F} _ {0} $ coincides with $ {\mathsf P} _ {0} $. The proof relies on an explicit formula relating the conditional distributions of the variables $ ( T _ {n} , Y _ {n} ) $ to the compensator; in a number of cases this formula has turned out to be a more convenient means of describing jump processes.

A jump process is a stochastic process with independent increments if and only if the corresponding compensator is deterministic.
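The standard example is a Poisson process with rate $ \lambda $: the compensator of its jump measure is the deterministic measure $ \lambda \, d t $, in line with the independence of the increments, and $ N _ {t} - \lambda t $ is a martingale. A Monte Carlo sanity check (at a fixed time $ t $) that this difference has mean zero:

```python
import numpy as np

# For a Poisson process with rate lam, mu([0, t] x E) = N_t and the
# compensator is deterministic: nu([0, t] x E) = lam * t.  Check that
# N_t - lam * t has empirical mean ~ 0 over many independent paths.
rng = np.random.default_rng(2)
lam, t, n_paths = 3.0, 5.0, 200_000
N_t = rng.poisson(lam * t, size=n_paths)   # N_t ~ Poisson(lam * t)
print(np.mean(N_t - lam * t))              # compensated count: mean ~ 0
```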


Jump process. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Jump_process&oldid=47472
This article was adapted from an original article by Yu.M. Kabanov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article