Random walk

A stochastic process of special form that can be interpreted as a model describing the movement of a particle in a certain state space under the action of some random mechanism. The state space is usually a $d$-dimensional Euclidean space or the integer lattice in it. The random mechanism can be of various kinds; the most common random walks are those generated by summation of independent random variables or by Markov chains. There is no precise generally-accepted definition of a random walk.

The trajectories of the simplest random walk in the case $d = 1$ are described by the initial position $S_0 = 0$ and the sequence of sums

$$ \tag{1} S_n = X_1 + \dots + X_n, \quad n = 1, 2, \dots, $$

where the $X_i$ are independent and have a Bernoulli distribution:

$$ {\mathsf P} \{ X_i = 1 \} = p, \quad {\mathsf P} \{ X_i = -1 \} = q = 1 - p, \quad p \in ( 0, 1 ) . $$
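
For illustration, a minimal simulation sketch of the walk (1); the use of Python with NumPy, the random seed and the parameters are arbitrary choices.

<pre>
import numpy as np

rng = np.random.default_rng(0)

def simple_walk(n_steps, p=0.5):
    """Return S_0, ..., S_n for the walk (1) with P{X_i = 1} = p, P{X_i = -1} = 1 - p."""
    jumps = rng.choice([1, -1], size=n_steps, p=[p, 1 - p])
    return np.concatenate(([0], np.cumsum(jumps)))

S = simple_walk(1000, p=0.5)
print(S[-1])   # position of the particle after 1000 steps
</pre>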

The value of $S_n$ can be interpreted as the gain of one of two players after $n$ games in each of which this player wins one dollar with probability $p$ and loses it with probability $1 - p$. If the game consists of tossing an unbiased coin, then it is assumed that $p = 1/2$ (a symmetric walk, see Bernoulli random walk). Under the assumption that the initial capital of the first player is $b$ and that of the second player is $a$, the game will finish when the moving particle (with coordinates $S_1, S_2, \dots$) first touches one of the levels $a$ or $-b$. At this moment, one of the players is ruined. This is the classical ruin problem, in which the boundaries at points $a$ and $-b$ can be regarded as absorbing.

In applications to queueing theory, the behaviour of the particle near the boundaries $a$ and $-b$ can differ: e.g., if $a = \infty$ and $b = 0$, then the position $Z_{n+1}$ of the random particle at the moment $n + 1$ is given by

$$ \tag{2} Z_{n+1} = \max ( 0, Z_n + X_{n+1} ) , $$

and the boundary at $0$ can be called detaining or reflecting. There are also other possibilities for the behaviour of the particle in a neighbourhood of the boundaries.
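
Incidentally, unrolling the recursion (2) with $Z_0 = 0$ gives

$$ Z_n = \max ( 0, X_n, X_n + X_{n-1}, \dots, X_n + \dots + X_1 ) , $$

which has the same distribution as $\max ( 0, S_1, \dots, S_n )$, since the $X_i$ are independent and identically distributed; this is what connects the recursion (2) with the stationary distribution (3) below.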

If $a = \infty$, then one obtains the problem of a random walk with one boundary. If $a = b = \infty$, then one obtains an unrestricted random walk. Random walks are usually studied using the apparatus of discrete Markov chains and, in particular, by investigating the corresponding finite-difference equations. For example, in the ruin problem, let $u_k$ be the probability that the first player is ruined, given that his initial capital is equal to $k$, $0 \leq k \leq a + b$, where the total capital of both players is fixed and equal to $a + b$. Then, by the total probability formula (at the first jump), it follows that $u_k$ satisfies the equation

$$ u_k = p u_{k+1} + q u_{k-1}, \quad 0 < k < a + b , $$

and the boundary conditions $u_{a+b} = 0$, $u_0 = 1$. Thus one obtains

$$ u_b = \frac{ \left( \frac{q}{p} \right)^{a+b} - \left( \frac{q}{p} \right)^b }{ \left( \frac{q}{p} \right)^{a+b} - 1 } \ \ \textrm{ when } \ p \neq q ; $$

$$ u_b = \frac{a}{a+b} \ \ \textrm{ when } \ p = q = \frac{1}{2} . $$
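
A brief derivation sketch: for $p \neq q$ the general solution of the difference equation $u_k = p u_{k+1} + q u_{k-1}$ is

$$ u_k = A + B \left( \frac{q}{p} \right)^k , $$

while for $p = q = 1/2$ it is $u_k = A + Bk$; in each case the constants $A$, $B$ are determined by the boundary conditions $u_0 = 1$, $u_{a+b} = 0$, and putting $k = b$ yields the two formulas above.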

The second of these formulas shows that even a "fair" game (in which both players have identical chances) leads to ruin with probability close to 1, provided that the capital $a$ of the second player is large in comparison to $b$ ($u_b = 1$ when $b < \infty$, $a = \infty$).
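
For instance, with the (arbitrarily chosen) capitals $b = 10$ and $a = 1000$ in the fair case $p = q = 1/2$, the first player is ruined with probability $u_b = a/(a+b) = 1000/1010 \approx 0.99$.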

The ruin problem has been thoroughly investigated. For example, it was shown by J. Lagrange (see [1], Vol. 1) that the probability $u_{b,n}$ of ruin of the first player at the $n$-th step, given that his initial capital is $b$, where the total capital is $a + b$ (fixed), is equal to

$$ u_{b,n} = ( a + b )^{-1} 2^n p^{(n-b)/2} q^{(n+b)/2} \sum_{k=1}^{a+b-1} \cos^{n-1} \frac{\pi k}{a+b} \, \sin \frac{\pi k}{a+b} \, \sin \frac{\pi b k}{a+b} . $$

The mean time before ruin of one of the players, $m_b$, is given by

$$ m_b = \frac{b}{q-p} - \frac{a+b}{q-p} \cdot \frac{1 - \left( \frac{q}{p} \right)^b }{1 - \left( \frac{q}{p} \right)^{a+b} } \ \ \textrm{ if } \ p \neq q ; $$

$$ m_b = ab \ \ \textrm{ if } \ p = q = \frac{1}{2} . $$
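
For instance, in the fair case $p = q = 1/2$ with equal (arbitrarily chosen) capitals $a = b = 100$, the expected duration of the game is $m_b = ab = 10\,000$ individual plays.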

For random walks with one boundary $( a = \infty )$, described by (2), there is a stationary distribution for the random walk when $p < q$ and $n \rightarrow \infty$, coinciding with the distribution of the random variable $S = \sup_{k \geq 0} S_k$, and

$$ \tag{3} {\mathsf P} \{ S \geq k \} = \left( \frac{p}{q} \right)^k , \quad k = 0, 1, \dots . $$
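
A short justification sketch: $\{ S \geq k \}$ is the event that the walk ever reaches the level $k$, and its probability is obtained from the ruin formula with $a = k$ by letting $b \rightarrow \infty$:

$$ \lim_{b \rightarrow \infty} ( 1 - u_b ) = \lim_{b \rightarrow \infty} \frac{ \left( \frac{q}{p} \right)^b - 1 }{ \left( \frac{q}{p} \right)^{k+b} - 1 } = \left( \frac{p}{q} \right)^k \quad ( p < q ) , $$

which is exactly (3).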

The laws describing an unrestricted random walk follow from theorems about the behaviour of the sequence of partial sums $S_n$, $n = 1, 2, \dots$. One of these laws states that for a symmetric random walk ($p = 1/2$), the particle hits (infinitely often) any fixed point $a$ with probability 1. When $p < 1/2$, the walk drifts to the left with probability 1; in this case $S < \infty$ with probability 1.

For a symmetric random walk, the time $K_n$ spent by the particle on the positive half-line (the number of positive terms in the sequence $S_1, \dots, S_n$) will be closer to $0$ or $n$ than to $n/2$ with large probability. This is seen from the so-called arcsine law, which implies that for large $n$ (see [1], Vol. 1),

$$ {\mathsf P} \left\{ \frac{K_n}{n} < x \right\} \approx \frac{2}{\pi} \arcsin \sqrt{x} . $$

This implies, for example, that

$$ {\mathsf P} \left\{ \left| \frac{K_n}{n} - \frac{1}{2} \right| < \frac{1}{4} \right\} = 1 - 2 {\mathsf P} \left\{ \frac{K_n}{n} < \frac{1}{4} \right\} \approx 1 - \frac{4}{\pi} \cdot \frac{\pi}{6} = \frac{1}{3} $$

and that with probability $0.2$, the particle spends at least $97.6\%$ of the whole time on one side.
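
A minimal Monte Carlo sketch comparing the empirical distribution of $K_n / n$ with the arcsine approximation; the sample sizes and the random seed are arbitrary choices.

<pre>
import numpy as np

rng = np.random.default_rng(1)
n, trials = 1000, 20000

# simulate many symmetric walks and count the positive partial sums in each
steps = rng.choice([1, -1], size=(trials, n))
S = np.cumsum(steps, axis=1)
K = (S > 0).sum(axis=1)            # time spent on the positive half-line

x = 0.25
empirical = np.mean(K / n < x)
arcsine = 2 / np.pi * np.arcsin(np.sqrt(x))
print(empirical, arcsine)          # both are close to 1/3
</pre>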

There are relations connecting random walks with boundaries with unrestricted random walks. For example, if one assumes that $Y(x) = \min \{ k : S_k \geq x \}$, then (see [2])

$$ {\mathsf P} \{ Y(x) = n \} = \frac{x}{n} {\mathsf P} \{ S_n = x \} . $$
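
A small check for the simple walk: take $x = 1$, $n = 3$. The only path that first reaches level 1 at the third step is $(-1, +1, +1)$, so

$$ {\mathsf P} \{ Y(1) = 3 \} = q p^2 = \frac{1}{3} \cdot 3 p^2 q = \frac{1}{3} {\mathsf P} \{ S_3 = 1 \} , $$

in agreement with the identity.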

In an unrestricted random walk, the position $ S _ {n} $ of the particle for large $ n $ is described by the law of large numbers and the central limit theorem.

If the magnitudes $\pm 1$ of the jumps are changed to $\pm \Delta$ (for small $\Delta$) and if one assumes that $p = ( 1 + \alpha \Delta ) / 2$, then the position $S_n$ of the particle after $n = t \Delta^{-2}$ steps will describe approximately (as $\Delta \rightarrow 0$) the behaviour at time $t$ of the diffusion process with drift $\alpha$, diffusion coefficient 1, and corresponding behaviour on the boundaries $a$ and $-b$ (if these are specified for the random walk $S_n$).
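
The scaling can be checked by a moment computation: each rescaled jump has

$$ {\mathsf E} X_i = \Delta ( 2p - 1 ) = \alpha \Delta^2 , \qquad {\mathsf D} X_i = \Delta^2 - \alpha^2 \Delta^4 \approx \Delta^2 $$

(${\mathsf D}$ denoting the variance), so after $n = t \Delta^{-2}$ steps the position has mean $\approx \alpha t$ and variance $\approx t$, matching a diffusion with drift $\alpha$ and diffusion coefficient 1 at time $t$.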

The notion of a random walk as described above can be generalized in many ways. The simplest random walks in $\mathbf R^d$ for $d > 1$ are defined as follows. The particle leaves the origin and moves one step of length 1 in one of the $2d$ directions parallel to the coordinate axes. Thus, its possible positions are the points of $\mathbf R^d$ with integer coordinates. To define a random walk it is necessary to specify $2d$ probabilities corresponding to the different directions. One obtains a symmetric random walk if all these are equal to $1/(2d)$. In the multi-dimensional case, the problem of a random walk with boundaries becomes much more complicated, since the shape of the boundaries becomes essentially more complex when $d > 1$. In the unrestricted case, Pólya's theorem holds (see [1], Vol. 1): for a symmetric random walk when $d \leq 2$, the particle will sooner or later return to its initial position with probability 1 (once, and thus infinitely often); but when $d = 3$ this probability is approximately equal to 0.35 (and when $d > 3$ it is still smaller).

Another possible generalization of the simplest random walk consists of taking arbitrarily distributed independent random variables $X_1, X_2, \dots$ in (1). In this case, the basic qualitative laws for unrestricted random walks and for random walks with boundaries are preserved. For example, the particle reaches one of the boundaries $a$ or $-b$ with probability 1. If ${\mathsf E} X_i \leq 0$ and $a = \infty$, then the boundary $-b$ will be reached with probability 1. If the $X_i$ are integer valued and ${\mathsf E} X_i = 0$, then the particle will return to the initial position with probability 1. For arbitrarily distributed $X_i$ with ${\mathsf E} X_i = 0$, this statement only holds if one considers return to an interval, not to a point.

The solution of problems concerned with the exit of random walks from an interval $( -b, a )$ turns out to be much more difficult in the general case. At the same time, these problems have many applications in mathematical statistics (sequential analysis), in the insurance business, in queueing theory, etc. In their investigation, a decisive role is played by various functionals of a random walk $\{ S_n \}$, $n = 0, 1, \dots$, called boundary functionals. Among these are $S = \sup_{k \geq 0} S_k$; the time of the first passage through zero (in a positive direction), $Y_+ = \min \{ k : S_k > 0 \}$; the time of the first passage through zero (in a negative direction), $Y_- = \min \{ k \geq 1 : S_k \leq 0 \}$; the first positive sum $\chi_+ = S_{Y_+}$; and the first non-positive sum $\chi_- = S_{Y_-}$; etc. It turns out that the distribution of the jumps $X_i$ is connected with the distribution of these functionals by the following so-called factorization identity (see [1], Vol. 1; [2], [3]; see also Factorization identities). Let $\phi ( \lambda ) = {\mathsf E} e^{i \lambda X_1}$ be the characteristic function of $X_1$. Then, when $| z | \leq 1$ and $\mathop{\rm Im} \lambda = 0$,

$$ \tag{4} 1 - z \phi ( \lambda ) = [ 1 - {\mathsf E} ( e^{i \lambda \chi_+} z^{Y_+} ; Y_+ < \infty ) ] \, [ 1 - {\mathsf E} ( e^{i \lambda \chi_-} z^{Y_-} ; Y_- < \infty ) ] . $$

This identity reveals the connection between boundary problems for random walks and those in the theory of functions of a complex variable, since the factors on the right-hand side of (4) are uniquely determined by those in the canonical factorization of the function $1 - z \phi ( \lambda )$ on the axis $\mathop{\rm Im} \lambda = 0$, that is, the expansion of this function as a product $1 - z \phi ( \lambda ) = A_{z+} ( \lambda ) A_{z-} ( \lambda )$ for $\mathop{\rm Im} \lambda = 0$, where the $A_{z \pm} ( \lambda )$ are analytic in the upper and lower half-planes, respectively, have no zeros and are continuous there (including the boundary). In the identity (4), when $z = 1$ one can replace the first factor by

$$ \frac{1 - {\mathsf P} \{ Y_+ < \infty \} }{ {\mathsf E} e^{i \lambda S} } = 1 - {\mathsf E} ( e^{i \lambda \chi_+} ; Y_+ < \infty ) . $$
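
Setting $\lambda = 0$ in (4) gives, as a consistency check, the identity

$$ 1 - z = [ 1 - {\mathsf E} ( z^{Y_+} ; Y_+ < \infty ) ] \, [ 1 - {\mathsf E} ( z^{Y_-} ; Y_- < \infty ) ] , \quad | z | \leq 1 , $$

and letting $z \rightarrow 1$ shows that $( 1 - {\mathsf P} \{ Y_+ < \infty \} ) ( 1 - {\mathsf P} \{ Y_- < \infty \} ) = 0$, i.e. at least one of $Y_+$, $Y_-$ is finite with probability 1.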

Identity (4) is just one of a group of factorization identities connecting the distributions of various boundary functionals. Another one is related to the Pollaczek–Spitzer identity

$$ \sum_{n=0}^\infty z^n {\mathsf E} e^{i \lambda \overline{S}_n} = \exp \left\{ \sum_{n=1}^\infty \frac{z^n}{n} {\mathsf E} e^{i \lambda \max ( 0, S_n ) } \right\} , $$

where $\overline{S}_n = \max ( 0, S_1, \dots, S_n )$. Factorization identities provide a powerful method for studying boundary problems for random walks (see [1], Vol. 2; [2], [3]).

Boundary problems for random walks have been investigated rather completely, including their asymptotic analysis (see [1], Vol. 2; [4]–).

Analytically, the solution of boundary problems leads to integro-difference equations. For example, the probability $u_n ( x, a, b )$ that a particle starting from a point $x \in ( -b, a )$ will leave this interval within time $n$ satisfies the following equation (the total probability formula at the first jump):

$$ u_{n+1} ( x, a, b ) = \int\limits_{-b-x}^{a-x} u_n ( x + y, a, b ) \, dF(y) + 1 - F ( a - x ) + F ( -b - x ) , $$

where $F(x) = {\mathsf P} \{ X_1 < x \}$. By passing to the generating function $u ( z, x, a, b ) = \sum_{n=1}^\infty z^n u_n ( x, a, b )$ one obtains the usual integral equations. There are at least two approaches to the investigation of asymptotic properties of solutions of these equations. One of them is based on the study of analytic properties of the double transformation

$$ U ( z, \lambda, a, b ) = \int\limits e^{i \lambda x} u ( z, x, a, b ) \, dx $$

and its subsequent inversion (see [5]–). The other involves the methods of Vishik and Lyusternik for solving equations with a small parameter (see [4] and Differential equations with small parameter). The latter reveals profound connections between these problems and potential theory.
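
For a fixed jump distribution the recursion for $u_n ( x, a, b )$ above can also be iterated numerically; the sketch below assumes standard normal jumps and a uniform grid, both arbitrary choices.

<pre>
import numpy as np
from scipy.stats import norm

a, b = 3.0, 2.0                        # the interval (-b, a)
h = 0.05                               # grid spacing
x = np.arange(-b + h, a, h)            # interior grid points
u = np.zeros_like(x)                   # u_0(x) = 0: cannot exit in zero steps

def step(u):
    """One application of the integro-difference recursion for normal jumps."""
    u_next = np.empty_like(u)
    for i, xi in enumerate(x):
        w = norm.pdf(x - xi) * h       # dF(y) on the grid, y = x' - xi
        exit_now = 1 - norm.cdf(a - xi) + norm.cdf(-b - xi)
        u_next[i] = np.dot(w, u) + exit_now
    return u_next

for _ in range(50):
    u = step(u)

print(u[np.argmin(np.abs(x))])         # P{exit (-b, a) within 50 steps | start at 0}
</pre>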

Much of the above carries over to the case of random walks with dependent jumps, when the random variables $X_n$ are connected in a Markov chain, and also to multi-dimensional random walks in $\mathbf R^d$, $d > 1$ (see , [7]).

References

[1] W. Feller, "An introduction to probability theory and its applications", 1–2, Wiley (1950–1966)
[2] A.A. Borovkov, "Wahrscheinlichkeitstheorie", Birkhäuser (1976) (Translated from Russian)
[3] A.A. Borovkov, "Stochastic processes in queueing theory", Springer (1976) (Translated from Russian)
[4] V.S. Korolyuk, Yu.V. Borovskikh, "Analytic problems in the asymptotics of probability distributions", Kishinev (1981) (In Russian)
[5] A.A. Borovkov, "New limit theorems in boundary value problems for sums of independent terms" Sibirsk. Mat. Zh., 3 : 5 (1962) pp. 645–694 (In Russian)
[6a] V.I. Lotov, "Asymptotic analysis of distributions in problems with two boundaries I" Theory Probab. Appl., 24 (1980) pp. 480–491 Teor. Veroyatn. i Primenen., 24 : 3 (1979) pp. 475–485
[6b] V.I. Lotov, "Asymptotic analysis of distributions in problems with two boundaries II" Theory Probab. Appl., 24 (1980) pp. 869–876 Teor. Veroyatn. i Primenen., 24 : 4 (1979) pp. 873–879
[7] F. Spitzer, "Principles of random walk", Van Nostrand (1964)

Comments

For applications to the physical and biological sciences see [a7] and the references quoted therein.

References

[a1] M.N. Barber, B.W. Ninham, "Random and restricted walks, theory and applications" , Gordon & Breach (1970)
[a2] K.L. Chung, "Markov chains with stationary transition probabilities" , Springer (1960)
[a3] A. Gut, "Stopped random walks. Limit theorems and applications" , Springer (1988)
[a4] J.G. Kemeny, J.L. Snell, A.W. Knapp, "Denumerable Markov chains" , Springer (1976)
[a5] J.H.B. Kemperman, "The passage problem for a stationary Markov chain" , Univ. Chicago Press (1961)
[a6] P. Lévy, "Processus stochastiques et mouvement Brownien" , Gauthier-Villars (1965)
[a7] E.W. Montroll, B.J. West, "On an enriched collection of stochastic processes" E.W. Montroll (ed.) J.L. Lebowitz (ed.) , Fluctuation Phenomena , Series in Statistical Mechanics , 7 , Elsevier (1987) pp. Chapt. 2
[a8] D. Revuz, "Markov chains" , North-Holland (1975)
How to Cite This Entry:
Random walk. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Random_walk&oldid=25944
This article was adapted from an original article by A.A. Borovkov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article