Characteristic function

2020 Mathematics Subject Classification: Primary: 60E10

Fourier–Stieltjes transform of a probability measure $\mu$.

The complex-valued function given on the entire axis $\mathbf R^1$ by the formula

$$\widehat{\mu}(t) = \int_{-\infty}^{\infty} e^{itx} \, d\mu(x), \qquad t \in \mathbf R^1.$$

The characteristic function of a random variable $X$ is, by definition, that of its probability distribution

$$\mu_X(B) = {\mathsf P}\{X \in B\}, \qquad B \subset \mathbf R^1.$$
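
Since $\widehat{\mu}_X(t) = {\mathsf E}\, e^{itX}$, the characteristic function can be estimated by a sample mean. A minimal Python sketch (the helper name is ours; the standard normal law is used because its characteristic function $e^{-t^2/2}$ is quoted later in the article):

```python
import numpy as np

def empirical_cf(samples, t):
    """Estimate E[exp(itX)] by a sample mean, for each value in t."""
    return np.exp(1j * np.outer(t, samples)).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)        # X ~ N(0, 1)
t = np.linspace(-3.0, 3.0, 7)

approx = empirical_cf(x, t)
exact = np.exp(-t**2 / 2)               # characteristic function of N(0, 1)
print(np.max(np.abs(approx - exact)))   # small Monte Carlo error
```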

A method connected with the use of characteristic functions was first applied by A.M. Lyapunov and later became one of the basic analytical methods in probability theory. It is used most effectively in proving limit theorems of probability theory. For example, the proof of the central limit theorem for independent identically-distributed random variables with second moments reduces to the elementary relation

$$\left( 1 - \frac{t^2}{2n} + o\!\left(\frac{1}{n}\right) \right)^{n} \rightarrow \exp\left(- \frac{t^2}{2}\right).$$
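
The relation itself is easy to probe numerically; a short sketch (with the $o(1/n)$ term dropped):

```python
import numpy as np

t = 1.5
limit = np.exp(-t**2 / 2)
for n in (10, 100, 1000, 10_000):
    approx = (1 - t**2 / (2 * n)) ** n     # o(1/n) term dropped
    print(n, approx, abs(approx - limit))  # the gap shrinks roughly like 1/n
```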

Basic properties of characteristic functions.

1) $ \widehat \mu ( 0) = 1 $ and $ \widehat \mu $ is positive definite, i.e.

$$\sum_{k, l} \alpha_k \overline{\alpha}_l \, \widehat{\mu}(t_k - t_l) \geq 0$$

for any finite sets of complex numbers $ \alpha _ {k} $ and arguments $ t _ {k} \in \mathbf R ^ {1} $;
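
Property 1) can be checked numerically for a given characteristic function: the matrix $A_{kl} = \widehat{\mu}(t_k - t_l)$ built from any points $t_1, \dots, t_m$ must be positive semi-definite. A sketch for the standard normal law:

```python
import numpy as np

def cf_normal(t):
    # characteristic function of the standard normal law
    return np.exp(-t**2 / 2)

t = np.linspace(-2.0, 2.0, 9)
A = cf_normal(t[:, None] - t[None, :])   # A[k, l] = cf(t_k - t_l)
eigenvalues = np.linalg.eigvalsh(A)      # A is Hermitian (here real symmetric)
print(eigenvalues.min())                 # non-negative up to rounding error
```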

2) $ \widehat \mu $ is uniformly continuous on the entire axis $ \mathbf R ^ {1} $;

3) $|\widehat{\mu}(t)| \leq 1$, $|\widehat{\mu}(t_1) - \widehat{\mu}(t_2)|^2 \leq 2(1 - \mathop{\rm Re} \widehat{\mu}(t_1 - t_2))$, $t, t_1, t_2 \in \mathbf R^1$.

4) $\overline{\widehat{\mu}(t)} = \widehat{\mu}(-t)$; in particular, $\widehat{\mu}$ takes only real values (and is an even function) if and only if the corresponding probability distribution is symmetric, i.e. $\mu(B) = \mu(-B)$, where $-B = \{x : -x \in B\}$.

5) The characteristic function determines the measure uniquely; the inversion formula

$$\mu(a, b) = \lim_{T \rightarrow \infty} \frac{1}{2\pi} \int_{-T}^{T} \frac{e^{-iat} - e^{-ibt}}{it} \, \widehat{\mu}(t) \, dt$$

is valid for any interval $ ( a, b) $ for which the end points $ a < b $ are continuity points of $ \mu $. If $ \widehat \mu $ is integrable (absolutely if the integral is understood in the sense of Riemann) on $ \mathbf R ^ {1} $, then the corresponding distribution function has a density $ p $ and

$$p(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} e^{-itx} \, \widehat{\mu}(t) \, dt, \qquad x \in \mathbf R^1.$$
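
For characteristic functions that decay quickly, this inversion integral can be approximated by a simple quadrature; a rough sketch (the truncation level $T$ and grid size are ad hoc choices) recovering the standard normal density:

```python
import numpy as np

def density_from_cf(cf, x, T=40.0, m=4001):
    """Approximate p(x) = (1/2pi) * int exp(-itx) cf(t) dt, truncated to [-T, T]."""
    t = np.linspace(-T, T, m)
    dt = t[1] - t[0]
    integrand = np.exp(-1j * np.outer(x, t)) * cf(t)
    return (integrand.sum(axis=1) * dt).real / (2 * np.pi)

cf = lambda t: np.exp(-t**2 / 2)               # standard normal characteristic function
x = np.linspace(-3.0, 3.0, 13)
approx = density_from_cf(cf, x)
exact = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
print(np.max(np.abs(approx - exact)))          # tiny discretization error
```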

6) The characteristic function of the convolution of two probability measures (of the sum of two independent random variables) is the product of their characteristic functions.
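
Property 6) is easy to see in simulation: the empirical characteristic function of $X + Y$ for independent samples matches the product of the individual empirical characteristic functions, up to Monte Carlo error. A sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(200_000)            # X ~ N(0, 1)
y = rng.exponential(1.0, 200_000)           # Y ~ Exp(1), independent of X
t = np.linspace(-2.0, 2.0, 5)

ecf = lambda s: np.exp(1j * np.outer(t, s)).mean(axis=1)   # empirical CF at t

lhs = ecf(x + y)          # characteristic function of the sum
rhs = ecf(x) * ecf(y)     # product of the individual characteristic functions
print(np.max(np.abs(lhs - rhs)))            # small Monte Carlo error
```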

The following three properties express the connection between the existence of moments of a random variable and the order of smoothness of its characteristic function.

7) If $ {\mathsf E} | X | ^ {n} < \infty $ for some natural number $ n $, then for all natural numbers $ r \leq n $ the derivative of order $ r $ of the characteristic function $ \widehat \mu _ {X} $ of the random variable $ X $ exists and satisfies the equation

$$\widehat{\mu}_X^{(r)}(t) = \int_{-\infty}^{\infty} (ix)^r e^{itx} \, d\mu_X(x), \qquad t \in \mathbf R^1.$$

Hence ${\mathsf E} X^r = i^{-r} \widehat{\mu}_X^{(r)}(0)$, $r \leq n$.
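
This moment formula can be verified symbolically; for instance, differentiating the standard normal characteristic function $e^{-t^2/2}$ at zero reproduces the moments $0, 1, 0, 3$. A sketch with SymPy:

```python
import sympy as sp

t = sp.symbols('t', real=True)
cf = sp.exp(-t**2 / 2)                  # standard normal characteristic function

for r in range(1, 5):
    # E X^r = i^{-r} * (r-th derivative of the CF at 0)
    moment = sp.diff(cf, t, r).subs(t, 0) / sp.I**r
    print(r, sp.simplify(moment))       # prints 0, 1, 0, 3
```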

8) If $\widehat{\mu}_X^{(2n)}(0)$ exists, then ${\mathsf E} X^{2n} < \infty$;

9) If $ {\mathsf E} | X | ^ {n} < \infty $ for all $ n $ and if

$$\limsup_{n \rightarrow \infty} \, \frac{({\mathsf E} |X|^n)^{1/n}}{n} = \frac{1}{R},$$

then for all $ | t | \leq R $,

$$\widehat{\mu}_X(t) = \sum_{k = 0}^{\infty} \frac{(it)^k}{k!} \, {\mathsf E} X^k.$$
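
A concrete instance of 9): if ${\mathsf P}\{X = 1\} = {\mathsf P}\{X = -1\} = 1/2$, then every ${\mathsf E} |X|^n = 1$, so $R = \infty$; the odd moments vanish and the series reduces to $\sum_k (-1)^k t^{2k}/(2k)!= \cos t$, which is indeed $\widehat{\mu}_X(t) = (e^{it} + e^{-it})/2$. A numerical check of the partial sums:

```python
import math

def cf_series(t, terms=40):
    # partial sum of sum_k (it)^k / k! * E[X^k] for P{X = 1} = P{X = -1} = 1/2:
    # odd moments vanish, even moments equal one, leaving the cosine series
    return sum((-1) ** k * t ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

t = 2.3
print(cf_series(t), math.cos(t))   # the two values agree
```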

The use of the method of characteristic functions is based mainly on the properties of characteristic functions indicated above and also on the following two theorems.

Bochner's theorem (description of the class of characteristic functions). Suppose that a function $ f $ is given on $ \mathbf R ^ {1} $ and that $ f ( 0) = 1 $. For $ f $ to be the characteristic function of some probability measure it is necessary and sufficient that it be continuous and positive definite.

Lévy's theorem (continuity of the correspondence). Let $\{\mu_n\}$ be a sequence of probability measures and let $\{\widehat{\mu}_n\}$ be the sequence of their characteristic functions. Then $\{\mu_n\}$ weakly converges to some probability measure $\mu$ (that is, $\int \phi \, d\mu_n \rightarrow \int \phi \, d\mu$ for an arbitrary continuous bounded function $\phi$) if and only if $\{\widehat{\mu}_n(t)\}$ converges at every point $t \in \mathbf R^1$ to some continuous function $f$; in the case of convergence, $f = \widehat{\mu}$. This implies that the relative compactness (in the sense of weak convergence) of a family of probability measures is equivalent to the equicontinuity at zero of the family of corresponding characteristic functions.
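
Lévy's theorem is the usual route to limit statements such as the Poisson approximation: the characteristic functions of the binomial laws $\mathrm{Bin}(n, \lambda/n)$ converge pointwise to the (continuous) characteristic function $\exp(\lambda(e^{it} - 1))$ of the Poisson law, hence the laws converge weakly. A numerical sketch:

```python
import numpy as np

lam = 2.0
t = np.linspace(-3.0, 3.0, 7)
cf_poisson = np.exp(lam * (np.exp(1j * t) - 1))      # CF of Poisson(lam)

for n in (10, 100, 1000):
    p = lam / n
    cf_binomial = (1 - p + p * np.exp(1j * t)) ** n  # CF of Bin(n, lam/n)
    print(n, np.max(np.abs(cf_binomial - cf_poisson)))
# the maximal gap over these t shrinks as n grows
```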

Bochner's theorem makes it possible to view the Fourier–Stieltjes transform as an isomorphism between the semi-group (under the operation of convolution) of probability measures on $ \mathbf R ^ {1} $ and the semi-group (under pointwise multiplication) of positive-definite continuous functions on $ \mathbf R ^ {1} $ that have at zero the value one. Lévy's theorem asserts that this algebraic isomorphism is also a topological homeomorphism if in the semi-group of probability measures one has in mind the topology of weak convergence, and in the semi-group of positive-definite functions the topology of uniform convergence on bounded sets.

Expressions are known for the characteristic functions of the basic probability measures (see [Lu], [F]). For example, the characteristic function of the Gaussian measure with mean $ m $ and variance $ \sigma ^ {2} $ is $ \mathop{\rm exp} ( imt - \sigma ^ {2} t ^ {2} /2 ) $.
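
The Gaussian formula is obtained by completing the square in the exponent (a standard computation, sketched here): writing $u = x - m$,

$$\widehat{\mu}(t) = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} e^{itx} e^{-(x - m)^2/2\sigma^2} \, dx = e^{imt - \sigma^2 t^2/2} \cdot \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} e^{-(u - i\sigma^2 t)^2/2\sigma^2} \, du = e^{imt - \sigma^2 t^2/2},$$

since the last integral equals one (by a shift of the contour of integration).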

For non-negative integer-valued random variables $ X $ one uses, apart from the characteristic function, also its analogue: the generating function

$$\Phi_X(z) = \sum_{k = 0}^{\infty} z^k \, {\mathsf P}\{X = k\},$$

which is connected with the characteristic function by the relation $ \widehat \mu _ {X} ( t) = \Phi _ {X} ( e ^ {it} ) $.
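
For example, the Poisson law with parameter $\lambda$ has $\Phi_X(z) = e^{\lambda(z - 1)}$, so its characteristic function is $e^{\lambda(e^{it} - 1)}$. A numerical check of the relation, computing the generating function from the probabilities directly (the truncation level is an ad hoc choice):

```python
import numpy as np
from math import exp, factorial

lam = 3.0
t = np.linspace(-2.0, 2.0, 5)

def pgf(z, terms=60):
    # truncated sum of z^k * P{X = k} for X ~ Poisson(lam)
    return sum(z ** k * exp(-lam) * lam ** k / factorial(k)
               for k in range(terms))

cf_exact = np.exp(lam * (np.exp(1j * t) - 1))   # closed-form Poisson CF
cf_via_pgf = pgf(np.exp(1j * t))                # Phi_X(e^{it})
print(np.max(np.abs(cf_via_pgf - cf_exact)))    # essentially zero
```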

The characteristic function of a probability measure $ \mu $ on a finite-dimensional space $ \mathbf R ^ {n} $ is defined similarly:

$$\widehat{\mu}(t) = \int_{\mathbf R^n} e^{i \langle t, x \rangle} \, d\mu(x), \qquad t \in \mathbf R^n,$$

where $ \langle t, x\rangle $ denotes the scalar product. The facts stated above are also valid for characteristic functions of probability measures on $ \mathbf R ^ {n} $.
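
The Monte Carlo estimate of the one-dimensional case carries over verbatim, with $tx$ replaced by the scalar product; e.g. for a two-dimensional standard Gaussian vector, whose characteristic function is $\exp(-\langle t, t \rangle / 2)$:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal((100_000, 2))        # X ~ N(0, I_2)

def empirical_cf(samples, ts):
    # E exp(i <t, X>), estimated by a sample mean, for each row t of ts
    return np.exp(1j * ts @ samples.T).mean(axis=1)

ts = np.array([[0.5, -1.0], [1.0, 1.0], [0.0, 2.0]])
approx = empirical_cf(x, ts)
exact = np.exp(-np.sum(ts**2, axis=1) / 2)   # exp(-<t, t>/2)
print(np.max(np.abs(approx - exact)))        # small Monte Carlo error
```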

References

[Lu] E. Lukacs, "Characteristic functions", Griffin (1970) MR0346874 MR0259980 Zbl 0201.20404 Zbl 0198.23804
[F] W. Feller, "An introduction to probability theory and its applications", 2, Wiley (1971)
[PR] Yu.V. Prokhorov, Yu.A. Rozanov, "Probability theory, basic concepts. Limit theorems, random processes", Springer (1969) (Translated from Russian) MR0251754
[Z] V.M. Zolotarev, "One-dimensional stable distributions", Amer. Math. Soc. (1986) (Translated from Russian) MR0854867 Zbl 0589.60015

This article was adapted from an original article by N.N. Vakhania (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.