# Characteristic function

2010 Mathematics Subject Classification: Primary: 60E10 [MSN][ZBL]

The Fourier–Stieltjes transform of a probability measure $\mu$.

The complex-valued function given on the entire axis $\mathbf R ^ {1}$ by the formula

$$\widehat \mu ( t) = \ \int\limits _ {- \infty } ^ \infty e ^ {itx} d \mu ( x),\ \ t \in \mathbf R ^ {1} .$$

The characteristic function of a random variable $X$ is, by definition, that of its probability distribution

$$\mu _ {X} ( B) = \ {\mathsf P} \{ X \in B \} ,\ \ B \subset \mathbf R ^ {1} .$$

A method connected with the use of characteristic functions was first applied by A.M. Lyapunov and later became one of the basic analytical methods of probability theory. It is used most effectively in proving limit theorems; for example, the proof of the central limit theorem for independent identically-distributed random variables with finite second moments reduces to the elementary relation

$$\left ( 1 - \frac{t ^ {2} }{2n } + o \left ( { \frac{1}{n} } \right ) \ \right ) ^ {n} \rightarrow \mathop{\rm exp} \left ( - \frac{t ^ {2} }{2} \right ) .$$
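As a quick numerical sketch (dropping the $o(1/n)$ term), one can watch this convergence directly; the value $t = 1.5$ below is an arbitrary illustrative choice:

```python
import numpy as np

# Check that (1 - t^2/(2n))^n approaches exp(-t^2/2) as n grows;
# the o(1/n) correction term is dropped in this illustration.
t = 1.5
target = np.exp(-t**2 / 2)
for n in [10, 100, 10000]:
    approx = (1 - t**2 / (2 * n)) ** n
    print(n, abs(approx - target))  # gap shrinks roughly like 1/n
```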

### Basic properties of characteristic functions

1) $\widehat \mu ( 0) = 1$ and $\widehat \mu$ is positive definite, i.e.

$$\sum _ {k, l } \alpha _ {k} \overline{\alpha _ {l} } \widehat \mu ( t _ {k} - t _ {l} ) \geq 0$$

for any finite sets of complex numbers $\alpha _ {k}$ and arguments $t _ {k} \in \mathbf R ^ {1}$;

2) $\widehat \mu$ is uniformly continuous on the entire axis $\mathbf R ^ {1}$;

3) $| \widehat \mu ( t) | \leq 1$, $| \widehat \mu ( t _ {1} ) - \widehat \mu ( t _ {2} ) | ^ {2} \leq 2 ( 1 - \mathop{\rm Re} \widehat \mu ( t _ {1} - t _ {2} ))$, $t, t _ {1} , t _ {2} \in \mathbf R ^ {1}$;

4) $\overline{\widehat \mu ( t) } = \widehat \mu (- t)$; in particular, $\widehat \mu$ takes only real values (and is an even function) if and only if the corresponding probability distribution is symmetric, i.e. $\mu ( B) = \mu (- B)$, where $- B = \{ {x } : {- x \in B } \}$.

5) The characteristic function determines the measure uniquely; the inversion formula

$$\mu ( a, b) = \ \lim\limits _ {T \rightarrow \infty } \ \frac{1}{2 \pi } \int\limits _ {- T } ^ { T } \frac{e ^ {- iat } - e ^ {- ibt } }{it} \widehat \mu ( t) \, dt$$

is valid for any interval $( a, b)$ for which the end points $a < b$ are continuity points of $\mu$. If $\widehat \mu$ is integrable (absolutely if the integral is understood in the sense of Riemann) on $\mathbf R ^ {1}$, then the corresponding distribution function has a density $p$ and

$$p ( x) = \frac{1}{2 \pi } \int\limits _ {- \infty } ^ \infty e ^ {- itx } \widehat \mu ( t) \, dt,\ \ x \in \mathbf R ^ {1} .$$
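A numerical sketch of this inversion: assuming the standard Gaussian characteristic function $e^{-t^2/2}$ as an illustrative input, the Gaussian density can be recovered by a truncated Riemann sum.

```python
import numpy as np

# Approximate p(x) = (1/2pi) * Integral e^{-itx} mu_hat(t) dt by a Riemann
# sum over a truncated range [-t_max, t_max]; the standard Gaussian
# characteristic function exp(-t^2/2) is an illustrative choice.
def density_from_cf(cf, x, t_max=40.0, n=200001):
    t = np.linspace(-t_max, t_max, n)
    dt = t[1] - t[0]
    return (np.exp(-1j * t * x) * cf(t)).sum().real * dt / (2 * np.pi)

cf_gauss = lambda t: np.exp(-t**2 / 2)
print(density_from_cf(cf_gauss, 0.0))  # close to 1/sqrt(2*pi) ~ 0.3989
```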

6) The characteristic function of the convolution of two probability measures (of the sum of two independent random variables) is the product of their characteristic functions.
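This multiplicativity can be sketched empirically: with independent samples (an exponential and a Gaussian distribution, chosen purely for illustration), the empirical characteristic function of the sum matches the product of the individual ones up to Monte Carlo error.

```python
import numpy as np

# Monte Carlo sketch of property 6: the empirical characteristic function of
# X + Y (X, Y independent) matches the product of the two individual
# empirical characteristic functions; distributions are illustrative choices.
rng = np.random.default_rng(0)
n = 100_000
X = rng.exponential(scale=1.0, size=n)
Y = rng.normal(loc=0.0, scale=1.0, size=n)

def ecf(sample, t):
    return np.mean(np.exp(1j * t * sample))

t = 0.7
lhs = ecf(X + Y, t)          # characteristic function of the sum
rhs = ecf(X, t) * ecf(Y, t)  # product of the characteristic functions
print(abs(lhs - rhs))        # small, shrinking like 1/sqrt(n)
```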

The following three properties express the connection between the existence of moments of a random variable and the order of smoothness of its characteristic function.

7) If ${\mathsf E} | X | ^ {n} < \infty$ for some natural number $n$, then for all natural numbers $r \leq n$ the derivative of order $r$ of the characteristic function $\widehat \mu _ {X}$ of the random variable $X$ exists and satisfies the equation

$$\widehat \mu _ {X} ^ {( r) } ( t) = \ \int\limits _ {- \infty } ^ \infty ( ix) ^ {r} e ^ {itx} \ d \mu _ {X} ( x),\ \ t \in \mathbf R ^ {1} .$$

Hence ${\mathsf E} X ^ {r} = i ^ {- r } \widehat \mu _ {X} ^ {( r) } ( 0)$, $r \leq n$.
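A numerical sketch of this moment formula for $r = 2$, using the standard Gaussian characteristic function $e^{-t^2/2}$ as an illustrative example and a central finite difference in place of the exact derivative:

```python
import numpy as np

# Check E X^r = i^{-r} * (d^r/dt^r) mu_hat(0) for r = 2 on the standard
# Gaussian characteristic function exp(-t^2/2) (an illustrative choice).
cf = lambda t: np.exp(-t**2 / 2)
h = 1e-3
second_deriv = (cf(h) - 2 * cf(0.0) + cf(-h)) / h**2  # central difference
EX2 = second_deriv / (1j ** 2)  # i^{-2} = -1
print(EX2.real)                 # ~ 1, the second moment of N(0, 1)
```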

8) If $\widehat \mu _ {X} ^ {( 2n) } ( 0)$ exists, then ${\mathsf E} X ^ {2n} < \infty$;

9) If ${\mathsf E} | X | ^ {n} < \infty$ for all $n$ and if

$$\limsup _ {n \rightarrow \infty } \ \frac{( {\mathsf E} | X | ^ {n} ) ^ {1/n} }{n} = { \frac{1}{R} } ,$$

then for all $| t | \leq R$,

$$\widehat \mu _ {X} ( t) = \ \sum _ {k = 0 } ^ \infty \frac{( it) ^ {k} }{k! } {\mathsf E} X ^ {k} .$$
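For the standard Gaussian (an illustrative choice, with $\mathsf{E} X^k = (k-1)!!$ for even $k$ and $0$ for odd $k$), this series can be summed numerically and compared with $e^{-t^2/2}$:

```python
import numpy as np
from math import factorial

# Sum the moment series sum_k (it)^k / k! * E X^k for the standard Gaussian
# (an illustrative choice): odd moments vanish, even moments are (k-1)!!.
def double_fact(n):  # n!! with the convention (-1)!! = 1
    return 1 if n <= 0 else n * double_fact(n - 2)

t = 0.8
series = sum((1j * t) ** k / factorial(k) * (double_fact(k - 1) if k % 2 == 0 else 0)
             for k in range(40))
print(series.real, np.exp(-t**2 / 2))  # the two values agree
```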

The use of the method of characteristic functions is based mainly on the properties of characteristic functions indicated above and also on the following two theorems.

Bochner's theorem (description of the class of characteristic functions). Suppose that a function $f$ is given on $\mathbf R ^ {1}$ and that $f ( 0) = 1$. For $f$ to be the characteristic function of some probability measure it is necessary and sufficient that it be continuous and positive definite.

Lévy's theorem (continuity of the correspondence). Let $\{ \mu _ {n} \}$ be a sequence of probability measures and let $\{ \widehat \mu _ {n} \}$ be the sequence of their characteristic functions. Then $\{ \mu _ {n} \}$ converges weakly to some probability measure $\mu$ (that is, $\int \phi \, d \mu _ {n} \rightarrow \int \phi \, d \mu$ for every continuous bounded function $\phi$) if and only if $\{ \widehat \mu _ {n} ( t) \}$ converges at every point $t \in \mathbf R ^ {1}$ to some continuous function $f$; in the case of convergence, $f = \widehat \mu$. This implies that the relative compactness (in the sense of weak convergence) of a family of probability measures is equivalent to the equicontinuity at zero of the family of corresponding characteristic functions.
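A classical illustration of this continuity theorem (the Poisson limit theorem, used here only as an example) can be watched at the level of characteristic functions: the characteristic functions of $\mathrm{Bin}(n, \lambda/n)$ converge pointwise to that of the Poisson law.

```python
import numpy as np

# Pointwise convergence of characteristic functions in the Poisson limit
# theorem (an illustrative example of Levy's continuity theorem):
# cf of Bin(n, lam/n) is (1 - p + p e^{it})^n, cf of Poisson(lam) is
# exp(lam * (e^{it} - 1)).
lam, t = 3.0, 1.1
cf_poisson = np.exp(lam * (np.exp(1j * t) - 1))
for n in [10, 100, 10000]:
    p = lam / n
    cf_binom = (1 - p + p * np.exp(1j * t)) ** n
    print(n, abs(cf_binom - cf_poisson))  # gap shrinks as n grows
```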

Bochner's theorem makes it possible to view the Fourier–Stieltjes transform as an isomorphism between the semi-group (under the operation of convolution) of probability measures on $\mathbf R ^ {1}$ and the semi-group (under pointwise multiplication) of positive-definite continuous functions on $\mathbf R ^ {1}$ that have at zero the value one. Lévy's theorem asserts that this algebraic isomorphism is also a topological homeomorphism if in the semi-group of probability measures one has in mind the topology of weak convergence, and in the semi-group of positive-definite functions the topology of uniform convergence on bounded sets.

Expressions are known for the characteristic functions of the basic probability measures (see [Lu], [F]). For example, the characteristic function of the Gaussian measure with mean $m$ and variance $\sigma ^ {2}$ is $\mathop{\rm exp} ( imt - \sigma ^ {2} t ^ {2} /2 )$.

For non-negative integer-valued random variables $X$ one uses, apart from the characteristic function, also its analogue: the generating function

$$\Phi _ {X} ( z) = \ \sum _ {k = 0 } ^ \infty z ^ {k} {\mathsf P} \{ X = k \} ,$$

which is connected with the characteristic function by the relation $\widehat \mu _ {X} ( t) = \Phi _ {X} ( e ^ {it} )$.
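This relation can be sketched numerically for a Poisson random variable (an illustrative choice), whose generating function is $\Phi(z) = e^{\lambda(z-1)}$:

```python
import numpy as np
from math import factorial

# Check mu_hat_X(t) = Phi_X(e^{it}) for X ~ Poisson(lam), an illustrative
# choice whose generating function is Phi(z) = exp(lam * (z - 1)).
lam, t = 2.0, 0.9
Phi = lambda z: np.exp(lam * (z - 1))
cf_from_gen = Phi(np.exp(1j * t))

# Direct (truncated) evaluation of E e^{itX} = sum_k e^{itk} P{X = k}.
cf_direct = sum(np.exp(1j * t * k) * np.exp(-lam) * lam**k / factorial(k)
                for k in range(60))
print(abs(cf_from_gen - cf_direct))  # essentially zero
```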

The characteristic function of a probability measure $\mu$ on a finite-dimensional space $\mathbf R ^ {n}$ is defined similarly:

$$\widehat \mu ( t) = \ \int\limits _ {\mathbf R ^ {n} } e ^ {i \langle t, x\rangle } d \mu ( x),\ \ t \in \mathbf R ^ {n} ,$$

where $\langle t, x\rangle$ denotes the scalar product. The facts stated above are also valid for characteristic functions of probability measures on $\mathbf R ^ {n}$.

#### References

[Lu] E. Lukacs, "Characteristic functions", Griffin (1970) MR0346874 MR0259980 Zbl 0201.20404 Zbl 0198.23804

[F] W. Feller, "An introduction to probability theory and its applications", 2, Wiley (1971)

[PR] Yu.V. Prokhorov, Yu.A. Rozanov, "Probability theory, basic concepts. Limit theorems, random processes", Springer (1969) (Translated from Russian) MR0251754

[Z] V.M. Zolotarev, "One-dimensional stable distributions", Amer. Math. Soc. (1986) (Translated from Russian) MR0854867 Zbl 0589.60015