
Convergence, types of



2020 Mathematics Subject Classification: Primary: 54A20 [MSN][ZBL] $ \newcommand{\set}[1]{\left\{#1\right\}} \newcommand{\abs}[1]{\left|#1\right|} \newcommand{\norm}[1]{\left\|#1\right\|} $

One of the basic concepts of mathematical analysis, signifying that a mathematical object has a limit. In this sense one speaks of the convergence of a sequence of elements, convergence of a series, convergence of an infinite product, convergence of a continued fraction, convergence of an integral, etc. The concept of convergence arises, for example, in the study of mathematical objects and their approximation by simpler objects. Thus, in order to calculate the area of a circle, a sequence of areas of regular polygons inscribed in this circle is used; for the approximate calculation of integrals of functions, approximations are used involving piecewise-linear functions or, more generally, splines, etc. One can say that mathematical analysis begins at the moment when the concept of convergence has been introduced on a set of elements.
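
For instance, the area of the regular $n$-gon inscribed in a circle of radius $r$ equals $$ S_n = \frac{n r^2}{2}\,\sin\frac{2\pi}{n}, $$ and since $n\sin\frac{2\pi}{n} \rightarrow 2\pi$ as $n\rightarrow\infty$, the sequence $(S_n)$ converges to the area $\pi r^2$ of the circle.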

I. Convergence of sequences.

Different concepts of convergence can be applied to one and the same set of elements, depending on the problem under consideration. The concept of convergence plays an important role in the solution of various equations (algebraic, differential, integral, etc.) and particularly in finding approximate numerical solutions for them. For example, the method of successive approximation makes it possible to obtain a sequence of functions converging to the solution of a given ordinary differential equation, thereby simultaneously proving, under appropriate conditions, the existence of a solution and providing a method for calculating this solution with the required accuracy. For both ordinary and partial differential equations there are various convergent difference methods for their numerical solution, suitable for use on modern computers.
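
For example, for the equation $y' = y$ with the initial condition $y(0) = 1$, the successive approximations $$ y_0(x) = 1, \quad y_{n+1}(x) = 1 + \int_0^x y_n(t)\,dt, \quad n = 0,1,\ldots, $$ give $y_n(x) = \sum_{k=0}^n \frac{x^k}{k!}$, and this sequence converges, uniformly on every bounded interval, to the solution $y(x) = e^x$.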

If a concept of convergence of sequences of elements of a set $X$ is introduced, i.e. a class is defined within the totality of all given sequences, every member of which is said to be a convergent sequence, while every convergent sequence corresponds to a certain element of $X$, called its limit, then the set $X$ itself is called a space with convergence.

It is usually required of a concept of convergence of sequences that it possess the following properties:

1) every sequence of elements of $X$ can have at most one limit;

2) every stationary sequence $(x,x,\ldots)$, $x\in X$, is convergent and the element $x$ is its limit;

3) every subsequence of a convergent sequence is also convergent and has the same limit as the whole sequence.

When these conditions are fulfilled, the space $X$ is often called a space with convergence in the sense of Fréchet. An example of such a space is any topological Hausdorff space, and consequently any metric space, especially any countably-normed space, and therefore any normed space (although by no means every semi-normed space). In order for a sequence to converge in a complete metric space it is necessary and sufficient that it be a Cauchy sequence.
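
The reservation about semi-normed spaces concerns the possible failure of property 1): if the semi-norm $p$ of the space vanishes at some non-zero element $x$, then $p(0 - x) = 0$, so the stationary sequence $(0,0,\ldots)$ converges both to $0$ and to $x$.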

An example of a non-metrizable space with convergence in the sense of Fréchet is the space of all real functions defined on the number axis $\R$, equipped with pointwise convergence: the convergence of a sequence of functions $f_n:\R \rightarrow \R$, $n=1,2,\ldots,$ signifies the convergence of the sequence of numbers $(f_n(x))$ for every fixed $x\in\R$.

If for every subset $A \subset X$ in the space $X$ with convergence in the sense of Fréchet one defines the sequential closure $\bar{A}$ as the totality of all points of $X$ that are limits of sequences of points belonging to $A$, then $X$ may prove not to be a topological space, since the closure of the closure $\bar{A}$ of every set $A$ in the given definition need not coincide with $\bar{A}$.

If two definitions of convergence are introduced on the same set, and if every sequence that converges in the sense of the first definition also converges in the sense of the second, then one says that the second convergence is stronger than the first. In every space $X$ with convergence it is possible to introduce a stronger convergence such that the operation of sequential closure thus generated makes $X$ a topological space, or, more concisely, every space with convergence can be imbedded in a topological space consisting of the same points.

On every topological space, the concept of convergence of sequences of points of the space is defined, but this definition is insufficient, generally speaking, to describe the closure of an arbitrary set in this space, i.e. to define the points of contact of the set; consequently, it is in general insufficient to describe the topology of the given space completely (a Fréchet–Urysohn space is one in which the topology is determined by the convergence of sequences) and so the concept of convergence of a "generalized sequence" is introduced.

A partially ordered set $\mathfrak{A} = (\mathfrak{A},\geq)$ is called a directed set if for any two elements there is an element following both of them. A mapping $f:\mathfrak{A}\rightarrow X$ of a directed set $\mathfrak{A}$ into a set $X$ is called a generalized sequence, a net or a directionality in $X$. A generalized sequence $f:\mathfrak{A}\rightarrow X$ in a topological space $X$ is said to be convergent to a point $x_0$ in $X$ if for every neighbourhood $U$ of $x_0$ there is an $\alpha_0 \in \mathfrak{A}$ such that for all $\alpha \geq \alpha_0$, $\alpha \in \mathfrak{A}$, the inclusion $f(\alpha) \in U$ holds. In this case one says that the limit of the generalized sequence $f:\mathfrak{A}\rightarrow X$ exists and is equal to $x_0$; this is denoted by $\lim_{\mathfrak{A}}f(\alpha) = x_0$.
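
The set of natural numbers with its usual order is a directed set, so that every ordinary sequence is a generalized sequence. A less trivial example: the finite subsets of the set of natural numbers, ordered by inclusion, form a directed set, and for an absolutely convergent series $\sum_{n=1}^\infty a_n$ the net assigning to each finite subset $F$ the partial sum $\sum_{n\in F} a_n$ converges, in the sense just described, to the sum of the series.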

In these terms, the closure of a set lying in a topological space $X$ is described in the following way: In order for a point $x$ to belong to the closure $\bar{A}$ of a set $A \subset X$ it is necessary and sufficient that a certain generalized sequence of points in $A$ converges to $x$; for a topological space to be a Hausdorff space, it is necessary and sufficient that every generalized sequence of points of it has at most one limit.

In terms of convergence of generalized sequences, it is also possible to formulate a criterion for the continuity of a mapping $F$ of a topological space $X$ into a topological space $Y$: For such a mapping $F$ to be continuous at a point $x_0 \in X$ it is necessary and sufficient that for every generalized sequence $f:\mathfrak{A}\rightarrow X$ for which $\lim_{\mathfrak{A}}f(\alpha) = x_0$, the condition $\lim_{\mathfrak{A}}F(f(\alpha)) = F(x_0)$ is fulfilled.

II. Convergence of sequences and series of numbers.

The simplest examples illustrating the concept of convergence are convergent sequences of numbers, i.e. sequences of complex numbers $(z_n)$ that have finite limits, and convergent series of numbers, i.e. series for which the sequence of partial sums converges. Convergent sequences and series of numbers are often used to obtain various estimates, while in numerical methods they are used for the approximate calculation of the values of functions and constants. In problems of this type, it is important to know the "rate" at which a given sequence converges to its limit. For example, the number $\pi$ can be represented in the form of a sum of series in the following two ways: $$ \pi = 4 \sum_{n=1}^\infty \frac{(-1)^{n-1}}{2n-1} = 4 \sum_{n=1}^\infty \frac{(-1)^{n-1}}{2n-1} \left(\frac{4}{5^{2n-1}} - \frac{1}{239^{2n-1}}\right). $$ It is clear that for the approximate calculation of the number $\pi$ with a sufficient degree of accuracy, it is advisable to use the second formula (Machin's formula), since it is possible, using the second formula, to achieve the same degree of accuracy in the calculation using a smaller number of terms of the series.
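
Indeed, both series are alternating with terms decreasing in absolute value, so the error committed by breaking off after $n$ terms does not exceed the absolute value of the first omitted term. For the first series this bound is $\frac{4}{2n+1}$, which falls below $10^{-6}$ only for $n \geq 2\cdot 10^6$, while for the second series it does not exceed $\frac{16}{(2n+1)\,5^{2n+1}}$, which already for $n = 10$ is less than $2\cdot 10^{-15}$.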

In order to compare the convergence of two series, the following definition is used. Let there be given two convergent series with non-negative terms \begin{equation} \label{eq1} \sum_{n=1}^\infty a_n, \quad a_n \geq 0, \end{equation} \begin{equation} \label{eq2} \sum_{n=1}^\infty b_n, \quad b_n \geq 0, \end{equation} and let $\alpha_n = \sum_{k=1}^\infty a_{n+k}$, $\beta_n = \sum_{k=1}^\infty b_{n+k}$ be their remainders of order $n=1,2,\ldots$. The series \eqref{eq1} is said to converge faster than the series \eqref{eq2}, or, equivalently, the series \eqref{eq2} is said to converge more slowly than the series \eqref{eq1}, if $\alpha_n=o(\beta_n)$ as $n\rightarrow\infty$, i.e. if a null sequence $(\epsilon_n)$ exists such that $\alpha_n=\epsilon_n\beta_n$, $n=1,2,\ldots$.
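
For example, the series $\sum_{n=1}^\infty 2^{-n}$ converges faster than the series $\sum_{n=1}^\infty n^{-2}$: here $\alpha_n = 2^{-n}$, while $$ \beta_n = \sum_{k=n+1}^\infty \frac{1}{k^2} > \int_{n+1}^\infty \frac{dx}{x^2} = \frac{1}{n+1}, $$ so that $\alpha_n = o(\beta_n)$ as $n\rightarrow\infty$.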

If the series \eqref{eq1} and \eqref{eq2} are divergent and $s_n=\sum_{k=1}^n a_k$, $\sigma_n = \sum_{k=1}^n b_k$ are their partial sums of order $n=1,2,\ldots$ then \eqref{eq1} is said to diverge faster than \eqref{eq2}, or \eqref{eq2} is said to diverge more slowly than \eqref{eq1}, if $\sigma_n = o(s_n)$ as $n\rightarrow\infty$.
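
For example, the series $\sum_{n=1}^\infty 1$ diverges faster than the harmonic series $\sum_{n=1}^\infty \frac{1}{n}$: the partial sums of the latter grow like $\ln n$, while those of the former equal $n$, and $\ln n = o(n)$ as $n\rightarrow\infty$.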

For every convergent series with non-negative terms there is a series, also with non-negative terms, that converges more slowly, while for every divergent series, there is one that diverges more slowly. Methods exist that make it possible to transform a given convergent series into one that converges faster without altering its sum. This can be done using, for example, the Abel transformation.

In addition to the ordinary concept of the sum of a series, indicated above, there are other, more general definitions of its sum, which are based on different methods of summation of series. In these methods, certain sequences constructed from the terms of the series are used instead of the sequence of partial sums; such sequences may converge in cases where the sequence of partial sums diverges. Their limits are called generalized sums of the series.
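
For example, the partial sums of the divergent series $1 - 1 + 1 - 1 + \cdots$ take alternately the values $1$ and $0$, but their arithmetic means $\frac{s_1 + \cdots + s_n}{n}$ converge to $\frac12$, which is therefore the generalized (Cesàro) sum of this series.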

The concept of faster convergence and divergence is also used for improper integrals, where one of the most widespread methods of acceleration of convergence (divergence) of integrals is the method of integration by parts. There are also other methods of averaging improper integrals that are analogous to methods of summation of series, and that make it possible to give a definition of generalized convergence for certain divergent integrals.
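
For example, a single integration by parts gives $$ \int_1^\infty \frac{\sin x}{x}\,dx = \cos 1 - \int_1^\infty \frac{\cos x}{x^2}\,dx, $$ and the integral on the right converges absolutely, since its integrand does not exceed $1/x^2$ in absolute value, whereas the original integral converges only conditionally.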

III. Convergence of series and sequences of functions.

In the case of sequences of functions \begin{equation} \label{eq3} f_n : X \rightarrow Y, \quad n = 1,2,\ldots, \end{equation} under corresponding assumptions on the sets $X$ and $Y$, various concepts of convergence exist illustrating the wide variety of concrete realizations of this concept. If $Y$ is a topological space, and if the sequence \eqref{eq3} converges for every fixed $x\in X$, then it is said to be (pointwise) convergent on the set $X$. If $Y$ is a uniform space (in particular, a metric space or a topological group), then it is possible to introduce the concept of a uniformly-convergent sequence.
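
For example, the sequence $f_n(x) = x^n$, $n=1,2,\ldots,$ converges pointwise on $[0,1]$ to the function equal to $0$ for $0 \leq x < 1$ and to $1$ at $x = 1$, but this convergence is not uniform, since $\sup_{0\leq x<1} x^n = 1$ for every $n$.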

Let $X=(X,S,\mu)$ be a measure space (i.e. $X$ a set, $S$ a $\sigma$-algebra of subsets of $X$, and $\mu$ a real-valued measure on $S$), let $Y = \bar{\R} = \R \cup \{\pm\infty\}$ be the extended set of real numbers $\R$, and let the functions \begin{equation} \label{eq4} f_n : X \rightarrow \bar{\R}, \quad n = 1,2,\ldots, \end{equation} be almost-everywhere finite and measurable.

The sequence \eqref{eq4} is said to be almost-everywhere convergent to a function $f : X \rightarrow \bar{\R}$ if there exists a set $X_0 \subset X$ of measure zero such that the restrictions of the functions \eqref{eq4} to $X \setminus X_0$ converge on this set to the restriction of $f$ to it. If the sequence \eqref{eq4} converges almost-everywhere to a function $f$, then this function is also almost-everywhere finite and measurable. The link between almost-everywhere convergence of a sequence and uniform convergence is established by the Egorov theorem.

The sequence \eqref{eq4} is said to converge in measure on the set $X$ to a measurable function $f : X \rightarrow \bar{\R}$ if, for any $\epsilon > 0$, the condition $$ \lim_{n\rightarrow\infty} \mu\set{ x\in X : \abs{f_n(x) - f(x)} \geq \epsilon } = 0 $$ is fulfilled.

If the sequence \eqref{eq4} converges almost-everywhere to a function $f$ and $\mu(X) < \infty$, then it converges to $f$ in measure as well, while if the sequence \eqref{eq4} converges to $f$ in measure, then there exists a subsequence of \eqref{eq4} that converges to $f$ almost-everywhere.
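
Convergence in measure does not, however, imply almost-everywhere convergence of the whole sequence. For example, on $X = [0,1]$ with Lebesgue measure, the sequence of characteristic functions of the intervals $$ [0,1],\ \left[0,\tfrac12\right],\ \left[\tfrac12,1\right],\ \left[0,\tfrac13\right],\ \left[\tfrac13,\tfrac23\right],\ \left[\tfrac23,1\right],\ \ldots $$ converges to zero in measure, since the lengths of the intervals tend to zero, but it converges at no point of $[0,1]$; nevertheless the subsequence formed by the characteristic functions of the intervals $[0,\tfrac1k]$, $k=1,2,\ldots,$ converges to zero almost-everywhere.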

For a function $f : X \rightarrow \bar{\R}$, let \begin{equation} \label{eq5} \norm{f}_p = \left( \int_X \abs{f(x)}^p \,dx \right)^{1/p}, \quad 1 \leq p < \infty, \end{equation} \begin{equation} \label{eq6} \norm{f}_\infty = \mathop{\mathrm{ess\,sup}}_{x\in X} \abs{f(x)}, \end{equation} and let $L_p(X)$ be the space of functions $f$ for which \begin{equation} \label{eq7} \norm{f}_p < \infty, \quad 1 \leq p \leq \infty. \end{equation} These spaces are usually called Lebesgue spaces. On equivalence classes relative to the measure $\mu$ of functions for which condition \eqref{eq7} is fulfilled, the functional $\norm{\cdot}_p$ is a norm (see Convergence in norm).

If the sequence \eqref{eq4} converges in the norm \eqref{eq6} to a function, then it converges to this function almost-everywhere. If a sequence $f_n\in L_p(X)$, $n=1,2,\ldots$, converges in the norm $\norm{\cdot}_p$, $1 \leq p \leq \infty$, to a function $f : X \rightarrow \bar{\R}$, then $f\in L_p(X)$ and the given sequence is said to be convergent to $f$ in the space $L_p(X)$. Convergence in the norm $\norm{\cdot}_p$, $1 \leq p \leq \infty$, is also called strong convergence in the space $L_p(X)$, or, when $1 \leq p < \infty$, convergence in the mean of order $p$; in more detail, when $p=1$, it is called convergence in the mean, and when $p=2$, convergence in the sense of the quadratic mean. An example of sequences of functions converging in the sense of the quadratic mean is provided by the sequences of partial sums of the Fourier series of functions belonging to the space $L_2[-\pi,\pi]$.

If the sequence \eqref{eq4} converges in $L_p(X)$, $1 \leq p \leq \infty$, to a function $f$, then it converges to $f$ on the set $X$ in measure as well, and it is therefore possible to extract a subsequence from \eqref{eq4} that will converge to $f$ almost-everywhere on $X$. If $1 \leq p \leq q \leq \infty$, if $\mu(X) < \infty$ and if the sequence \eqref{eq4} converges in $L_q(X)$, then it also converges in $L_p(X)$.
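
The condition $\mu(X) < \infty$ in the last statement cannot be dropped. For example, on $X = \R$ with Lebesgue measure the functions $f_n = \frac{1}{n}\chi_{[0,n]}$, where $\chi_{[0,n]}$ is the characteristic function of $[0,n]$, converge to zero in $L_\infty(\R)$, since $\norm{f_n}_\infty = \frac{1}{n}$, but do not converge to zero in $L_1(\R)$, since $\norm{f_n}_1 = 1$ for all $n$.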

A sequence \eqref{eq4} of functions $f_n \in L_p(X)$, $1<p<\infty$, is said to be weakly convergent in $L_p(X)$ to a function $f\in L_p(X)$ if for every function $g\in L_q(X)$, where $\frac{1}{p} + \frac{1}{q} = 1$, $$ \lim_{n\rightarrow\infty} \int_X \bigl( f_n(x) - f(x) \bigr) g(x) \,dx = 0. $$ If a sequence $f_n\in L_p(X)$, $n=1,2,\ldots,$ converges strongly in $L_p(X)$, $1 < p < \infty$, then it also converges weakly to the same function; but there exist weakly-convergent sequences in $L_p(X)$ that do not converge strongly. For example, the sequence of functions $\sin nx$, $n=1,2,\ldots$, converges weakly to zero in $L_2[-\pi,\pi]$, but does not converge strongly. In fact, for every function $g\in L_2[-\pi,\pi]$ the integrals $$ \frac{1}{\pi}\int_{-\pi}^\pi g(x) \sin nx \,dx $$ are the Fourier coefficients of $g$ with respect to the system $(\sin nx)$ and therefore tend to zero as $n\rightarrow \infty$; however, $\norm{\sin nx}_2 = \sqrt{\pi}$, $n=1,2,\ldots$.

The limits of sequences of functions that converge almost-everywhere, or in measure, or in the sense of strong or weak convergence in $L_p(X)$, are, in the case of a complete measure $\mu$, defined uniquely up to functions that are equivalent relative to $\mu$.

Generalizations of the Lebesgue space $L_p(X)$ include the Nikol'skii space, the Orlicz space, the Sobolev space, and a number of others.

The concepts of strong and weak convergence can be generalized to more general spaces, in particular to normed linear spaces.

Other concepts of convergence of a sequence of functions arise in the theory of generalized functions. For example, let $D$ be the space of test functions, which consists of infinitely-differentiable functions $f:\R \rightarrow \R$ with compact support. A sequence $f_n\in D$, $n=1,2,\ldots$, is said to be convergent to $f$ in the space $D$ if there exists an interval $[a,b]$ such that the supports of all functions $f_n$, $n=1,2,\ldots$ and $f$ are contained in it, while the sequences $\bigl(f_n^{(k)}\bigr)$ of the functions $f_n$ themselves and all their derivatives converge uniformly on $[a,b]$ respectively to $f^{(k)}$, $k=0,1,\ldots$. In the study of the Fourier transforms of generalized functions, other spaces of test functions with convergence are examined.
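
A typical test function in $D$ is $$ \varphi(x) = \begin{cases} \exp\left( -\dfrac{1}{1-x^2} \right), & \abs{x} < 1, \\ 0, & \abs{x} \geq 1, \end{cases} $$ which is infinitely differentiable on $\R$ and has support $[-1,1]$. The sequence $\frac1n \varphi$ converges to zero in $D$, while the sequence of translates $\varphi(x-n)$, $n=1,2,\ldots,$ does not converge in $D$, since the supports $[n-1,n+1]$ are not contained in any common interval.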

The different forms of convergence listed above are used in studying various questions of mathematical analysis. Thus, the concept of uniform convergence makes it possible to formulate conditions under which continuity is preserved under passage to the limit. For example, if $X$ is a topological space and $Y$ is a metric space, if the terms of the sequence \eqref{eq3} are continuous on $X$, and the sequence \eqref{eq3} converges uniformly on $X$, then the limit function is also continuous on $X$. In terms of the concept of almost-everywhere convergence or convergence in the mean of order $p$, it is possible to formulate conditions for passing to the limit under the integral sign. If $X$ is a space with a measure $\mu$, if $Y=\bar{\R}$, if the sequence $f_n\in L_1(X)$, $n=1,2,\ldots,$ converges almost-everywhere on $X$, and if a function $F \in L_1(X)$ exists such that for almost-all $x\in X$ and all $n=1,2,\ldots$ the inequality $\abs{f_n(x)}\leq F(x)$ is fulfilled, then \begin{equation} \label{eq8} \lim_{n \rightarrow \infty} \int_X f_n(x) \,dx = \int_X \left( \lim_{n \rightarrow \infty} f_n(x) \right) \,dx. \end{equation} If $\mu(X) < \infty$, $f_n\in L_p(X)$, $n=1,2,\ldots$, $1<p<\infty$, and if the sequence $(f_n)$ converges weakly (strongly) in $L_p(X)$, then formula \eqref{eq8} holds.
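
In the first of these statements the existence of an integrable majorant $F$ cannot be omitted. For example, on $X = [0,1]$ with Lebesgue measure the functions $f_n(x) = nxe^{-nx^2}$ converge to zero at every point, yet a direct computation gives $$ \lim_{n\rightarrow\infty} \int_0^1 nxe^{-nx^2}\,dx = \lim_{n\rightarrow\infty} \frac{1 - e^{-n}}{2} = \frac12 \neq 0 = \int_0^1 \left( \lim_{n\rightarrow\infty} f_n(x) \right) dx. $$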

In probability theory one uses for sequences of random variables the phrase "almost-certain (or almost-sure) convergence" (convergence with probability one, cf. Convergence, almost-certain), for almost-everywhere convergence; convergence in probability, for convergence in measure; and the concept of convergence in distribution.

A generalization of the concept of convergence of a sequence of functions is convergence with respect to a certain parameter of a family of functions belonging to a certain topological space.

Mathematicians in ancient times (Euclid, Archimedes) in effect used the concept of convergence when employing series to find areas and volumes; they established the convergence of the series that arose by means of the method of exhaustion. The term "convergence" was introduced in the context of series in 1668 by J. Gregory in his research on the methods of calculating the area of a disc and of a hyperbolic sector. Mathematicians of the 17th century usually had a fairly clear picture of the convergence of the series they used, but they could not produce proofs of this convergence that are rigorous in the modern sense. In the 18th century, the deliberate use of divergent series became widespread in mathematical analysis (especially in the work of L. Euler). This resulted, on the one hand, in many misunderstandings and errors which were not eliminated until a clear theory of convergence was developed, and, on the other hand, in an early version of the modern theory of summation of divergent series. Rigorous methods for studying the convergence of series were worked out in the 19th century by A.L. Cauchy, N.H. Abel, B. Bolzano, K. Weierstrass, and others. The concept of uniform convergence was formulated in the work of Abel (1826), P. Seidel (1847–1848), G. Stokes (1847–1848) and Cauchy (1853), and began to be used systematically in Weierstrass' lectures on mathematical analysis in the late 1850's. Further extensions of the concept of convergence arose in the development of function theory, functional analysis and topology.

Comments

The statement that formula \eqref{eq8} holds under the conditions given above is generally known as Lebesgue's dominated convergence theorem.

A null sequence is a sequence converging to zero. The essential supremum of a non-negative measurable function $g:X\rightarrow \R$ (where $(X,\mu)$ is a measure space) is the infimum of the set $S$ of all $\alpha \in \R$ such that $$ \mu\left( g^{-1}\left( (\alpha,\infty] \right) \right) = 0 $$ (if $S=\emptyset$, one puts $\inf S = \infty$). The essential supremum, $\norm{f}_\infty$, of an arbitrary (complex-valued) measurable function $f$ on $X$ is the essential supremum of $\abs{f}$ (cf. [Ru]).
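
For example, if $g$ is the characteristic function of the set of rational numbers, regarded as a function on $\R$ with Lebesgue measure, then $\sup_{x\in\R} g(x) = 1$, while the essential supremum of $g$ equals $0$, since $g$ differs from zero only on a set of measure zero.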

References

[Ha] P.R. Halmos, "Measure theory", Van Nostrand (1950) MR0033869 Zbl 0040.16802
[Ru] W. Rudin, "Real and complex analysis", McGraw-Hill (1974) p. 24 MR0344043 Zbl 0278.26001