Summation methods

summability methods

Methods for constructing generalized sums of series, generalized limits of sequences, and values of improper integrals.

In mathematical analysis, the need arises to generalize the concept of the sum of a series (limit of a sequence, value of an integral) to include the case where the series (sequence, integral) diverges in the ordinary sense. This generalization usually takes the form of a rule or operation, and is called a summation method.

1) The Fourier series of a continuous $2\pi$-periodic function $f$ can be divergent on an infinite set of points $E \subset [0, 2\pi]$. The sequence $\{\sigma_n(x)\}$ of arithmetic means of the first $n+1$ partial sums of this series,

$$ \tag{1} \sigma_n(x) = \frac{s_0(x) + \dots + s_n(x)}{n+1} , $$

converges uniformly on the whole $x$-axis to $f$. If the sum of the series is defined as

$$ \lim\limits _ {n \rightarrow \infty } \sigma _ {n} ( x), $$

then in this sense the Fourier series of $f$ will converge uniformly to $f$ on the whole $x$-axis.
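
As a numerical illustration of the averaging in (1), the following sketch (Python with NumPy) computes the Fourier partial sums of one continuous $2\pi$-periodic test function, $f(x) = |\sin x|$, chosen here purely for convenience, and forms the means $\sigma_n(x)$; for such a well-behaved $f$ the partial sums already converge, so the sketch only shows how (1) is computed, not a case where it is indispensable.

```python
import numpy as np

def f(x):
    return np.abs(np.sin(x))      # a continuous 2*pi-periodic test function (our choice)

# Fourier coefficients of f, approximated by a Riemann sum.
N = 40
ts = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
dt = ts[1] - ts[0]
a = [(f(ts) * np.cos(k * ts)).sum() * dt / np.pi for k in range(N + 1)]
b = [(f(ts) * np.sin(k * ts)).sum() * dt / np.pi for k in range(N + 1)]

# Partial sums s_0(x), ..., s_N(x) of the Fourier series on a grid of x-values.
xs = np.linspace(0.0, 2 * np.pi, 2001)
s = np.zeros((N + 1, xs.size))
s[0] = a[0] / 2
for k in range(1, N + 1):
    s[k] = s[k - 1] + a[k] * np.cos(k * xs) + b[k] * np.sin(k * xs)

# Formula (1): sigma_n(x) = (s_0(x) + ... + s_n(x)) / (n + 1).
sigma = np.cumsum(s, axis=0) / np.arange(1, N + 2)[:, None]

print(np.max(np.abs(sigma[N] - f(xs))))   # sup-norm error of the mean sigma_N
print(np.max(np.abs(s[N] - f(xs))))       # error of the ordinary partial sum, for reference
```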

2) A series

$$ \tag{2} \sum_{n=0}^\infty c_n , $$

obtained as a result of multiplying two series

$$ \sum_{n=0}^\infty a_n \quad \textrm{ and } \quad \sum_{n=0}^\infty b_n , $$

which converge respectively to $A$ and $B$, may prove to be divergent. If the sum of the series (2) is defined as in example 1), i.e. as the limit of the sequence of arithmetic means of the partial sums, then in this sense the product of the two given series will converge to the sum $C = AB$.
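
A minimal numerical sketch of this effect, with the standard illustrative choice $a_n = b_n = (-1)^n/\sqrt{n+1}$ (an assumption made here; the article does not fix particular series): both factor series converge, their Cauchy product diverges, and the arithmetic means of the product's partial sums approach $AB$.

```python
import numpy as np

N = 5000
n = np.arange(N)
a = (-1.0) ** n / np.sqrt(n + 1)   # terms of a conditionally convergent series
b = a.copy()

# Cauchy product c_n = a_0*b_n + a_1*b_{n-1} + ... + a_n*b_0.
c = np.convolve(a, b)[:N]

s = np.cumsum(c)                            # partial sums of the product series (divergent)
sigma = np.cumsum(s) / np.arange(1, N + 1)  # their arithmetic means

A_est = np.cumsum(np.cumsum(a))[-1] / N     # (C,1) estimate of A (here A = B)

print(s[-4:])       # the partial sums keep oscillating
print(sigma[-1])    # the arithmetic means approach A*B
print(A_est ** 2)   # compare with the product of the sums
```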

3) The power series

$$ \tag{3} \sum_{n=0}^\infty z^n $$

converges for $|z| < 1$ to the sum $1/(1-z)$ and diverges for $|z| \geq 1$. If the sum of (3) is defined as

$$ \lim\limits_{x \rightarrow \infty} e^{-x} \sum_{n=0}^\infty \frac{s_n x^n}{n!} , $$

where $s_n$ are the partial sums of (3), then in this sense (3) will converge for all $z$ that satisfy the condition $\mathop{\rm Re} z < 1$, and its sum will be the function $1/(1-z)$ (see Borel summation method).
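
The following sketch evaluates this exponential mean numerically, truncating the infinite series at a finite number of terms; the sample points $z$, the value of $x$ and the truncation length are arbitrary illustrative choices.

```python
import numpy as np

def borel_sum(z, x=30.0, n_max=400):
    """e**(-x) * sum_n s_n * x**n / n!, truncated at n_max terms."""
    s = 0.0 + 0.0j       # partial sum s_n = 1 + z + ... + z**n of (3)
    zp = 1.0 + 0.0j      # z**n
    term = 1.0           # x**n / n!, updated recursively to avoid large factorials
    total = 0.0 + 0.0j
    for n in range(n_max):
        s += zp
        total += s * term
        zp *= z
        term *= x / (n + 1)
    return np.exp(-x) * total

for z in (-2.0, 0.5 - 3.0j, 2.0):
    print(z, borel_sum(z), 1.0 / (1.0 - z))   # agreement when Re z < 1; blow-up at z = 2
```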

The most important properties of summation methods are regularity (see Regular summation methods) and linearity (see Linear summation method); the most common summation methods possess both properties. Many methods also possess the property of translativity (see Translativity of a summation method). Matrix summation methods and semi-continuous summation methods form a broad class of summation methods (cf. Matrix summation method; Semi-continuous summation method). These methods are linear, and regularity conditions have been established for them. Matrix summation methods include, in particular, the Voronoi summation method and the Cesàro summation methods. Methods defined by matrices each row of which contains only finitely many non-zero entries (see Row-finite summation method), and in particular by triangular matrices (see Triangular summation method), form a subclass of the matrix summation methods. Among the semi-continuous summation methods are the Abel summation method, the Borel summation method, the Mittag-Leffler summation method, the Lindelöf summation method, and the Riesz summation method. Summation methods of other forms also exist, such as the Borel integral summation method and the Hölder summation methods.
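
To make the notion of a matrix summation method concrete, here is a minimal sketch that applies one particular triangular matrix, the Cesàro matrix $a_{mn} = 1/(m+1)$ for $n \leq m$, to a bounded divergent sequence and checks the regularity (Silverman–Toeplitz) conditions numerically; the choice of matrix and of test sequence is illustrative only.

```python
import numpy as np

# Matrix summation method: the sequence (s_n) is transformed into
# t_m = sum_n a[m, n] * s_n, and lim t_m is taken as the generalized limit.
M = 200
A = np.tril(np.ones((M, M))) / np.arange(1, M + 1)[:, None]    # a[m, n] = 1/(m+1) for n <= m

s = np.array([1.0 if k % 2 == 0 else 0.0 for k in range(M)])   # 1, 0, 1, 0, ... (divergent)
t = A @ s
print(t[-3:])                              # the transformed sequence tends to 1/2

# Regularity conditions, verified numerically for this matrix:
print(np.allclose(A.sum(axis=1), 1.0))     # row sums tend to 1
print(A[-1, :5])                           # entries of each fixed column tend to 0
print(np.abs(A).sum(axis=1).max())         # absolute row sums stay bounded
```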

The same sequence (series) may be summable by one method, but not by another. The set of all sequences (series) that are summable by a given method is called the summability field of the given method.

If the summability field of one summation method contains the summability field of another, one speaks of the inclusion of summation methods; when the fields coincide, one speaks of the equivalence of the summation methods. If the summability field of a method consists only of convergent sequences, then one says that the summation method is equivalent to convergence. Establishing conditions under which one summation method includes another is one of the problems of the theory of summability. Two or more summation methods may be compatible or incompatible: they are said to be compatible if they cannot sum one and the same sequence to different limits. In those cases where from the summability of the series

$$ \sum_{k=0}^\infty u_k $$

by a method $ A $ it follows that the series

$$ \sum_{k=0}^\infty \lambda_k u_k $$

is summable by a method $ B $, one says that the numbers $ \lambda _ {k} $ are summability multipliers of type $ ( A, B) $.
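
As a small numerical illustration of compatibility (and of inclusion), the sketch below sums Grandi's series $1 - 1 + 1 - \dots$ both by the Cesàro method and by the Abel method; the series and the evaluation point $x$ are illustrative choices, and both methods give a value close to $1/2$.

```python
import numpy as np

N = 100000
terms = (-1.0) ** np.arange(N)      # Grandi's series 1 - 1 + 1 - ...

# Cesaro method: arithmetic means of the partial sums.
partial = np.cumsum(terms)
cesaro = np.cumsum(partial) / np.arange(1, N + 1)

# Abel method: lim_{x -> 1-} sum a_n * x**n, approximated at x close to 1.
x = 0.9999
abel = np.sum(terms * x ** np.arange(N))

print(cesaro[-1], abel)             # both are close to 0.5
```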

There are two distinct types of theorems on summation methods. In theorems of the first (Abelian) type, the properties of a sequence enable one to infer the properties of the averages of this sequence obtained as a result of the transformation defining the summation method. For example, Cauchy's theorem establishes that $ ( s _ {0} + \dots + s _ {n} )/( n+ 1) \rightarrow s $ always follows from $ s _ {n} \rightarrow s $. In theorems of the second (Tauberian) type, the properties of the averages corresponding to the given summation method plus additional conditions enable one to infer the properties of the transformed sequence (see Tauberian theorems).
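
In outline, the argument behind Cauchy's theorem runs as follows. Write $s_n = s + \varepsilon_n$ with $\varepsilon_n \rightarrow 0$; then, for any fixed $N < n$,

$$ \frac{s_0 + \dots + s_n}{n+1} - s = \frac{\varepsilon_0 + \dots + \varepsilon_N}{n+1} + \frac{\varepsilon_{N+1} + \dots + \varepsilon_n}{n+1} . $$

Given $\varepsilon > 0$, choose $N$ so that $|\varepsilon_k| < \varepsilon/2$ for $k > N$; the second term is then at most $\varepsilon/2$ in absolute value, while the first tends to $0$ as $n \rightarrow \infty$, so the arithmetic means differ from $s$ by less than $\varepsilon$ for all sufficiently large $n$.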

By analogy with ordinary convergence, concepts of special forms of summability are introduced: absolute summability, unconditional summability, strong summability, almost-summability, $ \lambda $- summability, and others.

The concept of a generalized limit has also been applied to functions and integrals; in these cases one speaks of the summation of a function (or of an integral). For example, for a function $s(y)$ defined for all $y$, a summation method analogous to the matrix summation method for sequences consists in considering an integral transform

$$ t( x) = \int\limits _ { 0 } ^ \infty c( x, y) s( y) dy $$

with a kernel $c(x, y)$; the number $s$ is assigned to the function $s(y)$ as its generalized limit as $y \rightarrow \infty$ if

$$ \lim\limits _ {x \rightarrow \infty } t( x) = s. $$
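
For instance, with the kernel $c(x, y) = 1/x$ for $0 \leq y \leq x$ and $c(x, y) = 0$ otherwise (the integral analogue of the Cesàro means, chosen here as one concrete example), the transform $t(x)$ is the average of $s$ over $[0, x]$. The sketch below applies it to the test function $s(y) = \cos y + 1$, which has no ordinary limit as $y \rightarrow \infty$ but has generalized limit $1$ in this sense.

```python
import numpy as np

def t(x, s, n=100000):
    """Transform with the kernel c(x, y) = 1/x for 0 <= y <= x, else 0."""
    y = np.linspace(0.0, x, n, endpoint=False) + x / (2 * n)   # midpoint rule
    return np.mean(s(y))          # equals (1/x) * integral of s over [0, x]

s = lambda y: np.cos(y) + 1.0     # has no ordinary limit as y -> infinity

for x in (10.0, 100.0, 1000.0):
    print(x, t(x, s))             # t(x) -> 1, the generalized limit of s
```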

In the same way, one summation method for improper integrals

$$ \tag{4 } \int\limits _ { 0 } ^ \infty a( t) dt $$

consists in considering the transforms

$$ \gamma ( x) = \int\limits _ { 0 } ^ { x } K( x, t) a( t) dt $$

with a kernel $K(x, t)$; the integral (4) is called summable to the value $s$ if

$$ \lim\limits _ {x \rightarrow \infty } \gamma ( x) = s. $$
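
A concrete instance, with the kernel $K(x, t) = 1 - t/x$ (the Cesàro-type kernel for integrals, chosen here for illustration) and the divergent integral $\int_0^\infty \cos t \, dt$, whose value under this method is $0$:

```python
import numpy as np

def gamma(x, n=200000):
    """Transform of a(t) = cos(t) with the kernel K(x, t) = 1 - t/x."""
    t = np.linspace(0.0, x, n, endpoint=False) + x / (2 * n)   # midpoint rule
    return np.sum((1.0 - t / x) * np.cos(t)) * (x / n)

for x in (10.0, 100.0, 1000.0):
    print(x, gamma(x))   # tends to 0, the generalized value of the divergent integral
```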

The definition of a summation method, introduced for the summation of sequences of numbers and functions, is generalized to include sequences of elements of any set, and a general definition of a summation method can be formulated thus: Let $X$ be a given set, let $s(X)$ be a set of sequences $x = \{\xi_n\}$ with elements $\xi_n \in X$, and let $\overline{A}$ be an operator defined on a subset $A^\star \subset s(X)$ with values in $X$. The pair $(\overline{A}, A^\star)$ is then called a summation method defined on $s(X)$, and $A^\star$ is called its summability field. In this case, a sequence $x \in A^\star$ (or the series $\sum_{k=0}^\infty u_k$ with terms $u_k = \xi_k - \xi_{k-1}$) is said to be summable to the limit $\overline{A}(x)$, where $\overline{A}(x) = \overline{A} x$.
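
The abstract pair $(\overline{A}, A^\star)$ can be modelled directly, for instance as follows; the class name, the tolerance-based membership test and the concrete Cesàro operator are illustrative choices, not part of the definition.

```python
from typing import Callable, Optional, Sequence

class SummationMethod:
    """The pair (A-bar, A*): an operator that returns None outside its summability field."""
    def __init__(self, operator: Callable[[Sequence[float]], Optional[float]]):
        self.operator = operator            # the operator A-bar

    def sum(self, x: Sequence[float]) -> Optional[float]:
        return self.operator(x)             # A-bar(x), defined only on A*

def cesaro(x: Sequence[float], tol: float = 1e-3) -> Optional[float]:
    """Arithmetic means of x; None when they have visibly not settled."""
    means, acc = [], 0.0
    for n, xn in enumerate(x):
        acc += xn
        means.append(acc / (n + 1))
    return means[-1] if abs(means[-1] - means[len(means) // 2]) < tol else None

C1 = SummationMethod(cesaro)
print(C1.sum([1.0, 0.0] * 5000))             # partial sums of 1 - 1 + 1 - ... : about 0.5
print(C1.sum([2.0 ** n for n in range(40)])) # not summed by this method: None
```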

How to Cite This Entry:
Summation methods. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Summation_methods&oldid=48907
This article was adapted from an original article by I.I. Volkov (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article