Mathematical analysis

The part of mathematics in which functions (cf. Function) and their generalizations are studied by the method of limits (cf. Limit). The concept of limit is closely connected with that of an infinitesimal quantity; therefore it could be said that mathematical analysis studies functions and their generalizations by infinitesimal methods.

The name "mathematical analysis" is a short version of the old name of this part of mathematics, "infinitesimal analysis"; the latter more fully describes the content, but even it is an abbreviation (the name "analysis by means of infinitesimals" would characterize the subject more precisely). In classical mathematical analysis the objects of study (analysis) were first and foremost functions. "First and foremost" because the development of mathematical analysis has led to the possibility of studying, by its methods, forms more complicated than functions: functionals, operators, etc.

Everywhere in nature and technology one meets motions and processes which are characterized by functions; the laws of natural phenomena also are usually described by functions. Hence the objective importance of mathematical analysis as a means of studying functions.

Mathematical analysis, in the broad sense of the term, includes a very large part of mathematics. It includes differential calculus; integral calculus; the theory of functions of a real variable (cf. Functions of a real variable, theory of); the theory of functions of a complex variable (cf. Functions of a complex variable, theory of); approximation theory; the theory of ordinary differential equations (cf. Differential equation, ordinary); the theory of partial differential equations (cf. Differential equation, partial); the theory of integral equations (cf. Integral equation); differential geometry; variational calculus; functional analysis; harmonic analysis; and certain other mathematical disciplines. Modern number theory and probability theory use and develop methods of mathematical analysis.

Nevertheless, the term "mathematical analysis" is often used as a name for the foundations of mathematical analysis, which unifies the theory of real numbers (cf. Real number), the theory of limits, the theory of series, differential and integral calculus, and their immediate applications such as the theory of maxima and minima, the theory of implicit functions (cf. Implicit function), Fourier series, and Fourier integrals (cf. Fourier integral).

==Functions.==

Mathematical analysis began with the definition of a function by N.I. Lobachevskii and P.G.L. Dirichlet. If to each number $ x $ from some set $ F $ of numbers there is associated, by some rule, a number $ y $, then this defines a function

$$ y = f ( x) $$

of one variable $ x $. A function of $ n $ variables,

$$ f ( x) = f ( x _ {1} , \dots, x _ {n} ), $$

is defined similarly, where $ x = ( x _ {1} , \dots, x _ {n} ) $ is a point of an $ n $-dimensional space; one also considers functions

$$ f ( x) = f ( x _ {1} , x _ {2} , \dots ) $$

of points $ x = ( x _ {1} , x _ {2} ,\dots) $ of some infinite-dimensional space. These, however, are usually called functionals.

==Elementary functions.==

In mathematical analysis the elementary functions are of fundamental importance. Basically, in practice, one operates with the elementary functions, and more complicated functions are approximated by them. The elementary functions can be considered not only for real but also for complex $ x $; then the conception of these functions becomes, in some sense, complete. In this connection an important branch of mathematics has arisen, called the theory of functions of a complex variable, or the theory of analytic functions (cf. Analytic function).

==Real numbers.==

The concept of a function is essentially founded on the concept of a real (rational or irrational) number. The latter was finally formulated only at the end of the 19th century. In particular, it established a logically irreproachable connection between numbers and points of a geometrical line, which gave a formal foundation for the ideas of R. Descartes (mid 17th century), who introduced into mathematics rectangular coordinate systems and the representation of functions by graphs.

==Limits.==

In mathematical analysis a means of studying functions is the limit. One distinguishes between the limit of a sequence and the limit of a function. These concepts were finally formulated only in the 19th century; however, the idea of a limit had been studied by the ancient Greeks. It suffices to say that Archimedes (3rd century B.C.) was able to calculate the area of a segment of a parabola by a process which one would call a limit transition (see Exhaustion, method of).

==Continuous functions.==

An important class of functions studied in mathematical analysis is formed by the continuous functions (cf. Continuous function). One of the possible definitions of this notion is: A function $ y = f ( x) $, of a variable $ x $ from an open interval $ ( a , b ) $, is called continuous at the point $ x $ if

$$ \lim\limits _ {\Delta x \rightarrow 0 } \ \Delta y = \ \lim\limits _ {\Delta x \rightarrow 0 } \ [ f ( x + \Delta x ) - f ( x) ] = 0 . $$

A function is continuous on the open interval $ ( a , b ) $ if it is continuous at each of its points; its graph is then a curve which is continuous in the everyday sense of the word.
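
The defining limit can be checked numerically. Below is a minimal Python sketch (illustrative only, not part of the original article; the test function $ \sin $, the point $ x = 1 $ and the step sizes are arbitrary choices) that evaluates the increment $ \Delta y $ for shrinking $ \Delta x $:

```python
import math

def increment(f, x, dx):
    """Delta y = f(x + dx) - f(x), the increment entering the definition."""
    return f(x + dx) - f(x)

# For a function continuous at x, the increment tends to 0 with the step.
for dx in (1e-1, 1e-3, 1e-6):
    print(dx, increment(math.sin, 1.0, dx))
```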

==Derivative and differential.==

Among the continuous functions those having a derivative must be distinguished. The derivative of a function

$$ y = f ( x) ,\ \ a < x < b , $$

at a point $ x $ is its rate of change at that point, that is, the limit

$$ \tag{1 } \lim\limits _ {\Delta x \rightarrow 0 } \ \frac{\Delta y }{\Delta x } = \ \lim\limits _ {\Delta x \rightarrow 0 } \ \frac{f ( x + \Delta x ) - f ( x) }{\Delta x } = \ f ^ { \prime } ( x) . $$

If $ y $ is the coordinate at the time $ x $ of a point moving along the coordinate axis, then $ f ^ { \prime } ( x) $ is its instantaneous velocity at the time $ x $.

From the sign of $ f ^ { \prime } $ one can judge the nature of variation of $ f $: If $ f ^ { \prime } > 0 $ ($ f ^ { \prime } < 0 $) in an interval $ ( c , d ) $, then $ f $ is increasing (decreasing) on this interval. If a function attains a local extremum (a maximum or a minimum) at $ x $ and has a derivative at this point, then the latter is equal to zero, $ f ^ { \prime } ( x) = 0 $.
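
As a numerical illustration of (1) (a sketch, not part of the original article; the test function $ \sin $ and the step $ \Delta x = 10^{-6} $ are arbitrary), the difference quotient already approximates $ f ^ { \prime } ( x) $ well for a small finite step, and it nearly vanishes at an extremum:

```python
import math

def difference_quotient(f, x, dx=1e-6):
    """Approximates the limit (1) by a single small finite step."""
    return (f(x + dx) - f(x)) / dx

# sin' = cos; compare the quotient with the exact derivative at x = 1.
print(difference_quotient(math.sin, 1.0), math.cos(1.0))

# At a local extremum the derivative vanishes: sin has a maximum at pi/2.
print(difference_quotient(math.sin, math.pi / 2))  # close to 0
```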

The equality (1) can be replaced by the equivalent equality

$$ \frac{\Delta y }{\Delta x } = \ f ^ { \prime } ( x) + \epsilon ( \Delta x ) ,\ \ \epsilon ( \Delta x ) \rightarrow 0 \textrm{ as } \Delta x \rightarrow 0 , $$

or

$$ \Delta y = f ^ { \prime } ( x) \Delta x + \Delta x \epsilon ( \Delta x ) , $$

where $ \epsilon ( \Delta x ) $ is an infinitesimal as $ \Delta x \rightarrow 0 $; that is, if $ f $ has a derivative at $ x $, then its increment at this point decomposes into two terms. The first

$$ \tag{2 } d y = f ^ { \prime } ( x) \Delta x $$

is a linear function of $ \Delta x $ (is proportional to $ \Delta x $), while the second term tends to zero more rapidly than $ \Delta x $.

The quantity (2) is called the differential of the function corresponding to the increment $ \Delta x $. For small $ \Delta x $ it is possible to regard $ \Delta y $ as approximately equal to $ d y $:

$$ \Delta y \approx d y . $$
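
A quick numerical sketch of the approximation $ \Delta y \approx d y $ (illustrative Python; the test function $ e ^ {x} $, the point and the increment are arbitrary choices): the discrepancy $ \Delta y - d y $ is of higher order in $ \Delta x $.

```python
import math

x, dx = 1.0, 1e-3
f = fprime = math.exp        # (e^x)' = e^x

dy = fprime(x) * dx          # the differential (2)
delta_y = f(x + dx) - f(x)   # the true increment

# delta_y - dy is of order dx**2, i.e. o(dx).
print(delta_y, dy, delta_y - dy)
```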

These arguments about differentials are characteristic of mathematical analysis. They have been extended to functions of several variables and to functionals.

For example, if a function

$$ z = f ( x _ {1} , \dots, x _ {n} ) = f ( x) $$

of $ n $ variables has continuous partial derivatives (cf. Partial derivative) at a point $ x = ( x _ {1} , \dots, x _ {n} ) $, then its increment $ \Delta z $ corresponding to increments $ \Delta x _ {1} , \dots, \Delta x _ {n} $ of the independent variables can be written in the form

$$ \tag{3 } \Delta z = \sum _ {k=1} ^ {n} \frac{\partial f }{\partial x _ {k} } \Delta x _ {k} + \sqrt {\sum _ {k=1} ^ {n} \Delta x _ {k} ^ {2} } \; \epsilon ( \Delta x ) , $$

where $ \epsilon ( \Delta x ) \rightarrow 0 $ as $ \Delta x = ( \Delta x _ {1} , \dots, \Delta x _ {n} ) \rightarrow 0 $, that is, if all $ \Delta x _ {k} \rightarrow 0 $. Here the first term on the right-hand side in (3) is the differential $ d z $ of $ f $. It depends linearly on $ \Delta x $ and the second term tends to zero more rapidly than $ \Delta x $ as $ \Delta x \rightarrow 0 $.
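
The two terms of (3) can again be compared numerically. A Python sketch (illustrative; the particular $ f $, the point and the increments are arbitrary choices, and the exact partial derivatives of this $ f $ are used):

```python
import math

def f(x1, x2):
    return x1 ** 2 + math.sin(x2)

def grad_f(x1, x2):
    # Exact partial derivatives of this particular f.
    return (2 * x1, math.cos(x2))

x, dx = (1.0, 0.5), (1e-3, -2e-3)

dz = sum(g * d for g, d in zip(grad_f(*x), dx))   # first term of (3): the differential
delta_z = f(x[0] + dx[0], x[1] + dx[1]) - f(*x)   # the true increment

norm = math.sqrt(sum(d ** 2 for d in dx))
print(delta_z, dz, (delta_z - dz) / norm)  # last ratio is small: the remainder is o(|dx|)
```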

Suppose one is given a functional (see Variational calculus)

$$ J ( x) = \int\limits _ { t _ {0} } ^ { t _ {1} } L ( t , x , x ^ \prime ) \, d t , $$

extended over the class $ \mathfrak M $ of functions $ x $ having continuous derivatives on the closed interval $ [ t _ {0} , t _ {1} ] $ and satisfying the boundary conditions $ x ( t _ {0} ) = x _ {0} $, $ x ( t _ {1} ) = x _ {1} $, where $ x _ {0} $ and $ x _ {1} $ are given numbers. Let, further, $ \mathfrak M _ {0} $ be the class of functions $ h $ having continuous derivatives on $ [ t _ {0} , t _ {1} ] $ and such that $ h ( t _ {0} ) = h ( t _ {1} ) = 0 $. Obviously, if $ x \in \mathfrak M $ and $ h \in \mathfrak M _ {0} $, then $ x + h \in \mathfrak M $.

In variational calculus it has been proved that under certain conditions on $ L $ the increment of $ J ( x) $ can be written in the form

$$ \tag{4 } J ( x + h ) - J ( x) = \int\limits _ { t _ {0} } ^ { t _ {1} } \left ( \frac{\partial L }{\partial x } - \frac{d }{d t } \left ( \frac{\partial L }{\partial x ^ \prime } \right ) \right ) h ( t) \, d t + o ( \| h \| ) $$

as $ \| h \| \rightarrow 0 $, where

$$ \| h \| = \ \max _ {t _ {0} \leq t \leq t _ {1} } \ | h ( t) | + \max _ {t _ {0} \leq t \leq t _ {1} } \ | h ^ \prime ( t) | , $$

and, thus, the second term on the right-hand side of (4) tends to zero more rapidly than $ \| h \| $, whereas the first term depends linearly on $ h \in \mathfrak M _ {0} $. The first term in (4) is called the variation of the functional $ J ( x) $ and is denoted by $ \delta J ( x , h ) $.
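
Formula (4) can be probed numerically as well. The following Python sketch is an illustration under arbitrary choices (none from the original article): $ L = x ^ {\prime 2 } + x ^ {2} $ on $ [ 0 , 1 ] $, $ x ( t) = t $, $ h ( t) = t ( 1 - t ) $, with crude Riemann sums and forward differences. It compares the increment $ J ( x + \epsilon h ) - J ( x) $ with $ \epsilon $ times the first variation.

```python
def J(x, n=2000):
    """J(x) = integral over [0,1] of L(t, x, x') dt for L = x'**2 + x**2,
    by a Riemann sum with a forward-difference derivative."""
    dt = 1.0 / n
    return sum(
        (((x((j + 1) * dt) - x(j * dt)) / dt) ** 2 + x(j * dt) ** 2) * dt
        for j in range(n)
    )

x = lambda t: t               # admissible: x(0) = 0, x(1) = 1
h = lambda t: t * (1.0 - t)   # vanishes at both endpoints, so x + eps*h stays admissible

# For this L the Euler-Lagrange integrand is 2x - 2x''; here x'' = 0, so the
# first variation is the integral of 2t * h(t) over [0,1], equal to 1/6.
m = 4000
variation = sum(2 * (j / m) * h(j / m) / m for j in range(m))

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, J(lambda t, e=eps: x(t) + e * h(t)) - J(x), eps * variation)
# The increment agrees with eps * variation up to o(eps) terms.
```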

==Integrals.==

Side by side with the derivative, the integral has a fundamental significance in mathematical analysis. One distinguishes indefinite and definite integrals.

The indefinite integral is closely connected with primitive functions. A function $ F $ is called a primitive function of a function $ f $ on the interval $ ( a , b ) $ if, on this interval, $ F ^ { \prime } = f $.

The definite (Riemann) integral of a function $ f $ on an interval $ [ a , b ] $ is the limit

$$ \lim\limits \sum _ {j=0} ^ {N-1} f ( \xi _ {j} ) ( x _ {j+1} - x _ {j} ) = \int\limits _ { a } ^ { b } f ( x) \, d x $$

as $ \max _ {j} ( x _ {j+1} - x _ {j} ) \rightarrow 0 $; here $ a = x _ {0} < x _ {1} < \dots < x _ {N} = b $, and the points $ \xi _ {j} $, $ x _ {j} \leq \xi _ {j} \leq x _ {j+1} $, are arbitrary.
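
A direct numerical rendering of this definition (a Python sketch; the integrand $ x ^ {2} $ on $ [ 0 , 1 ] $, the uniform partition and the random tags are arbitrary illustrative choices) shows the sums settling on the integral whatever tags $ \xi _ {j} $ are taken:

```python
import random

def riemann_sum(f, a, b, n=100_000):
    """Sum of f(xi_j) (x_{j+1} - x_j) over a uniform partition,
    with the tag xi_j drawn at random from each subinterval."""
    dx = (b - a) / n
    return sum(f(a + (j + random.random()) * dx) * dx for j in range(n))

# The integral of x**2 over [0, 1] is 1/3, independently of the tags.
print(riemann_sum(lambda x: x * x, 0.0, 1.0))
```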

If $ f $ is positive and continuous on $ [ a , b ] $, its integral on this segment is equal to the area of the figure bounded by the curve $ y = f ( x) $, the $ x $-axis and the lines $ x = a $ and $ x = b $.

The class of Riemann-integrable functions contains all continuous functions on $ [ a , b ] $ and some discontinuous ones. But they are all necessarily bounded. For a slowly-growing unbounded function, and also for certain functions on unbounded intervals, the so-called improper integral has been introduced, requiring a double limit transition in its definition.

The concept of a Riemann integral of a function of one variable can be extended to functions of several variables (see Multiple integral).

On the other hand, the needs of mathematical analysis have led to a generalization of the integral in quite another direction, in the form of the Lebesgue integral or, more generally, the Lebesgue–Stieltjes integral. Essential in the definition of these integrals is the introduction for certain sets, called measurable, of their measure and, on this foundation, the notion of a measurable function. For measurable functions the Lebesgue–Stieltjes integral has been introduced. In this connection a broad range of different measures has been considered, together with the associated classes of measurable sets and functions. This provides an opportunity to adapt this or that integral to a definite concrete problem.

==Newton–Leibniz formula.==

There is a connection between derivatives and integrals, expressed by the Newton–Leibniz formula (theorem):

$$ \int\limits _ { a } ^ { b } f ( x) d x = F ( b) - F ( a) . $$

Here $ f $ is a continuous function on $ [ a , b ] $ and $ F $ is its primitive function.
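
The formula is easy to confirm numerically (an illustrative Python sketch; $ f = \sin $ on $ [ 0 , \pi ] $ with primitive $ F = - \cos $ is an arbitrary choice):

```python
import math

a, b = 0.0, math.pi
f = math.sin
F = lambda x: -math.cos(x)   # a primitive of sin

# Left-endpoint Riemann sum for the integral of f over [a, b].
n = 100_000
dx = (b - a) / n
integral = sum(f(a + j * dx) for j in range(n)) * dx

print(integral, F(b) - F(a))   # both close to 2
```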

==Taylor's formulas and series.==

Along with derivatives and integrals, the most important ideas (research tools) in mathematical analysis are the Taylor formula and Taylor series. If a function $ f ( x) $, $ a < x < b $, has continuous derivatives up to and including order $ n $ in a neighbourhood of a point $ x _ {0} $, then it can be approximated in this neighbourhood by the polynomial

$$ P _ {n} ( x) = \ f ( x _ {0} ) + \frac{f ^ { \prime } ( x _ {0} ) }{1 ! } ( x - x _ {0} ) + \dots + \frac{f ^ { ( n) } ( x _ {0} ) }{n ! } ( x - x _ {0} ) ^ {n} , $$

called its Taylor polynomial (of degree $ n $), in powers of $ x - x _ {0} $:

$$ f ( x) \approx P _ {n} ( x) $$

(Taylor's formula); here the error of approximation,

$$ R _ {n} ( x) = f ( x) - P _ {n} ( x), $$

tends to zero faster than $ ( x - x _ {0} ) ^ {n} $ as $ x \rightarrow x _ {0} $:

$$ R _ {n} ( x) = o ( ( x - x _ {0} ) ^ {n} ) \ \textrm{ as } x \rightarrow x _ {0} . $$

Thus, in a neighbourhood of $ x _ {0} $, $ f $ can be approximated to any degree of accuracy by very simple functions (polynomials), which for their calculation require only the arithmetic operations of addition, subtraction and multiplication.
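
A small Python sketch of this approximation (illustrative only; $ \exp $ about $ x _ {0} = 0 $ is an arbitrary choice, convenient because every derivative of $ \exp $ is again $ \exp $) shows the remainder $ R _ {n} $ shrinking as the degree grows:

```python
import math

def taylor_exp(x, x0=0.0, n=5):
    """Taylor polynomial of degree n for exp about x0."""
    return sum(math.exp(x0) * (x - x0) ** k / math.factorial(k)
               for k in range(n + 1))

x = 0.5
for n in (1, 2, 5, 10):
    p = taylor_exp(x, n=n)
    print(n, p, math.exp(x) - p)   # the remainder R_n(x) shrinks rapidly with n
```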

Of special importance are the so-called analytic functions in a fixed neighbourhood of $ x _ {0} $; they have an infinite number of derivatives, $ R _ {n} ( x) \rightarrow 0 $ as $ n \rightarrow \infty $ in the neighbourhood, and they may be represented by the infinite Taylor series

$$ f ( x) = f ( x _ {0} ) + \frac{f ^ { \prime } ( x _ {0} ) }{1 ! } ( x - x _ {0} ) + \dots . $$

Taylor expansions are also possible, under certain conditions, for functions of several variables, functionals and operators.

==Historical information.==

Up to the 17th century mathematical analysis was a collection of solutions to disconnected particular problems; for example, in the integral calculus, the problems of the calculation of the areas of figures, the volumes of bodies with curved boundaries, the work done by a variable force, etc. Each problem, or special group of problems, was solved by its own method, sometimes complicated and tedious and sometimes even brilliant (regarding the prehistory of mathematical analysis see Infinitesimal calculus). Mathematical analysis as a unified and systematic whole was put together in the works of I. Newton, G. Leibniz, L. Euler, J.L. Lagrange, and other scholars of the 17th and 18th centuries, and its foundation, the theory of limits, was laid by A.L. Cauchy at the beginning of the 19th century. A deep analysis of the original ideas of mathematical analysis was connected with the development in the 19th and 20th centuries of set theory, measure theory and the theory of functions of a real variable, and has led to a variety of generalizations.

====References====

[1] Ch.J. de la Vallée-Poussin, "Cours d'analyse infinitésimale", 1–2, Libraire Univ. Louvain (1923–1925)
[2] V.A. Il'in, E.G. Poznyak, "Fundamentals of mathematical analysis", 2, MIR (1982) (Translated from Russian)
[3] V.A. Il'in, V.A. Sadovnichii, B.Kh. Sendov, "Mathematical analysis", Moscow (1979) (In Russian)
[4] L.D. Kudryavtsev, "A course in mathematical analysis", 1–3, Moscow (1988–1989) (In Russian)
[5] S.M. Nikol'skii, "A course of mathematical analysis", 1–2, MIR (1977) (Translated from Russian)
[6] E.T. Whittaker, G.N. Watson, "A course of modern analysis", Cambridge Univ. Press (1952) pp. Chapt. 6
[7] G.M. Fichtenholz, "Differential und Integralrechnung", 1–3, Deutsch. Verlag Wissenschaft. (1964)

====Comments====

In 1961, A. Robinson provided truly infinitesimal methods in analysis with a logical foundation, and so vindicated the founders of the calculus, Leibniz in particular, against the now usual "$ \epsilon $-$ \delta $" analysis. This old-new way of looking at analysis has been spreading for some twenty years and may become of prime importance in a few years. See [a4] and Non-standard analysis.

====References====

[a1] E.A. Bishop, "Foundations of constructive analysis" , McGraw-Hill (1967)
[a2] G.E. Shilov, "Mathematical analysis" , 1–2 , M.I.T. (1974) (Translated from Russian)
[a3] R. Courant, H. Robbins, "What is mathematics?" , Oxford Univ. Press (1980)
[a4] N. Cutland (ed.) , Nonstandard analysis and its applications , Cambridge Univ. Press (1988)
[a5] G.H. Hardy, "A course of pure mathematics" , Cambridge Univ. Press (1975)
[a6] E.C. Titchmarsh, "The theory of functions" , Oxford Univ. Press (1979)
[a7] W. Rudin, "Principles of mathematical analysis" , McGraw-Hill (1976) pp. 75–78
[a8] K.R. Stromberg, "Introduction to classical real analysis" , Wadsworth (1981)
This article was adapted from an original article by S.M. Nikol'skii (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.