{{TEX|done}}
''in the theory of differential equations''
 
 
==1. The method of the small parameter for ordinary differential equations.==
 
Ordinary differential equations arising from applied problems usually contain one or more parameters. Parameters may also occur in the initial data or boundary conditions. Since an exact solution of a differential equation can only be found in very special isolated cases, the problem of constructing approximate solutions arises. A typical scenario is: the equation and the initial (boundary) conditions contain a parameter $\lambda$ and the solution is known (or may be assumed known) for $\lambda = \lambda_0$; the requirement is to construct an approximate solution for values $\lambda$ close to $\lambda_0$, that is, to construct an asymptotic solution as $\epsilon \rightarrow 0$, where $\epsilon = \lambda - \lambda_0$ is a "small" parameter. The method of the small parameter arises, e.g., in the [[Three-body problem|three-body problem]] of celestial mechanics, which goes back to J. d'Alembert, and was intensively developed at the end of the 19th century.
  
The following notations are used below: $t$ is an independent variable, $\epsilon > 0$ is a small parameter, $I$ is an interval $0 \leq t \leq T$, and the sign $\sim$ denotes asymptotic equality. All vector and matrix functions which appear in equations and boundary conditions are assumed to be smooth (of class $C^\infty$) with respect to all variables in their domain (with respect to $\epsilon$ for $0 \leq \epsilon \leq \epsilon_0$ or $|\epsilon| \leq \epsilon_0$).
  
1) The [[Cauchy problem|Cauchy problem]] for an $n$-th order system:
  
$$ \tag{1}
\dot{x} = f(t, x, \epsilon), \qquad x(0) = x_0(\epsilon).
$$
  
Let the solution $\phi_0(t)$ of the limit problem (that is, (1) with $\epsilon = 0$) exist and be unique for $t \in I$. Then there is an asymptotic expansion for the solution $x(t, \epsilon)$ of (1) as $\epsilon \rightarrow 0$,
  
$$ \tag{2}
x(t, \epsilon) \sim \phi_0(t) + \sum_{j=1}^\infty \epsilon^j \phi_j(t),
$$
  
which holds uniformly with respect to $t \in I$. This follows from the theorem on the smooth dependence on the parameter of the solution of a system of ordinary differential equations. If the vector functions $f$ and $x_0$ are holomorphic for $|\epsilon| \leq \epsilon_0$, $x = \phi_0(t)$, $t \in I$, then the series in (2) converges to a solution $x(t, \epsilon)$ for sufficiently small $|\epsilon|$ uniformly relative to $t \in I$ (Poincaré's theorem). Similar results hold for boundary value problems for systems of the form (1), if the solution of the corresponding limit problem exists and is unique.
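
The expansion (2) can be illustrated on a model problem (a hypothetical example, not from the article): for $\dot{x} = -x + \epsilon x^2$, $x(0) = 1$, the limit problem gives $\phi_0(t) = e^{-t}$, the first correction solves $\dot{\phi}_1 = -\phi_1 + \phi_0^2$, $\phi_1(0) = 0$, and the Bernoulli substitution $u = 1/x$ gives an exact solution to compare against. A minimal numerical sketch:

```python
import math

# Model problem for the regular expansion (2) (hypothetical example):
#   x' = -x + eps*x^2,  x(0) = 1.
# Limit problem (eps = 0): phi0(t) = exp(-t).
# First correction: phi1' = -phi1 + phi0^2, phi1(0) = 0
#   => phi1(t) = exp(-t) - exp(-2t).
# The substitution u = 1/x gives u' = u - eps, hence the exact solution
#   x(t, eps) = 1 / (eps + (1 - eps)*exp(t)).

def x_exact(t, eps):
    return 1.0 / (eps + (1.0 - eps) * math.exp(t))

def x_asymptotic(t, eps):
    phi0 = math.exp(-t)
    phi1 = math.exp(-t) - math.exp(-2.0 * t)
    return phi0 + eps * phi1

# The one-term correction is uniformly accurate on I = [0, 2]
# with error O(eps^2) as eps -> 0.
for eps in (0.1, 0.01):
    err = max(abs(x_exact(t / 10, eps) - x_asymptotic(t / 10, eps))
              for t in range(21))
    print(f"eps = {eps:5g}: max error on [0, 2] = {err:.2e}")
```

Halving nothing but $\epsilon$ (from $0.1$ to $0.01$) shrinks the error by roughly a factor $100$, consistent with the $O(\epsilon^2)$ remainder.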
  
One distinguishes two forms of dependence of equations (or systems) on a small parameter: regular and singular. A system in normal form depends regularly on $\epsilon$ if all its right-hand sides are smooth functions of $\epsilon$ for small $\epsilon \geq 0$; otherwise the system depends singularly on $\epsilon$. When the system depends regularly on $\epsilon$, the solution of the problem with a parameter, as a rule, converges uniformly on a finite $t$-interval as $\epsilon \rightarrow 0$ to a solution of the limit problem.
  
2) In the linear theory one considers $n$-th order systems which depend singularly on $\epsilon$:
  
$$
\epsilon \dot{x} = A(t, \epsilon) x + f(t, \epsilon),
$$
  
where the entries of the $(n \times n)$-matrix $A$ and the components of the vector $f$ are complex-valued functions. The central problem in the linear theory is the construction of a [[Fundamental system of solutions|fundamental system of solutions]] of the homogeneous system (that is, for $f \equiv 0$), the asymptotic behaviour of which as $\epsilon \rightarrow 0$ is known throughout the interval $I$.
  
The basic result of the linear theory is the following theorem of Birkhoff. Let: 1) the eigenvalues $\lambda_j(t, 0)$, $1 \leq j \leq n$, of $A(t, 0)$ be distinct for $t \in I$; and 2) the quantities
  
$$
\mathop{\rm Re} (\lambda_j(t, 0) - \lambda_k(t, 0)), \qquad 1 \leq j, k \leq n, \quad j \neq k,
$$
  
not change sign. Then there is a fundamental system of solutions $x_1(t, \epsilon), \dots, x_n(t, \epsilon)$ of the homogeneous system
  
$$
\epsilon \dot{x} = A(t, \epsilon) x
$$
  
for which there is the following asymptotic expansion as $\epsilon \rightarrow 0$:
  
$$ \tag{3}
x_j(t, \epsilon) \sim \exp \left[ \epsilon^{-1} \int\limits_{t_0}^{t} \lambda_j(\tau, \epsilon) \, d\tau \right] \sum_{k=0}^\infty \epsilon^k \phi_{kj}(t),
$$
  
$$
1 \leq j \leq n.
$$
  
This expansion is uniform relative to $t \in I$ and can be differentiated any number of times with respect to $t$ and $\epsilon$. If $A$ does not depend on $\epsilon$, that is, $A = A(t)$, then
  
$$
\phi_{0j}(t) = \exp \left[ - \int\limits_{t_0}^{t} \left( e_j^{*}(\tau), \frac{d e_j(\tau)}{d\tau} \right) d\tau \right] e_j(t),
$$

where $e_j^{*}$, $e_j$ are left and right eigenvectors of $A(t)$ normalized by

$$
(e_j^{*}(t), e_j(t)) \equiv 1, \qquad t \in I.
$$
  
 
Solutions having asymptotic behaviour of the form (3) are called WKB solutions (see [[WKB method|WKB method]]). The qualitative structure of these solutions is as follows. If
 
  
$$
\mathop{\rm Re} \lambda_j(t) < 0 \quad [\mathop{\rm Re} \lambda_j(t) > 0] \quad \textrm{ for } t \in I,
$$
  
then $x_j$ is a vector function of boundary-layer type for $t_0 = 0$ ($t_0 = T$), that is, it is noticeably different from zero only in an $\epsilon$-neighbourhood of $t = 0$ ($t = T$). If, however, $\mathop{\rm Re} \lambda_j(t) \equiv 0$, $t \in I$, then $x_j$ strongly oscillates as $\epsilon \rightarrow +0$ and has order $O(1)$ on the whole interval $I$.
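
This dichotomy is visible already in the scalar model $\epsilon \dot{x} = \lambda x$, $x(0) = 1$ (a hypothetical one-dimensional illustration, not from the article), whose WKB solution $e^{\lambda t / \epsilon}$ is exact:

```python
import cmath

# Scalar model eps*x' = lam*x, x(0) = 1: the WKB form (3) is exact here,
#   x(t) = exp(lam * t / eps).

def wkb_solution(t, lam, eps):
    return cmath.exp(lam * t / eps)

eps = 0.01

# Re(lam) < 0: boundary-layer type -- noticeable only in an
# eps-neighbourhood of t = 0.
print(abs(wkb_solution(eps, -1.0, eps)))   # e^{-1} already at t = eps
print(abs(wkb_solution(0.5, -1.0, eps)))   # e^{-50}, essentially zero

# Re(lam) = 0: strong oscillation with frequency ~ 1/eps,
# but the modulus stays of order O(1) on the whole interval.
print(abs(wkb_solution(0.5, 1j, eps)))
```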
  
If $A(t, \epsilon)$ is a holomorphic matrix function for $|t| \leq t_0$, $|\epsilon| \leq \epsilon_0$ and condition 1) is satisfied, then (3) is valid for $\epsilon \rightarrow +0$, $0 \leq t \leq t_1$, where $t_1 > 0$ is sufficiently small. A difficult problem is the construction of asymptotics for fundamental systems of solutions in the presence of turning points on $I$, that is, points at which $A(t, 0)$ has multiple eigenvalues. This problem has been completely solved only for special types of turning points (see [[#References|[1]]]). In a neighbourhood of a turning point there is a domain of transition in which the solution is rather complicated and in the simplest case is expressed by an Airy function (cf. [[Airy functions|Airy functions]]).
  
 
Similar results (see [[#References|[1]]], [[#References|[17]]]) are valid for scalar equations of the form
 
  
$$
\epsilon^n x^{(n)} + \sum_{j=0}^{n-1} \epsilon^j a_j(t, \epsilon) x^{(j)} = 0,
$$
  
where the $a_j$ are complex-valued functions; the roles of the functions $\lambda_j(t, \epsilon)$ are played by the roots of the characteristic equation
  
$$
\lambda^n + \sum_{j=0}^{n-1} \lambda^j a_j(t, \epsilon) = 0.
$$
  
 
WKB solutions also arise in non-linear systems of the form
 
  
$$
\epsilon \dot{x} = f(t, x, \epsilon), \qquad x \in \mathbf R^n.
$$
  
The WKB asymptotic expansion (3), under the conditions of Birkhoff's theorem, is valid in an infinite interval $0 \leq t < \infty$ (that is, (3) is asymptotic both as $\epsilon \rightarrow 0$ and as $t \rightarrow \infty$) if $A(t, \epsilon)$ is sufficiently well behaved as $t \rightarrow +\infty$, for example, if it rapidly converges to a constant matrix with distinct eigenvalues (see [[#References|[2]]]). Many questions of spectral analysis (see [[#References|[3]]]) and mathematical physics reduce to singular problems with a small parameter.
  
 
3) Of particular interest is the investigation of non-linear systems of the form
 
  
$$ \tag{4}
\left .
\begin{array}{c}
\epsilon \dot{x} = f(x, y), \quad x(0) = x_0, \quad x \in \mathbf R^{n}, \\
\dot{y} = g(x, y), \quad y(0) = y_0, \quad y \in \mathbf R^{m}, \\
\end{array}
\right \}
$$
  
where $\epsilon > 0$ is a small parameter. The first equation describes fast motions, the second slow motions. For example, the [[Van der Pol equation|van der Pol equation]] reduces by the substitution
  
$$
y = \int\limits_{0}^{x} (x^{2} - 1) \, dx + \frac{1}{\lambda} \frac{dx}{dt}, \qquad t_{1} = \frac{t}{\lambda}, \qquad \epsilon = \frac{1}{\lambda^{2}},
$$

for $\lambda$ large, to the system

$$
\epsilon \dot{x} = y - \frac{1}{3} x^{3} + x, \qquad \dot{y} = -x,
$$
  
 
which is of the form (4).
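
The substitution can be verified numerically: the sketch below (assuming a modest $\lambda = 2$, so that neither system is stiff) integrates the original van der Pol equation and the transformed fast-slow system with a classical Runge-Kutta step and compares the two $x$-trajectories.

```python
# Numerical check (hypothetical, not from the article) that the substitution
# above turns the van der Pol equation
#     x'' - lam*(1 - x^2)*x' + x = 0
# into the fast-slow system (4):
#     eps * dx/dt1 = y - x^3/3 + x,   dy/dt1 = -x,
# with t1 = t/lam and eps = 1/lam^2.

def rk4(f, u, t, h):
    """One classical Runge-Kutta step for u' = f(t, u), u a list."""
    k1 = f(t, u)
    k2 = f(t + h / 2, [a + h / 2 * b for a, b in zip(u, k1)])
    k3 = f(t + h / 2, [a + h / 2 * b for a, b in zip(u, k2)])
    k4 = f(t + h, [a + h * b for a, b in zip(u, k3)])
    return [a + h / 6 * (b + 2 * c + 2 * d + e)
            for a, b, c, d, e in zip(u, k1, k2, k3, k4)]

lam = 2.0
eps = 1.0 / lam**2

def vdp(t, u):            # original equation as a first-order system
    x, v = u
    return [v, lam * (1 - x**2) * v - x]

def fast_slow(t1, u):     # transformed system, independent variable t1 = t/lam
    x, y = u
    return [(y - x**3 / 3 + x) / eps, -x]

x0, v0 = 2.0, 0.0
y0 = v0 / lam + x0**3 / 3 - x0          # y(0) from the substitution

T, n = 5.0, 5000
u1, u2 = [x0, v0], [x0, y0]
for i in range(n):
    u1 = rk4(vdp, u1, i * T / n, T / n)
    u2 = rk4(fast_slow, u2, i * T / n / lam, T / n / lam)

print(abs(u1[0] - u2[0]))   # the two x-trajectories agree at t = T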
  
For $\epsilon = 0$ the equation of fast motion degenerates to the equation $f(x, y) = 0$. In some closed bounded domain $D$ of the variable $y$, let this equation have an isolated stable continuous root $x = \phi(y)$ (that is, the real parts of the eigenvalues of the Jacobi matrix $\partial f / \partial x$ are negative for $x = \phi(y)$, $y \in D$); suppose that solutions of (4) and of the degenerate problem
  
$$ \tag{5 }
x = \phi ( y),\ \ \dot{y} = g ( x, y),\ \ y ( 0) = y _ {0} ,
$$
  
exist and are unique for $ t \in I $, and let the function $ \overline{y} $ obtained as the solution of (5) satisfy $ \overline{y} ( t) \in D $ for $ t \in I $. If $ ( x _ {0} , y _ {0} ) $ is in the domain of influence of the root $ x = \phi ( y) $, then
  
$$
x ( t, \epsilon ) \rightarrow \overline{x} ( t),\ \ 0 < t \leq T,
$$

$$
y ( t, \epsilon ) \rightarrow \overline{y} ( t),\ \ 0 \leq t \leq T,
$$
  
as $ \epsilon \rightarrow 0 $, where $ ( \overline{x} , \overline{y} ) $ is the solution of the degenerate problem (Tikhonov's theorem). Close to $ t = 0 $ the limit transition $ x ( t, \epsilon ) \rightarrow \overline{x} ( t) $ is non-uniform: a [[Boundary layer|boundary layer]] occurs. For problem (4) there is the following asymptotic expansion for the solution:
  
$$ \tag{6 }
x ( t, \epsilon ) \sim \sum _ {k = 0 } ^ \infty \epsilon ^ {k} x _ {k} ( t) + \sum _ {k = 0 } ^ \infty \epsilon ^ {k} \Pi _ {k} \left ( \frac{t}{\epsilon} \right ) ,
$$
  
and the asymptotic expansion for $ y ( t, \epsilon ) $ has a similar form. In (6) the first sum is the regular part and the second sum is the boundary layer. The regular part of the asymptotic expansion is calculated by standard means: series of the form (2) are substituted into (4), the right-hand sides are expanded as power series in $ \epsilon $, and the coefficients of equal powers of $ \epsilon $ are equated. For the calculation of the boundary-layer part of the asymptotic expansion one introduces a new variable $ \tau = t/ \epsilon $ (the fast time) in a neighbourhood of $ t = 0 $ and applies the above procedure. There is an interval on the $ t $-axis on which both the regular (or outer) expansion and the boundary-layer (or inner) expansion are useful. The functions $ x _ {k} $, $ \Pi _ {k} $ are defined by the coincidence of these expansions (the so-called method of matching, see [[#References|[4]]], [[#References|[5]]]).
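The leading terms of an expansion of the form (6) can be checked on a simple linear model (this example is not from the article; the system and all names are chosen for illustration): for $ \epsilon \dot{x} = y - x $, $ \dot{y} = - x $ the degenerate solution is $ \overline{x} = \overline{y} = y _ {0} e ^ {-t} $ and the leading boundary-layer term is $ \Pi _ {0} ( t/ \epsilon ) = ( x _ {0} - y _ {0} ) e ^ {- t/ \epsilon } $; the composite approximation is then uniformly $ O ( \epsilon ) $ accurate, while the outer part alone misses the initial value by $ | x _ {0} - y _ {0} | $.

```python
import math

# Model of the form (4):  eps*x' = y - x,  y' = -x  (illustrative, not from the article).
# Degenerate problem:  xbar = ybar,  ybar' = -ybar  =>  xbar(t) = ybar(t) = y0*exp(-t),
# which is accurate up to O(eps).  Leading boundary-layer term:
#   Pi0(t/eps) = (x0 - y0)*exp(-t/eps).
eps = 0.01
dt = eps / 10                      # resolve the fast time scale

def rhs(x, y):
    return (y - x) / eps, -x

x0, y0 = 0.0, 1.0
x, y, t = x0, y0, 0.0
err_outer_t0 = abs(x0 - y0)        # error of the outer solution alone at t = 0
err_composite = 0.0
while t < 2.0:
    k1 = rhs(x, y)
    k2 = rhs(x + dt/2*k1[0], y + dt/2*k1[1])
    k3 = rhs(x + dt/2*k2[0], y + dt/2*k2[1])
    k4 = rhs(x + dt*k3[0], y + dt*k3[1])
    x += dt/6 * (k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
    y += dt/6 * (k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
    t += dt
    composite = y0 * math.exp(-t) + (x0 - y0) * math.exp(-t / eps)
    err_composite = max(err_composite, abs(x - composite))
# err_composite stays O(eps) uniformly in t; the outer solution alone is off by 1 at t = 0.
```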
  
Similar results hold when the right-hand side of (4) depends explicitly on $ t $, for scalar equations of the form
  
$$ \tag{7 }
\epsilon x ^ {( n) } = f ( t, x, \dot{x} ,\dots, x ^ {( n - 1) } )
$$
  
 
and for boundary value problems for such systems and equations (see [[Differential equations with small parameter|Differential equations with small parameter]], [[#References|[6]]], [[#References|[7]]]).
  
For approximation of the solution of (4) at a break point, where stability is lost (for example, where one of the eigenvalues of $ \partial f/ \partial x $ for $ x = \phi ( y) $ vanishes), series of the form (6) lose their asymptotic character. In a neighbourhood of a break point the asymptotic expansion has quite a different character (see [[#References|[8]]]). The investigation of a neighbourhood of a break point is particularly essential for the construction of the asymptotic theory of relaxation oscillations (cf. [[Relaxation oscillation|Relaxation oscillation]]).
  
4) Problems in celestial mechanics and non-linear oscillation theory lead, in particular, to the necessity of investigating the behaviour of solutions of (1) not on a finite interval but on a large $ t $-interval, of the order of $ \epsilon ^ {-1} $ or higher. For these problems a method of averaging is widely applied (see [[Krylov–Bogolyubov method of averaging|Krylov–Bogolyubov method of averaging]]; [[Small denominators|Small denominators]], [[#References|[9]]]–[[#References|[11]]]).
  
 
5) Asymptotic behaviour of solutions of equations of the form (7) has been investigated, in particular, with the help of the so-called method of multiple scales (see [[#References|[4]]], [[#References|[5]]]); this method is a generalization of the WKB method. An example of the method has been given using the scalar equation
  
$$ \tag{8 }
\epsilon ^ {2} \ddot{x} + f ( t, x) = 0,
$$
  
 
which has a periodic solution (see [[#References|[12]]]). The solution is sought for in the form
  
$$ \tag{9 }
x = \phi ( T, t, \epsilon ) \sim \sum _ {j = 0 } ^ \infty \epsilon ^ {j} \phi _ {j} ( T, t),\ \ T = \frac{S ( t) }{\epsilon } .
$$

(The functions $ T, t $ are called the scales.) If (8) is linear, then $ \phi _ {j} = e ^ {T} \psi _ {j} ( t) $ and (9) is a WKB solution. In the non-linear case the equations of the first two approximations take the form

$$
\dot{S} {} ^ {2} \frac{\partial ^ {2} \phi _ {0} }{\partial T ^ {2} } + f ( t, \phi _ {0} ) = 0,
$$

$$
\dot{S} {} ^ {2} \frac{\partial ^ {2} \phi _ {1} }{\partial T ^ {2} } + \frac{\partial f ( t, \phi _ {0} ) }{\partial x } \phi _ {1} = - 2 \dot{S} \frac{\partial ^ {2} \phi _ {0} }{\partial t \partial T } - \ddot{S} \frac{\partial \phi _ {0} }{\partial T } ,
$$

where the first equation contains two unknown functions $ S $ and $ \phi _ {0} $. Let this equation have a solution $ \phi _ {0} = \phi _ {0} ( t, T) $ periodic in $ T $. Then the missing equation, from which $ S $ is to be determined, is found from the requirement that $ \phi _ {1} $ be periodic in $ T $, and has the form

$$
\dot{S} \oint \left ( \frac{\partial \phi _ {0} ( t, T) }{\partial T } \right ) ^ {2} dT = E \equiv \textrm{ const } ,
$$

where the integral is taken over a period of $ \phi _ {0} $.
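For instance (an illustrative special case, not given in the article): if (8) is linear with $ f ( t, x) = \omega ^ {2} ( t) x $, one may take $ \phi _ {0} = A ( t) \cos T $, and the first-approximation equation gives $ \dot{S} = \omega ( t) $. Since $ \oint ( \partial \phi _ {0} / \partial T ) ^ {2} \, dT = \pi A ^ {2} $ over a period in $ T $, the condition determining $ S $ becomes

$$
\pi \omega ( t) A ^ {2} ( t) = E,
$$

that is, $ \omega A ^ {2} = \textrm{ const } $: the classical adiabatic invariant, in agreement with the WKB amplitude law $ A \propto \omega ^ {-1/2} $.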
  
 
====References====
  
 
==2. The method of the small parameter for partial differential equations.==

As for ordinary differential equations, solutions of partial differential equations can depend regularly or singularly on a small parameter $ \epsilon $ (it is assumed that $ \epsilon > 0 $). Roughly speaking, regular dependence is observed when the leading terms of the differential operator do not depend on $ \epsilon $ and the minor terms are smooth functions of $ \epsilon $ for small $ \epsilon $. The solution is then also a smooth function of $ \epsilon $. But if any of the leading terms vanish as $ \epsilon \rightarrow 0 $, then the solution, as a rule, depends singularly on $ \epsilon $. In this case one often speaks of partial differential equations "with a small parameter in front of the leading derivatives". Such a classification is somewhat arbitrary, since the choice of leading terms is not always obvious; moreover, the parameter may also occur in the boundary conditions. In addition, singularities may arise in unbounded domains even when the small parameter only occurs in the minor derivatives (at infinity these play, in a sense, a role equal to, or even greater than, that of the leading terms).
  
For example, consider a second-order elliptic partial differential equation in a bounded domain $ G \subset \mathbf R ^ {n} $. The solution of the problem
  
$$
\Delta u + \sum _ {j = 1 } ^ { n } a _ {j} ( x, \epsilon ) \frac{\partial u }{\partial x _ {j} } + b ( x, \epsilon ) u = f ( x, \epsilon ),
$$

$$
u \mid _ {\partial G } = \phi ( x, \epsilon ) ,
$$
  
is a smooth function of $ \epsilon $ for small $ \epsilon $ if the boundary is smooth, if $ a _ {j} $, $ b $, $ f $, $ \phi $ are smooth functions of $ x $ and $ \epsilon $, and if the limit boundary value problem
  
$$
\Delta u + \sum _ {j = 1 } ^ { n } a _ {j} ( x, 0) \frac{\partial u }{\partial x _ {j} } + b ( x, 0) u = f ( x, 0),
$$

$$
u \mid _ {\partial G } = \phi ( x, 0) ,
$$
  
is uniquely solvable for any smooth functions $ f ( x, 0) $, $ \phi ( x, 0) $. The solution can be expanded in an asymptotic series in powers of $ \epsilon $:

$$ \tag{1 }
u ( x, \epsilon ) \sim \sum _ {k = 0 } ^ \infty \epsilon ^ {k} u _ {k} ( x),
$$

whose coefficients $ u _ {k} ( x) $ are solutions of the same type of boundary value problem and are easily calculated by [[Perturbation theory|perturbation theory]].
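A one-dimensional analogue shows this regular dependence concretely (illustrative only; the equation and all names are chosen for this sketch, not taken from the article): for the two-point problem $ u'' + \epsilon u = - 1 $, $ u ( 0) = u ( 1) = 0 $, substituting $ u \sim u _ {0} + \epsilon u _ {1} + \dots $ and equating powers of $ \epsilon $ gives $ u _ {0} = x ( 1 - x)/2 $ and $ u _ {1} = ( x ^ {4} - 2 x ^ {3} + x)/24 $, and the two-term sum approximates the exact solution with an $ O ( \epsilon ^ {2} ) $ error.

```python
import math

# Regular perturbation for  u'' + eps*u = -1,  u(0) = u(1) = 0  (illustrative model).
# Exact solution:  u = (cos(sqrt(eps)*(x - 1/2)) / cos(sqrt(eps)/2) - 1) / eps.
# Coefficients from u0'' = -1 and u1'' = -u0 with zero boundary values:
#   u0 = x*(1 - x)/2,   u1 = (x^4 - 2*x^3 + x)/24.
eps = 0.1

def exact(x):
    r = math.sqrt(eps)
    return (math.cos(r * (x - 0.5)) / math.cos(r / 2) - 1.0) / eps

def u0(x):
    return x * (1 - x) / 2

def u1(x):
    return (x**4 - 2 * x**3 + x) / 24

grid = [i / 200 for i in range(201)]
err1 = max(abs(exact(x) - u0(x)) for x in grid)                   # O(eps)
err2 = max(abs(exact(x) - (u0(x) + eps * u1(x))) for x in grid)   # O(eps^2)
```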
  
 
Quite a different situation holds, for example, for the boundary value problem
  
$$ \tag{2 }
\left . \begin{array}{c}
L _ \epsilon u \equiv \epsilon ^ {2} \Delta u + \sum _ {j = 1 } ^ { n } a _ {j} ( x) \frac{\partial u }{\partial x _ {j} } + b ( x) u = f ( x),  \\
u \mid _ {\partial G } = \phi ( x),  \\
\end{array} \right \}
$$

since for $ \epsilon = 0 $ the order of the equation is less. The limit problem has the form

$$ \tag{3 }
\left . \begin{array}{c}
L _ {0} u \equiv \sum _ {j = 1 } ^ { n } a _ {j} ( x) \frac{\partial u }{\partial x _ {j} } + b ( x) u = f ( x),  \\
u \mid _ {\partial G } = \phi ( x),  \\
\end{array} \right \}
$$
  
and, in general, is unsolvable. Let $ n = 2 $, let the characteristics of the limit equation have the form depicted in the figure, and let their orientation be induced by the vector field $ \{ a _ {j} ( x) \} $.
  
 
<img style="border:1px solid;" src="https://www.encyclopediaofmath.org/legacyimages/common_img/s085820a.gif" />

Figure: s085820a
  
If the solution of the limit equation is known at some point, then it is known along the whole characteristic passing through that point; therefore the boundary value problem (3) is unsolvable for arbitrary $ \phi ( x) $. As $ \epsilon \rightarrow 0 $, the solution of (2) converges to a solution $ u _ {0} ( x) $ of the limit equation $ L _ {0} u _ {0} = f $ which is equal to $ \phi ( x) $ on the segments $ AB $ and $ CD $. On the remainder of the boundary the boundary conditions are lost. In a neighbourhood of each of the segments $ AD $ and $ BC $, having the typical width $ \epsilon ^ {2} $ and called a [[Boundary layer|boundary layer]], the solution of (2) is close to the sum
  
$$
u _ {0} ( x) + v _ {0} ( \rho \epsilon ^ {-2} , x ^ \prime ).
$$
  
Here $ x ^ \prime $ is the coordinate along the boundary $ AD $ ($ BC $), $ \rho $ is the distance from the boundary along the normal, and $ \rho \epsilon ^ {-2} $ is the so-called inner variable. The solution of (2) expands as an asymptotic series of the form (1) everywhere except in the boundary layer and near a certain special characteristic (in the figure, $ CC ^ \prime $). The partial sums of the asymptotic series uniformly approximate the solution of (2) in the domain obtained from $ G $ by removing fixed neighbourhoods of the lines $ AD $, $ BC $ and $ CC ^ \prime $. In the boundary layer, outside a neighbourhood of the points $ A $, $ B $, $ C $, $ D $, $ C ^ \prime $, to the asymptotic series (1) one adds the asymptotic series
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/s/s085/s085820/s085820214.png" /></td> </tr></table>
+
$$
 +
\sum _ {k = 0 } ^  \infty 
 +
\epsilon  ^ {2k}
 +
v _ {2k} ( \rho \epsilon  ^ {-} 2 , x  ^  \prime  ).
 +
$$
  
The functions $v_{2k}(\xi, x')$ decrease exponentially as $\xi \rightarrow \infty$. The first asymptotic series is usually called the outer asymptotic series, the second the inner asymptotic series, and the functions $v_{2k}(\xi, x')$ are called boundary-layer functions. This terminology, like the problem itself, comes from the problem of the flow of a fluid of small viscosity around a body (see [[Hydrodynamics, mathematical problems in|Hydrodynamics, mathematical problems in]], and also [[#References|[1]]]–[[#References|[4]]]). This method is called the boundary-layer method and is essentially the same as the method for ordinary differential equations.
  
In neighbourhoods of the points $A$, $B$, $C$, $D$, at which the characteristics of $L_0$ touch the boundary, and close to $CC'$, the asymptotic behaviour of the solution is more complicated. Further complications arise when the boundary is not everywhere smooth (has corners and, for $n > 2$, edges). In some simple cases it is possible to construct asymptotic formulas by adding supplementary boundary-layer functions that depend on more variables but, as before, tend exponentially to zero at infinity. As a rule, however, the picture is more complicated: both the coefficients $u_k(x)$ of the outer expansion and the coefficients $v_k(\xi, x')$ of the inner expansion have essential singularities at the singular points ($A$, $B$, $C$, $D$ in the figure). An asymptotic expansion of the solution, uniform in the closed domain $\overline{G}$, can be constructed by the method of multiple scales (the method of matched asymptotic series, [[#References|[5]]]). Some problems for partial differential equations may be investigated by another variant of the method of multiple scales (the method of ascent): the solution is considered as a function of the basic independent variables and auxiliary "fast" variables. As a result the dimension of the original problem is increased, but the dependence on the parameter is simplified (see [[#References|[6]]]).
  
If the field of characteristics of the limit operator $L_0$ has stationary points, then the problem becomes significantly more complicated. For example, if $f(x) \equiv 0$, $b(x) \equiv 0$, and all the characteristics are directed into the domain, then the solution of (2) tends to a constant as $\epsilon \rightarrow 0$. Finding this constant and constructing an asymptotic series for the solution is a difficult and only partly solved problem (see [[#References|[7]]]). Equation (2) describes random perturbations of the dynamical system $\dot{x} = a(x)$. Problems in this area were also at the origin of the development of the method of the small parameter in the theory of partial differential equations (see [[#References|[8]]]).
  
If $a_j(x) \equiv 0$ in (2) and $b(x) < 0$, then the asymptotic series is easily found; far from the boundary the series has the form (1) and in the boundary layer, close to the boundary, the asymptotic series

$$
\sum_{k = 0}^\infty \epsilon^k v_k(\xi, x')
$$

is added, where now $\xi = \rho\epsilon^{-1}$.
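The mechanism can be seen in a one-dimensional model problem (chosen here for illustration; the equation and tolerances below are assumptions, not taken from the article): for $\epsilon^2 u'' - u = -1$ on $(0, 1)$ with $u(0) = u(1) = 0$, the outer expansion is $u_0 \equiv 1$, and adding the boundary-layer functions $v_0 = -e^{-\xi}$, $\xi = \rho/\epsilon$, at each endpoint gives a uniform composite approximation:

```python
import numpy as np

# Model of the case a_j = 0, b < 0 (a sketch, not from the article):
#     eps^2 * u'' - u = -1 on (0, 1),  u(0) = u(1) = 0,
# with exact solution u = 1 - cosh((x - 1/2)/eps) / cosh(1/(2*eps)).
# Outer expansion: u0(x) = 1.  Boundary-layer functions v0 = -exp(-xi),
# xi = rho/eps (rho = distance to the boundary), at each endpoint.

eps = 0.05
x = np.linspace(0.0, 1.0, 2001)

u_exact = 1.0 - np.cosh((x - 0.5) / eps) / np.cosh(0.5 / eps)
u_outer = np.ones_like(x)
u_comp = 1.0 - np.exp(-x / eps) - np.exp(-(1.0 - x) / eps)

err_outer = np.max(np.abs(u_exact - u_outer))   # O(1): fails at the boundary
err_comp = np.max(np.abs(u_exact - u_comp))     # exponentially small in 1/eps
```

The outer expansion alone misses the boundary conditions by an amount of order one, while the composite (outer plus boundary-layer) approximation is uniformly accurate, with an error that is exponentially small in $1/\epsilon$.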
  
The problem is genuinely difficult if $a_j(x) \equiv 0$, $b(x) > 0$. In this case the solution oscillates strongly; the relevant asymptotic methods are the [[WKB method|WKB method]], the [[Semi-classical approximation|semi-classical approximation]], the [[Parabolic-equation method|parabolic-equation method]], etc.
  
There is a class of problems in which the boundary of the domain degenerates as $\epsilon \rightarrow 0$. For the sake of being specific, consider the problem

$$ \tag{4 }
(\Delta + k^2) u(x) = 0, \quad x \in G_\epsilon, \quad u \mid_{\partial G_\epsilon} = \phi(x),
$$
  
where $G_\epsilon$ is the exterior of a bounded domain $D_\epsilon$ in $\mathbf R^n$ and $k > 0$; at infinity the [[Radiation conditions|radiation conditions]] are posed. For example, let $D_\epsilon = \epsilon D$, where $D$ is a fixed domain containing $x = 0$; then $D_\epsilon$ contracts to the point $x = 0$ and (4) has no limit. The quantity $\lambda = 2\pi/k$ has the sense of a wavelength: here $\lambda \gg d_\epsilon$, where $d_\epsilon$ is the diameter of $D_\epsilon$, and one speaks of the scattering of long waves by the obstacle $D_\epsilon$ (or the [[Hydrodynamic approximation|hydrodynamic approximation]], or the Rayleigh approximation). There are two overlapping zones: the near zone, containing $\partial G_\epsilon$, whose size tends to zero as $\epsilon \rightarrow 0$, and the far zone, the exterior of a domain contracting to the point $x = 0$ as $\epsilon \rightarrow 0$. The asymptotic series of the solution has different forms in these zones. The first boundary value problem for $n = 2$ turns out to be the most difficult; the inner asymptotic expansion has the form
  
$$
u(x, \epsilon) \sim \sum_{k = 0}^\infty \sum_{l = 0}^N \epsilon^k \mu^l v_{kl}(\xi),
$$
  
where $\xi = \epsilon^{-1} x$, $\mu = (\mathop{\rm ln} \epsilon + \alpha)^{-1}$ and $\alpha$ is a constant (see [[#References|[9]]]). The long-wave approximation has been studied mainly for the Helmholtz equation and for the Maxwell system (see [[#References|[10]]], [[#References|[11]]]).
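The logarithmic gauge $\mu$ can be checked in the simplest special case, the Dirichlet disc of radius $\epsilon$ in the plane, where separation of variables gives the mode coefficients $J_n(k\epsilon)/H_n^{(1)}(k\epsilon)$; the $n = 0$ coefficient decays only logarithmically in $\epsilon$. This concrete computation is an illustration of the phenomenon (the article treats general domains):

```python
import numpy as np
from scipy.special import j0, hankel1

# Dirichlet disc of radius eps (n = 2, obstacle D_eps = eps*D, D the unit
# disc).  Separation of variables gives the s-wave coefficient
#     c0(eps) = J_0(k*eps) / H^(1)_0(k*eps).
# For k*eps << 1, H^(1)_0(z) ~ 1 + (2i/pi)*(ln(z/2) + gamma), hence
#     |c0(eps)| ~ (pi/2)/|ln eps|,
# i.e. c0 is of order mu = (ln eps + alpha)^{-1}, not a power of eps.

k = 1.0

def c0(eps):
    return j0(k * eps) / hankel1(0, k * eps)

def c0_log(eps):   # leading logarithmic approximation
    return 1.0 / (1.0 + (2j / np.pi) * (np.log(k * eps / 2.0) + np.euler_gamma))

ratio = abs(c0(1e-4)) / abs(c0_log(1e-4))
```

Even for $\epsilon = 10^{-4}$ the coefficient is still of order $10^{-1}$: the decay in $\epsilon$ is logarithmic, which is why powers of $\mu$ must appear in the inner expansion alongside powers of $\epsilon$.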
  
Another variant arises when $D_\epsilon$ contracts to an interval $L$ as $\epsilon \rightarrow 0$; in this case there is a limit problem for $n = 2$ but not for $n > 2$. Problems of this type (including problems for the Laplace equation, for linear hyperbolic equations and for non-linear partial differential equations) arise in hydrodynamics and aerodynamics and in the theory of diffraction of waves (the flow of a fluid or a gas around a thin body). The problem (4) has been investigated for $n = 2$ (see [[#References|[12]]]); for $n = 3$ it has been investigated when $k = 0$ and $D_\epsilon$ is a solid of revolution (see [[#References|[13]]]).
  
Partial differential equations containing a small parameter arise naturally in the study of non-linear oscillations when the perturbation has order $\epsilon$ but the solution is studied over a large time interval, of order $\epsilon^{-1}$. If a continuous medium is considered instead of a system of particles, then partial differential equations arise to which generalizations of the averaging method apply (see [[#References|[14]]]).
  
 
====References====

Latest revision as of 14:55, 7 June 2020


1. The method of the small parameter for ordinary differential equations.

The following notations are used below: $ t $ is an independent variable, $ \epsilon > 0 $ is a small parameter, $ I $ is an interval $ 0 \leq t \leq T $, and the sign $ \sim $ denotes asymptotic equality. All vector and matrix functions which appear in equations and boundary conditions are assumed to be smooth (of class $ C ^ \infty $) with respect to all variables in their domain (with respect to $ \epsilon $ for $ 0 \leq \epsilon \leq \epsilon _ {0} $ or $ | \epsilon | \leq \epsilon _ {0} $).

1) The Cauchy problem for an $ n $-th order system:

$$ \tag{1 } \dot{x} = \ f ( t, x, \epsilon ),\ \ x ( 0) = \ x _ {0} ( \epsilon ). $$

Let the solution $ \phi _ {0} ( t) $ of the limit problem (that is, (1) with $ \epsilon = 0 $) exist and be unique for $ t \in I $. Then there is an asymptotic expansion for the solution $ x ( t, \epsilon ) $ of (1) as $ \epsilon \rightarrow 0 $,

$$ \tag{2 } x ( t, \epsilon ) \sim \ \phi _ {0} ( t) + \sum _ {j = 1 } ^ \infty \epsilon ^ {j} \phi _ {j} ( t), $$

which holds uniformly with respect to $ t \in I $. This follows from the theorem on smooth dependence of the solution of a system of ordinary differential equations on a parameter. If the vector functions $ f $ and $ x _ {0} $ are holomorphic for $ | \epsilon | \leq \epsilon _ {0} $, $ x = \phi _ {0} ( t) $, $ t \in I $, then the series in (2) converges to a solution $ x ( t, \epsilon ) $ for sufficiently small $ | \epsilon | $, uniformly relative to $ t \in I $ (Poincaré's theorem). Similar results hold for boundary value problems for systems of the form (1), if the solution of the corresponding limit problem exists and is unique.
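A minimal numerical sketch of the regular expansion (2), on a model Cauchy problem chosen here for illustration (not from the article), whose exact solution is elementary:

```python
import numpy as np

# Model problem:  dx/dt = -x + eps*x^2,  x(0) = 1.
# Exact solution (via u = 1/x):  x(t, eps) = exp(-t) / (1 + eps*(exp(-t) - 1)).
# Regular expansion (2): phi0 = exp(-t);  phi1 solves
#     phi1' = -phi1 + phi0^2,  phi1(0) = 0,  giving phi1 = exp(-t) - exp(-2t).

def x_exact(t, eps):
    return np.exp(-t) / (1.0 + eps * (np.exp(-t) - 1.0))

def x_asym(t, eps):
    return np.exp(-t) + eps * (np.exp(-t) - np.exp(-2.0 * t))

def sup_error(eps, T=5.0):
    t = np.linspace(0.0, T, 1001)
    return np.max(np.abs(x_exact(t, eps) - x_asym(t, eps)))

# The remainder is O(eps^2) uniformly on [0, T]: halving eps divides it by ~4.
err1, err2 = sup_error(0.01), sup_error(0.005)
```

The observed ratio `err1/err2` close to $4$ is the numerical signature of a remainder of order $\epsilon^2$, as the theorem asserts.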

One distinguishes two forms of dependence of equations (or systems) on a small parameter — regular and singular. A system in normal form depends regularly on $ \epsilon $ if all its right-hand sides are smooth functions of $ \epsilon $ for small $ \epsilon \geq 0 $; otherwise the system depends singularly on $ \epsilon $. When the system depends regularly on $ \epsilon $, the solution of the problem with a parameter, as a rule, converges uniformly on a finite $ t $-interval as $ \epsilon \rightarrow 0 $ to a solution of the limit problem.

2) In the linear theory one considers $ n $-th order systems which depend singularly on $ \epsilon $:

$$ \epsilon \dot{x} = \ A ( t, \epsilon ) x + f ( t, \epsilon ), $$

where the entries of the $ ( n \times n) $-matrix $ A $ and the components of the vector $ f $ are complex-valued functions. The central problem in the linear theory is the construction of a fundamental system of solutions of the homogeneous system (that is, for $ f \equiv 0 $), the asymptotic behaviour of which as $ \epsilon \rightarrow 0 $ is known throughout the interval $ I $.

The basic result of the linear theory is the following theorem of Birkhoff. Let: 1) the eigenvalues $ \lambda _ {j} ( t, 0) $, $ 1 \leq j \leq n $, of $ A ( t, 0) $ be distinct for $ t \in I $; and 2) the quantities

$$ \mathop{\rm Re} ( \lambda _ {j} ( t, 0) - \lambda _ {k} ( t, 0)),\ \ 1 \leq j, k \leq n,\ \ j \neq k, $$

not change sign. Then there is a fundamental system of solutions $ x _ {1} ( t, \epsilon ), \dots, x _ {n} ( t, \epsilon ) $ of the homogeneous system

$$ \epsilon \dot{x} = \ A ( t, \epsilon ) x $$

for which there is the following asymptotic expansion as $ \epsilon \rightarrow 0 $:

$$ \tag{3 } x _ {j} ( t, \epsilon ) \sim \ \mathop{\rm exp} \left [ \epsilon ^ {-1} \int\limits _ {t _ {0} } ^ { t } \lambda _ {j} ( \tau , \epsilon ) \ d \tau \right ] \sum _ {k = 0 } ^ \infty \epsilon ^ {k} \phi _ {kj} ( t), $$

$$ 1 \leq j \leq n. $$

This expansion is uniform relative to $ t \in I $ and can be differentiated any number of times with respect to $ t $ and $ \epsilon $. If $ A $ does not depend on $ \epsilon $, that is, $ A = A ( t) $, then

$$ \phi _ {0j} ( t) = \ \mathop{\rm exp} \left [ - \int\limits _ {t _ {0} } ^ { t } \left ( e _ {j} ^ {*} ( \tau ),\ \frac{de _ {j} ( \tau ) }{d \tau } \right ) \ d \tau \right ] e _ {j} ( t), $$

where $ e _ {j} ^ {*} $, $ e _ {j} $ are left and right eigenvectors of $ A ( t) $ normalized by

$$ ( e _ {j} ^ {*} ( t), e _ {j} ( t)) \equiv 1,\ \ t \in I. $$

Solutions having asymptotic behaviour of the form (3) are called WKB solutions (see WKB method). The qualitative structure of these solutions is as follows. If

$$ \mathop{\rm Re} \lambda _ {j} ( t) < 0 \ \ [ \mathop{\rm Re} \lambda _ {j} ( t) > 0] \ \ \textrm{ for } t \in I, $$

then $ x _ {j} $ is a vector function of boundary-layer type for $ t _ {0} = 0 $ ($ t _ {0} = T $), that is, it is noticeably different from zero only in an $ \epsilon $-neighbourhood of $ t = 0 $ ($ t = T $). If, however, $ \mathop{\rm Re} \lambda _ {j} ( t) \equiv 0 $, $ t \in I $, then $ x _ {j} $ oscillates strongly as $ \epsilon \rightarrow + 0 $ and has order $ O ( 1) $ on the whole interval $ I $.

If $ A ( t, \epsilon ) $ is a holomorphic matrix function for $ | t | \leq t _ {0} $, $ | \epsilon | \leq \epsilon _ {0} $ and condition 1) is satisfied, then (3) is valid for $ \epsilon \rightarrow + 0 $, $ 0 \leq t \leq t _ {1} $, where $ t _ {1} > 0 $ is sufficiently small. A difficult problem is the construction of asymptotics for fundamental systems of solutions in the presence of turning points on $ I $, that is, points at which $ A ( t, 0) $ has multiple eigenvalues. This problem has been completely solved only for special types of turning points (see [1]). In a neighbourhood of a turning point there is a domain of transition in which the solution is rather complicated and in the simplest case is expressed by an Airy function (cf. Airy functions).

Similar results (see [1], [17]) are valid for scalar equations of the form

$$ \epsilon ^ {n} x ^ {(n)} + \sum _ {j = 0 } ^ { n - 1 } \epsilon ^ {j} a _ {j} ( t, \epsilon ) x ^ {(j)} = 0, $$

where $ a _ {j} $ is a complex-valued function; the roles of the functions $ \lambda _ {j} ( t, \epsilon ) $ are played by the roots of the characteristic equation

$$ \lambda ^ {n} + \sum _ {j = 0 } ^ { n - 1 } \lambda ^ {j} a _ {j} ( t, \epsilon ) = 0. $$
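A hedged numerical check of the WKB asymptotics in this scalar setting, for the model choice $n = 2$, $a_1 \equiv 0$, $a_0 = -q(t)$ with $q > 0$ (so the characteristic roots $\lambda = \pm\sqrt{q}$ are real and distinct); the particular $q$, interval and tolerances below are assumptions for illustration:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model equation:  eps^2 x'' = q(t) x,  q(t) = (1 + t)^2 > 0.
# The standard leading-order WKB (growing) solution is
#     x ~ q(t)^(-1/4) * exp(S(t)/eps),   S(t) = t + t^2/2   (S' = sqrt(q)),
# i.e. formula (3) truncated at k = 0.

eps = 0.05

def q(t):
    return (1.0 + t) ** 2

def wkb(t):
    return q(t) ** -0.25 * np.exp((t + 0.5 * t**2) / eps)

def rhs(t, z):
    x, xp = z
    return [xp, q(t) * x / eps**2]

# Launch the ODE with initial data read off from the truncated WKB solution.
z0 = [wkb(0.0), wkb(0.0) * (1.0 / eps - 0.5)]   # (d/dt) wkb at t = 0
sol = solve_ivp(rhs, (0.0, 1.0), z0, rtol=1e-10, atol=1e-12)
rel_err = abs(sol.y[0, -1] - wkb(1.0)) / wkb(1.0)
```

Although the solution grows by a factor of order $e^{S(1)/\epsilon} \approx e^{30}$, the one-term WKB formula tracks it to within a relative error of order $\epsilon$.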

WKB solutions also arise in non-linear systems of the form

$$ \epsilon \dot{x} = \ f ( t, x, \epsilon ),\ \ x \in \mathbf R ^ {n} . $$

The WKB asymptotic expansion (3), under the conditions of Birkhoff's theorem, is valid on an infinite interval $ 0 \leq t < \infty $ (that is, (3) is asymptotic both as $ \epsilon \rightarrow 0 $ and as $ t \rightarrow \infty $) if $ A ( t, \epsilon ) $ is sufficiently well behaved as $ t \rightarrow + \infty $, for example, if it rapidly converges to a constant matrix with distinct eigenvalues (see [2]). Many questions of spectral analysis (see [3]) and mathematical physics reduce to singular problems with a small parameter.

3) Of particular interest is the investigation of non-linear systems of the form

$$ \tag{4 } \left . \begin{array}{c} \epsilon \dot{x} = f ( x, y),\ x ( 0) = x _ {0} ,\ \ x \in \mathbf R ^ {n} , \\ \dot{y} = g( x, y),\ y ( 0) = y _ {0} ,\ \ y \in \mathbf R ^ {m} , \\ \end{array} \right \} $$

where $ \epsilon > 0 $ is a small parameter. The first equation describes fast motions, the second slow motions. For example, the van der Pol equation reduces by the substitution

$$ y = \ \int\limits _ { 0 } ^ { x } ( x ^ {2} - 1) \ dx + { \frac{1} \lambda } \frac{dx }{dt } ,\ \ t _ {1} = \ { \frac{t} \lambda } ,\ \ \epsilon = \ { \frac{1}{\lambda ^ {2} } } , $$

for $ \lambda $ large, to the system

$$ \epsilon \dot{x} = \ y - { \frac{1}{3} } x ^ {3} + x,\ \ \dot{y} = - x, $$

which is of the form (4).

For $ \epsilon = 0 $ the equation of fast motion degenerates to the equation $ f ( x, y) = 0 $. In some closed bounded domain $ D $ of the variable $ y $, let this equation have an isolated stable continuous root $ x = \phi ( y) $ (that is, the real parts of the eigenvalues of the Jacobi matrix $ \partial f/ \partial x $ are negative for $ x = \phi ( y) $, $ y \in D $); suppose that solutions of (4) and of the degenerate problem

$$ \tag{5 } x = \phi ( y),\ \ \dot{y} = \ g ( x, y),\ \ y ( 0) = y _ {0} , $$

exist and are unique for $ t \in I $, and suppose that the function $ \overline{y}\; $ obtained as the solution of (5) satisfies $ \overline{y}\; ( t) \in D $ for $ t \in I $. If $ ( x _ {0} , y _ {0} ) $ is in the domain of influence of the root $ x = \phi ( y) $, then

$$ x ( t, \epsilon ) \rightarrow \overline{x}\; ( t),\ \ 0 < t \leq T, $$

$$ y ( t, \epsilon ) \rightarrow \overline{y}\; ( t),\ 0 \leq t \leq T, $$

as $ \epsilon \rightarrow 0 $, where $ ( \overline{x}\; , \overline{y}\; ) $ is the solution of the degenerate problem (Tikhonov's theorem). Close to $ t = 0 $ the limit transition $ x ( t, \epsilon ) \rightarrow \overline{x}\; ( t) $ is non-uniform — a boundary layer occurs. For problem (4) there is the following asymptotic expansion for the solution:

$$ \tag{6 } x ( t, \epsilon ) \sim \ \sum _ {k = 0 } ^ \infty \epsilon ^ {k} x _ {k} ( t) + \sum _ {k = 0 } ^ \infty \epsilon ^ {k} \Pi _ {k} \left ( { \frac{t} \epsilon } \right ) , $$

and the asymptotic expansion for $ y ( t, \epsilon ) $ has a similar form. In (6) the first sum is the regular part and the second sum is the boundary layer. The regular part of the asymptotic expansion is calculated by standard means: series of the form (2) are substituted into (4), the right-hand sides are expanded as power series in $ \epsilon $ and the coefficients of equal powers of $ \epsilon $ are equated. For the calculation of the boundary-layer part of the asymptotic expansion one introduces a new variable $ \tau = t/ \epsilon $ (the fast time) in a neighbourhood of $ t = 0 $ and applies the above procedure. There is an interval on the $ t $-axis on which both the regular (or outer) expansion and the boundary-layer (or inner) expansion are useful. The functions $ x _ {k} $, $ \Pi _ {k} $ are determined by requiring that these expansions coincide there (the so-called method of matching, see [4], [5]).
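A sketch of Tikhonov's theorem and of the composite (outer plus boundary-layer) approximation on a model linear fast-slow system, chosen here for illustration (the system, values and tolerances are assumptions, not from the article):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Model system of the form (4):
#     eps*x' = y - x,   y' = -x,   x(0) = 2,  y(0) = 1.
# The root x = phi(y) = y is stable (df/dx = -1 < 0).  The degenerate
# problem (5) is y' = -y, y(0) = 1, so ybar(t) = exp(-t) = xbar(t).
# The leading boundary-layer term is
#     Pi0(t/eps) = (x0 - phi(y0)) * exp(-t/eps) = exp(-t/eps).

eps = 0.01

def rhs(t, z):
    x, y = z
    return [(y - x) / eps, -x]

sol = solve_ivp(rhs, (0.0, 0.5), [2.0, 1.0], rtol=1e-9, atol=1e-12,
                dense_output=True)

def composite(t):        # leading-order outer + boundary-layer approximation
    return np.exp(-t) + np.exp(-t / eps)

x_mid = sol.sol(0.05)[0]   # t = 5*eps: boundary-layer term still visible
x_end = sol.sol(0.5)[0]    # well outside the boundary layer
```

Outside the layer the solution follows the degenerate solution $\overline{x}(t) = e^{-t}$; inside the layer only the composite sum approximates it, which is the non-uniformity near $t = 0$ described above.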

Similar results hold when the right-hand side of (4) depends explicitly on $ t $, for scalar equations of the form

$$ \tag{7 } \epsilon x ^ {(n)} = \ f ( t, x, \dot{x}, \dots, x ^ {( n - 1) } ) $$

and for boundary value problems for such systems and equations (see Differential equations with small parameter, [6], [7]).

For the approximation of the solution of (4) at a break point, where stability is lost (for example, where one of the eigenvalues of $ \partial f/ \partial x $ for $ x = \phi ( y) $ vanishes), series of the form (6) lose their asymptotic character. In a neighbourhood of a break point the asymptotic expansion has quite a different character (see [8]). The investigation of a neighbourhood of a break point is particularly essential for the construction of the asymptotic theory of relaxation oscillations (cf. Relaxation oscillation).
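The jumps that occur when a trajectory reaches a break point can be observed numerically on the van der Pol equation in Liénard form, the standard example of a relaxation oscillation; the code and parameter values below are our sketch, not part of the article.

```python
# Van der Pol equation in Lienard form (standard relaxation-oscillation
# example; this numerical sketch and its parameters are ours):
#     eps * dx/dt = y - (x**3/3 - x),   dy/dt = -x.
# The root of the degenerate fast equation is the slow manifold
# y = x**3/3 - x; stability is lost at the break points x = +1 and x = -1,
# where the Jacobian 1 - x**2 of the fast right-hand side vanishes, and
# the trajectory jumps to the other stable branch.

def van_der_pol(eps=0.05, t_end=10.0, dt=1e-4):
    x = 2.0
    y = x**3 / 3 - x            # start on the slow manifold
    xs = []
    for _ in range(int(round(t_end / dt))):
        x, y = x + dt * (y - (x**3 / 3 - x)) / eps, y - dt * x
        xs.append(x)
    return xs

xs = van_der_pol()
amplitude = max(abs(v) for v in xs)
jumps = sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)
print(amplitude, jumps)  # amplitude near 2, repeated jumps between branches
```

Each sign change of $ x $ corresponds to a fast jump between the two stable branches of the slow manifold, i.e. to a passage near a break point.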

4) Problems in celestial mechanics and non-linear oscillation theory lead, in particular, to the necessity of investigating the behaviour of solutions of (1) not on a finite interval but on a large $ t $-interval, of order $ \epsilon ^ {-1} $ or higher. For such problems a method of averaging is widely applied (see Krylov–Bogolyubov method of averaging; Small denominators, [9]–[11]).
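A minimal sketch of the idea of averaging; the model equation is our illustration, not from the article.

```python
import math

# Sketch of the averaging idea on a model equation (ours, not from the
# article):  dx/dt = eps * x * cos(t)**2.  Averaging the right-hand side
# over the fast oscillation gives dX/dt = eps * X / 2, and the two
# solutions remain O(eps)-close on a time interval of length ~ 1/eps.

def solve_full(eps, t_end, dt=1e-3):
    x, t = 1.0, 0.0
    for _ in range(int(round(t_end / dt))):
        x += dt * eps * x * math.cos(t) ** 2
        t += dt
    return x

eps = 0.01
t_end = 1.0 / eps                      # interval of order eps**-1
x_full = solve_full(eps, t_end)
x_avg = math.exp(eps * t_end / 2)      # averaged solution at t_end
print(x_full, x_avg)                   # agree to O(eps)
```

Here the full equation is even explicitly solvable, $ x ( t) = \mathop{\rm exp} ( \epsilon ( t/2 + \sin 2t/4)) $, which shows directly that the averaged solution captures the secular growth while dropping only a bounded oscillatory correction.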

5) The asymptotic behaviour of solutions of equations of the form (7) has been investigated, in particular, with the help of the so-called method of multiple scales (see [4], [5]); this method is a generalization of the WKB method. The method can be illustrated by the scalar equation

$$ \tag{8 } \epsilon ^ {2} \ddot{x} + f ( t, x) = 0, $$

which has a periodic solution (see [12]). The solution is sought for in the form

$$ \tag{9 } x = \phi ( T, t, \epsilon ) \sim \ \sum _ {j = 0 } ^ \infty \epsilon ^ {j} \phi _ {j} ( T, t),\ \ T = { \frac{S ( t) } \epsilon } . $$

(The functions $ T, t $ are called the scales.) If (8) is linear, then $ \phi _ {j} = e ^ {T} \psi _ {j} ( t) $ and (9) is a WKB solution. In the non-linear case the equations of the first two approximations take the form

$$ \dot{S} ^ {2} \frac{\partial ^ {2} \phi _ {0} }{\partial T ^ {2} } + f ( t, \phi _ {0} ) = 0, $$

$$ \dot{S} ^ {2} \frac{\partial ^ {2} \phi _ {1} }{ \partial T ^ {2} } + \frac{\partial f ( t, \phi _ {0} ) }{\partial x } \phi _ {1} = - 2 \dot{S} \frac{\partial ^ {2} \phi _ {0} }{\partial t \partial T } - \ddot{S} \frac{\partial \phi _ {0} }{\partial T } , $$

where the first equation contains two unknown functions $ S $ and $ \phi _ {0} $. Let this equation have a solution $ \phi _ {0} = \phi _ {0} ( t, T) $ periodic in $ T $. Then the missing equation, from which $ S $ is to be determined, follows from the periodicity in $ T $ of $ \phi _ {1} $ and has the form

$$ \dot{S} \oint \left ( \frac{\partial \phi _ {0} ( t, T) }{\partial T } \right ) ^ {2} dT = \ E \equiv \textrm{ const } , $$

where the integral is taken over a period of $ \phi _ {0} $ in $ T $.
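In the linear case $ f ( t, x) = \omega ^ {2} ( t) x $ this conservation law reduces to the classical adiabatic invariance of $ E/ \omega $ (energy divided by instantaneous frequency), which can be checked numerically. The sketch below, with our choice of $ \omega $, uses equation (8) after the rescaling $ t \rightarrow t/ \epsilon $, i.e. $ \ddot{x} + \omega ^ {2} ( \epsilon t) x = 0 $.

```python
import math

# Adiabatic invariance of E/w for the slowly varying linear oscillator
#     x'' + w(eps*t)**2 * x = 0
# (our model; w is an assumed illustrative frequency profile).

def rk4_step(x, v, t, dt, w):
    def acc(x, t):
        return -w(t) ** 2 * x
    k1x, k1v = v, acc(x, t)
    k2x, k2v = v + dt / 2 * k1v, acc(x + dt / 2 * k1x, t + dt / 2)
    k3x, k3v = v + dt / 2 * k2v, acc(x + dt / 2 * k2x, t + dt / 2)
    k4x, k4v = v + dt * k3v, acc(x + dt * k3x, t + dt)
    return (x + dt / 6 * (k1x + 2 * k2x + 2 * k3x + k4x),
            v + dt / 6 * (k1v + 2 * k2v + 2 * k3v + k4v))

eps, dt = 0.01, 1e-3
w = lambda t: 1.0 + eps * t            # frequency slowly drifts from 1 to 2
x, v, t = 1.0, 0.0, 0.0
invariant = lambda x, v, t: (v * v + w(t) ** 2 * x * x) / (2 * w(t))
I0 = invariant(x, v, t)
for _ in range(int(round(1.0 / eps / dt))):   # integrate up to t = 1/eps
    x, v = rk4_step(x, v, t, dt, w)
    t += dt
I1 = invariant(x, v, t)
print(I0, I1)   # nearly equal, although the energy itself has doubled
```

The quantity $ E/ \omega $ drifts only by $ O ( \epsilon ) $ over the interval of length $ \epsilon ^ {-1} $, while the energy grows proportionally to $ \omega $; this is exactly what the Kuzmak condition asserts in the linear case.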

References

[1] W. Wasow, "Asymptotic expansions for ordinary differential equations" , Interscience (1965)
[2] M.V. Fedoryuk, "Asymptotic methods in the theory of ordinary linear differential equations" Math. USSR Sb. , 8 : 4 (1969) pp. 451–491 Mat. Sb. , 79 : 4 (1969) pp. 477–516 Zbl 0215.44801
[3] M.A. Naimark, "Linear differential operators" , 1–2 , Harrap (1968) (Translated from Russian) MR0262880 Zbl 0227.34020
[4] J.D. Cole, "Perturbation methods in applied mathematics" , Blaisdell (1968) MR0246537 Zbl 0162.12602
[5] A.H. Nayfeh, "Perturbation methods" , Wiley (1973) MR0404788 Zbl 0265.35002
[6] A.B. Vasil'eva, V.F. Butuzov, "Asymptotic expansions of solutions of singularly perturbed equations" , Moscow (1973) (In Russian)
[7] A.B. Vasil'eva, V.F. Butuzov, "Singularly perturbed equations in critical cases" , Moscow (1978) (In Russian) Zbl 1210.34017
[8] E.F. Mishchenko, N.Kh. Rozov, "Differential equations with small parameters and relaxation oscillations" , Plenum (1980) (Translated from Russian) MR0750298 Zbl 0482.34004
[9] N.N. Bogolyubov, Yu.A. Mitropol'skii, "Asymptotic methods in the theory of non-linear oscillations" , Hindustan Publ. Comp. , Delhi (1961) (Translated from Russian) MR0100379 Zbl 0151.12201
[10] V.M. Volosov, B.I. Morgunov, "Averaging methods in the theory of non-linear oscillatory systems" , Moscow (1971) (In Russian)
[11] V.I. Arnol'd, "Small denominators and problems of stability of motion in classical and celestial mechanics" Russian Math. Surveys , 18 : 6 (1963) pp. 86–191 Uspekhi Mat. Nauk , 18 : 6 (1963) pp. 91–192 Zbl 0135.42701
[12] G.E. Kuzmak, "Asymptotic solutions of nonlinear second order differential equations with variable coefficients" J. Appl. Math. Mech. , 23 (1959) pp. 730–744 Prikl. Mat. i Mekh. , 23 : 3 (1959) pp. 515–520 MR0109924 Zbl 0089.29803
[13] A.A. Andronov, A.A. Vitt, A.E. Khaikin, "The theory of oscillators" , Dover, reprint (1987) (Translated from Russian) MR925417
[14] G.E. Giacaglia, "Perturbation methods in non-linear systems" , Springer (1972) MR0478875 Zbl 0282.34001
[15] N.N. Moiseev, "Asymptotic methods of non-linear mechanics" , Moscow (1969) (In Russian)
[16] V.F. Butuzov, A.B. Vasil'eva, M.V. Fedoryuk, "Asymptotic methods in the theory of ordinary differential equations" Progress in Math. , 8 (1970) pp. 1–82 Itogi Nauk. Mat. Anal. 1967 (1969) pp. 5–73 MR0283329 Zbl 0246.34055
[17] W. Wasow, "Linear turning point theory" , Springer (1985) MR0771669 Zbl 0558.34049

N.Kh. Rozov, M.V. Fedoryuk

2. The method of the small parameter for partial differential equations.

As for ordinary differential equations, solutions of partial differential equations can depend regularly or singularly on a small parameter $ \epsilon $ (it is assumed that $ \epsilon > 0 $). Roughly speaking, regular dependence is observed when the leading terms of the differential operator do not depend on $ \epsilon $ and the minor terms are smooth functions of $ \epsilon $ for small $ \epsilon $. The solution is then also a smooth function of $ \epsilon $. But if any of the leading terms vanish as $ \epsilon \rightarrow 0 $, then the solution, as a rule, depends singularly on $ \epsilon $. In this case one often speaks of partial differential equations "with a small parameter in front of the leading derivatives" . Such a classification is somewhat arbitrary, since the choice of leading terms is not always obvious; moreover, the parameter may also occur in the boundary conditions. In addition, singularities may arise in unbounded domains even when the small parameter occurs only in the minor derivatives (at infinity these play, in a sense, a role equal to, or even greater than, that of the leading terms).

For example, consider a second-order elliptic partial differential equation in a bounded domain $ G \subset \mathbf R ^ {n} $. The solution of the problem

$$ \Delta u + \sum _ {j = 1 } ^ { n } a _ {j} ( x, \epsilon ) \frac{\partial u }{\partial x _ {j} } + b ( x, \epsilon ) u = \ f ( x, \epsilon ), $$

$$ u \mid _ {\partial G } = \phi ( x, \epsilon ) , $$

is a smooth function of $ \epsilon $, for small $ \epsilon $, if the boundary is smooth, if $ a _ {j} $, $ b $, $ f $, $ \phi $ are smooth functions of $ x $ and $ \epsilon $, and if the limit boundary value problem

$$ \Delta u + \sum _ {j = 1 } ^ { n } a _ {j} ( x , 0) \frac{\partial u }{\partial x _ {j} } + b ( x, 0) u = \ f ( x, 0), $$

$$ u \mid _ {\partial G } = \phi ( x, 0) , $$

is uniquely solvable for any smooth functions $ f ( x, 0) $, $ \phi ( x, 0) $. The solution can be expanded in an asymptotic series in powers of $ \epsilon $:

$$ \tag{1 } u ( x, \epsilon ) \sim \ \sum _ {k = 0 } ^ \infty \epsilon ^ {k} u _ {k} ( x), $$

whose coefficients $ u _ {k} ( x) $ are solutions of the same type of boundary value problem and are easily calculated by perturbation theory.
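The calculation of the coefficients $ u _ {k} $ by perturbation theory can be illustrated on a one-dimensional analogue; the model equation and the code below are our sketch, not from the article.

```python
# A one-dimensional model of regular dependence on eps (our illustration,
# not from the article):
#     u'' - (1 + eps) * u = -1  on (0, 1),   u(0) = u(1) = 0.
# Substituting u ~ u0 + eps*u1 and equating powers of eps gives
#     u0'' - u0 = -1,    u1'' - u1 = u0,
# and u(eps) - (u0 + eps*u1) = O(eps**2), as in the series (1).

def solve_bvp(c, rhs_vals):
    """Finite differences for u'' - c*u = rhs, u(0) = u(1) = 0;
    rhs_vals holds the right-hand side at the interior grid points
    (tridiagonal system, Thomas algorithm)."""
    n = len(rhs_vals) + 1
    h = 1.0 / n
    diag = [-2.0 - c * h * h] * (n - 1)
    d = [v * h * h for v in rhs_vals]
    for i in range(1, n - 1):             # forward elimination
        m = 1.0 / diag[i - 1]
        diag[i] -= m
        d[i] -= m * d[i - 1]
    u = [0.0] * (n - 1)
    u[-1] = d[-1] / diag[-1]
    for i in range(n - 3, -1, -1):        # back substitution
        u[i] = (d[i] - u[i + 1]) / diag[i]
    return u

n, eps = 200, 0.1
u_eps = solve_bvp(1.0 + eps, [-1.0] * (n - 1))
u0 = solve_bvp(1.0, [-1.0] * (n - 1))
u1 = solve_bvp(1.0, u0)                    # u1'' - u1 = u0
err = max(abs(ue - (a + eps * b)) for ue, a, b in zip(u_eps, u0, u1))
print(err)   # O(eps**2): two terms of the series already approximate u(eps)
```

Each coefficient $ u _ {k} $ solves a boundary value problem of the same type as the limit problem, with a right-hand side built from the previous coefficients, exactly as described above.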

Quite a different situation holds, for example, for the boundary value problem

$$ \tag{2 } \left . \begin{array}{c} L _ \epsilon u \equiv \ \epsilon ^ {2} \Delta u + \sum _ {j = 1 } ^ { n } a _ {j} ( x) \frac{\partial u }{\partial x _ {j} } + b ( x) u = f ( x), \\ u \mid _ {\partial G } = \phi ( x), \\ \end{array} \right \} $$

since for $ \epsilon = 0 $ the order of the equation is less. The limit problem has the form

$$ \tag{3 } \left . \begin{array}{c} L _ {0} u \equiv \ \sum _ {j = 1 } ^ { n } a _ {j} ( x) \frac{\partial u }{\partial x _ {j} } + b ( x) u = f ( x), \\ u \mid _ {\partial G } = \phi ( x), \\ \end{array} \right \} $$

and, in general, is unsolvable. Let $ n = 2 $, let the characteristics of the limit equation have the form depicted in the figure and let their orientation be induced by the vector field $ \{ a _ {j} ( x) \} $.

Figure: s085820a

If the solution of the limit equation is known at some point, then it is known along the whole characteristic passing through that point; therefore the boundary value problem (3) is unsolvable for arbitrary $ \phi ( x) $. As $ \epsilon \rightarrow 0 $, the solution of (2) converges to a solution $ u _ {0} ( x) $ of the limit equation $ L _ {0} u _ {0} = f $ which is equal to $ \phi ( x) $ on the segments $ AB $ and $ CD $. On the remainder of the boundary the boundary conditions are lost. In a neighbourhood of each of the segments $ AD $ and $ BC $, having the typical width $ \epsilon ^ {2} $ and called a boundary layer, the solution of (2) is close to the sum

$$ u _ {0} ( x) + v _ {0} ( \rho \epsilon ^ {-} 2 , x ^ \prime ). $$

Here $ x ^ \prime $ is the coordinate along the boundary $ AD $ (respectively, $ BC $), $ \rho $ is the distance from the boundary along the normal, and $ \rho \epsilon ^ {-2} $ is the so-called inner variable. The solution of (2) expands as an asymptotic series of the form (1) everywhere except in the boundary layer and near some special characteristic (in the figure, this is $ CC ^ \prime $). The partial sums of the asymptotic series uniformly approximate the solution of (2) in the domain obtained from $ G $ by removing fixed neighbourhoods of the lines $ AD $, $ BC $ and $ CC ^ \prime $. In the boundary layer, outside a neighbourhood of the points $ A $, $ B $, $ C $, $ D $, $ C ^ \prime $, to the asymptotic series (1) one adds the asymptotic series

$$ \sum _ {k = 0 } ^ \infty \epsilon ^ {2k} v _ {2k} ( \rho \epsilon ^ {-} 2 , x ^ \prime ). $$

The functions $ v _ {2k} ( \xi , x ^ \prime ) $ decrease exponentially as $ \xi \rightarrow \infty $. The first asymptotic series is usually called the outer asymptotic series and the second the inner asymptotic series, and the functions $ v _ {2k} ( \xi , x ^ \prime ) $ are called boundary-layer functions. This terminology, like the problem itself, comes from the problem of fluid flow with small viscosity around bodies (see Hydrodynamics, mathematical problems in, and also [1]–[4]). This method is called the method of the boundary layer and is essentially the same as the method for ordinary differential equations.
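The structure "outer expansion plus exponentially decreasing boundary-layer functions" can be seen explicitly in a one-dimensional model that is solvable in closed form; the model and the code are our illustration, not from the article.

```python
import math

# A one-dimensional model of a boundary layer of width eps**2 (our
# illustration, not from the article):
#     eps**2 * u'' + u' = 0  on (0, 1),   u(0) = 0,  u(1) = 1.
# The limit equation u' = 0 retains only the condition at x = 1 (outer
# solution u0 = 1); the condition at x = 0 is lost and is recovered by the
# boundary-layer function v0(xi) = -exp(-xi) with inner variable
# xi = x / eps**2.

def u_exact(x, eps):
    e2 = eps * eps
    return (1.0 - math.exp(-x / e2)) / (1.0 - math.exp(-1.0 / e2))

eps = 0.1
print(abs(u_exact(0.5, eps) - 1.0))       # outside the layer: close to u0 = 1
print(u_exact(eps * eps, eps))            # inside the layer: about 1 - 1/e

def composite(x, eps):
    """Outer solution plus boundary-layer function: uniform approximation."""
    return 1.0 - math.exp(-x / (eps * eps))

gap = max(abs(u_exact(i / 100, eps) - composite(i / 100, eps))
          for i in range(101))
print(gap)                                 # uniformly tiny on [0, 1]
```

Away from $ x = 0 $ the boundary-layer term is exponentially small and the outer solution alone suffices; inside the layer, of width of order $ \epsilon ^ {2} $, both terms are needed.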

In neighbourhoods of the points $ A $, $ B $, $ C $, $ D $, at which the characteristics of $ L _ {0} $ touch the boundary, and close to $ CC ^ \prime $, the asymptotic behaviour of the solution is more complicated. Complications also arise when the boundary is not everywhere smooth (has corners and, for $ n > 2 $, edges). In some simple cases it is possible to construct asymptotic formulas by adding supplementary boundary-layer functions depending on more variables but, as before, tending exponentially to zero at infinity. As a rule, however, the picture is more complicated: both the coefficients $ u _ {k} ( x) $ of the outer expansion and the coefficients $ v _ {k} ( \xi , x ^ \prime ) $ of the inner expansion have essential singularities at the singular points ( $ A $, $ B $, $ C $, $ D $ in the figure). An asymptotic expansion of the solution, uniform in the closed domain $ \overline{G}\; $, can be constructed by the method of multiple scales (the method of matched asymptotic series, [5]). Some problems for partial differential equations may be investigated by another variant of the method of multiple scales (the method of ascent): the solution is considered as a function of the basic independent variables and of auxiliary "fast" variables. As a result the dimension of the original problem is increased, but the dependence on the parameter is simplified (see [6]).

If the field of characteristics of the limit operator $ L _ {0} $ has stationary points, then the problem becomes significantly more complicated. For example, if $ f ( x) \equiv 0 $, $ b ( x) \equiv 0 $, and if all the characteristics are directed into the domain, then the solution of (2) tends to a constant as $ \epsilon \rightarrow 0 $. Finding this constant and constructing an asymptotic series for the solution is a difficult and only partly solved problem (see [7]). Equation (2) describes random perturbations of the dynamical system $ \dot{x} = a ( x) $. Problems in this area were also at the origin of the development of the method of the small parameter in the theory of partial differential equations (see [8]).

If $ a _ {j} ( x) \equiv 0 $ in (2) and $ b ( x) < 0 $, then the asymptotic series is easily found; far from the boundary the series has the form (1) and in the boundary layer, close to the boundary, the asymptotic series

$$ \sum _ {k = 0 } ^ \infty \epsilon ^ {k} v _ {k} ( \xi , x ^ \prime ) $$

is added, where now $ \xi = \rho \epsilon ^ {-} 1 $.

The problem is genuinely complicated if $ a _ {j} ( x) \equiv 0 $, $ b ( x) > 0 $. In this case the solution oscillates strongly; the appropriate asymptotic methods are the WKB method, the semi-classical approximation, the parabolic-equation method, etc.

There is a class of problems in which the boundary of the domain degenerates as $ \epsilon \rightarrow 0 $. To be specific, consider the problem

$$ \tag{4 } ( \Delta + k ^ {2} ) u ( x) = 0,\ \ x \in G _ \epsilon ,\ \ u \mid _ {\partial G _ \epsilon } = \phi ( x), $$

where $ G _ \epsilon $ is the exterior of a bounded domain $ D _ \epsilon $ in $ \mathbf R ^ {n} $ and $ k > 0 $; at infinity the radiation conditions are posed. For example, let $ D _ \epsilon = \epsilon D $, where $ D $ is a fixed domain containing $ x = 0 $; then $ D _ \epsilon $ contracts to the point $ x = 0 $ and (4) has no limit. The quantity $ \lambda = 2 \pi /k $ has the meaning of a wavelength; here $ \lambda \gg d _ \epsilon $, where $ d _ \epsilon $ is the diameter of $ D _ \epsilon $, and one speaks of the scattering of long waves by the obstacle $ D _ \epsilon $ (the hydrodynamic, or Rayleigh, approximation). There are two overlapping zones: the near zone, containing $ \partial G _ \epsilon $, whose size tends to zero as $ \epsilon \rightarrow 0 $, and the far zone, the exterior of a domain contracting to the point $ x = 0 $ as $ \epsilon \rightarrow 0 $. The asymptotic series of the solution has different forms in these zones. The first boundary value problem for $ n = 2 $ turns out to be the most difficult; the inner asymptotic expansion has the form

$$ u ( x, \epsilon ) \sim \ \sum _ {k = 0 } ^ \infty \sum _ {l = 0 } ^ { N } \epsilon ^ {k} \mu ^ {l} v _ {kl} ( \xi ), $$

where $ \xi = \epsilon ^ {-} 1 x $, $ \mu = ( \mathop{\rm ln} \epsilon + \alpha ) ^ {-} 1 $ and $ \alpha $ is a constant (see [9]). The long-wave approximation has been studied mainly for the Helmholtz equation and for the Maxwell system (see [10], [11]).

Another variant arises when $ D _ \epsilon $ contracts to an interval $ L $ as $ \epsilon \rightarrow 0 $; in this case there is a limit problem for $ n = 2 $ but not for $ n > 2 $. Problems of this type (including problems for the Laplace equation, for linear hyperbolic equations and for non-linear partial differential equations) arise in hydrodynamics and aerodynamics and in the theory of diffraction of waves (the flow of a fluid or a gas around a thin body). The problem (4) has been investigated for $ n = 2 $ (see [12]); for $ n = 3 $ it has been investigated when $ k = 0 $ and $ D _ \epsilon $ is a solid of revolution (see [13]).

Partial differential equations containing a small parameter arise naturally in the study of non-linear oscillations when the perturbation has order $ \epsilon $ but the solution is studied on a large time interval of order $ \epsilon ^ {-1} $. If a continuous medium is considered instead of a system of particles, then partial differential equations arise, to which generalizations of the averaging method apply (see [14]).

References

[1] H. Schlichting, "Boundary layer theory" , McGraw-Hill (1955) (Translated from German) MR0076530 Zbl 0065.18901
[2] M. van Dyke, "Perturbation methods in fluid mechanics" , Parabolic Press (1975) Zbl 0329.76002
[3] M.I. Vishik, L.A. Lyusternik, "Regular degeneracy and boundary layer for linear differential equations with a small parameter" Uspekhi Mat. Nauk , 12 : 5 (1957) pp. 3–122 (In Russian)
[4] V.A. Trenogin, "The development and applications of the asymptotic method of Lyusternik and Vishik" Russian Math. Surveys , 25 : 4 (1970) pp. 119–156 Uspekhi Mat. Nauk , 25 : 4 (1970) pp. 123–156 Zbl 0222.35028
[5] A.H. Nayfeh, "Perturbation methods" , Wiley (1973) MR0404788 Zbl 0265.35002
[6] S.A. Lomov, "The method of perturbations for singular problems" Math. USSR-Izv. , 6 : 3 (1972) pp. 631–648 Izv. Akad. Nauk SSSR Ser. Mat. , 36 : 3 (1972) pp. 635–651 Zbl 0283.34055
[7] M.I. Freidlin, A.D. Venttsel', "Random perturbations of dynamical systems" , Springer (1984) (Translated from Russian)
[8] L.S. Pontryagin, A.A. Andronov, A.A. Vitt, Zh. Eksper. i Teoret. Fiz. , 3 : 3 (1933) pp. 165–180
[9] A.M. Il'in, "A boundary value problem for the second order elliptic equation in a domain with a narrow slit. 2. Domain with a small cavity" Math. USSR Sb. , 32 : 2 (1977) pp. 227–244 Mat. Sb. , 103 : 2 (1977) pp. 265–284 Zbl 0396.35033
[10] A.W. Maue, "Theorie der Beugung" S. Flügge (ed.) , Handbuch der Physik , 25/1 , Springer (1961) pp. 218–573
[11] P.M. Morse, H. Feshbach, "Methods of theoretical physics" , 2 , McGraw-Hill (1953) MR0059774 Zbl 0051.40603
[12] A.M. Il'in, "A boundary value problem for the elliptic equation of second order in a domain with a narrow slit. 1. The two-dimensional case" Math. USSR Sb. , 28 : 4 (1976) pp. 459–480 Mat. Sb. , 99 : 4 (1976) pp. 514–537 Zbl 0381.35028
[13] J.D. Cole, "Perturbation methods in applied mathematics" , Blaisdell (1968) MR0246537 Zbl 0162.12602
[14] Yu.A. Mitropol'skii, B.I. Moseenkov, "Asymptotic solutions of partial differential equations" , Kiev (1976) (In Russian)

A.M. Il'in, M.V. Fedoryuk

Comments

References [a1]–[a4] are selected from the large Western literature concerning the general subject of singular perturbations, i.e. (a particular kind of) singular small parameter equations; cf. also (the editorial comments headed "singular perturbations" in) Perturbation theory. Another interesting specific topic in the same area is the Hamilton–Jacobi equation as a limit of parabolic equations with a small parameter [a5].

References

[a1] R.E. O'Malley, "Introduction to singular perturbations" , Acad. Press (1974) Zbl 0287.34062
[a2] J. Kevorkian, J.D. Cole, "Perturbation methods in applied mathematics" , Springer (1981) MR0608029 Zbl 0456.34001
[a3] J.D. Murray, "Asymptotic analysis" , Springer (1984) MR0740864 Zbl 0529.41001
[a4] P.A. Lagerstrom, "Matched asymptotic expansions" , Springer (1988) MR0958913 Zbl 0666.34064
[a5] D.G. Aronson, J.L. Vazquez, "The porous medium equation as a finite-speed approximation to a Hamilton–Jacobi equation" Ann. Inst. H. Poincaré Anal. Non Linéaire , 4 (1987) pp. 203–230 MR898047
How to Cite This Entry:
Small parameter, method of the. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Small_parameter,_method_of_the&oldid=24565
This article was adapted from an original article by N.Kh. Rozov, M.V. Fedoryuk, A.M. Il'in (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article