# Small parameter, method of the

*in the theory of differential equations*

A method for constructing approximate solutions of differential equations and systems depending on a parameter.

## 1. The method of the small parameter for ordinary differential equations.

Ordinary differential equations arising from applied problems usually contain one or more parameters. Parameters may also occur in the initial data or boundary conditions. Since an exact solution of a differential equation can be found only in very special isolated cases, the problem of constructing approximate solutions arises. A typical scenario is the following: the equation and the initial (boundary) conditions contain a parameter $\lambda$, and the solution is known (or may be assumed known) for $\lambda = \lambda _ {0}$; the requirement is to construct an approximate solution for values of $\lambda$ close to $\lambda _ {0}$, that is, to construct an asymptotic solution as $\epsilon \rightarrow 0$, where $\epsilon = \lambda - \lambda _ {0}$ is a "small" parameter. The method of the small parameter originated, e.g., in the three-body problem of celestial mechanics; it goes back to J. d'Alembert and was intensively developed at the end of the 19th century.

The following notations are used below: $t$ is an independent variable, $\epsilon > 0$ is a small parameter, $I$ is an interval $0 \leq t \leq T$, and the sign $\sim$ denotes asymptotic equality. All vector and matrix functions which appear in equations and boundary conditions are assumed to be smooth (of class $C ^ \infty$) with respect to all variables in their domain (with respect to $\epsilon$ for $0 \leq \epsilon \leq \epsilon _ {0}$ or $| \epsilon | \leq \epsilon _ {0}$).

1) The Cauchy problem for an $n$-th order system:

$$\tag{1 } \dot{x} = \ f ( t, x, \epsilon ),\ \ x ( 0) = \ x _ {0} ( \epsilon ).$$

Let the solution $\phi _ {0} ( t)$ of the limit problem (that is, (1) with $\epsilon = 0$) exist and be unique for $t \in I$. Then there is an asymptotic expansion for the solution $x ( t, \epsilon )$ of (1) as $\epsilon \rightarrow 0$,

$$\tag{2 } x ( t, \epsilon ) \sim \ \phi _ {0} ( t) + \sum _ {j = 1 } ^ \infty \epsilon ^ {j} \phi _ {j} ( t),$$

which holds uniformly with respect to $t \in I$. This follows from the theorem on the smooth dependence of the solution of a system of ordinary differential equations on a parameter. If the vector functions $f$ and $x _ {0}$ are holomorphic for $| \epsilon | \leq \epsilon _ {0}$, $x = \phi _ {0} ( t)$, $t \in I$, then the series in (2) converges to the solution $x ( t, \epsilon )$ for sufficiently small $| \epsilon |$, uniformly with respect to $t \in I$ (Poincaré's theorem). Similar results hold for boundary value problems for systems of the form (1), if the solution of the corresponding limit problem exists and is unique.
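As a concrete illustration of the expansion (2), the following sketch (a model problem of my own choosing, not from the article) compares the exact solution of $\dot{x} = -x + \epsilon x ^ {2}$, $x(0) = 1$ with the two-term series $\phi _ {0} + \epsilon \phi _ {1}$; here $\phi _ {0} = e ^ {-t}$ solves the limit problem, and $\phi _ {1} = e ^ {-t} - e ^ {-2t}$ is obtained by substituting the series into the equation and equating coefficients of $\epsilon$.

```python
# Regular perturbation sketch (hypothetical model problem, not from the article):
#   dx/dt = -x + eps*x^2,  x(0) = 1.
# Limit solution: phi0 = exp(-t).  First correction solves
#   phi1' = -phi1 + phi0^2, phi1(0) = 0  =>  phi1 = exp(-t) - exp(-2t).
# The exact solution (Bernoulli equation) is x = 1/(eps + (1 - eps)*exp(t)).
import math

def exact(t, eps):
    return 1.0 / (eps + (1.0 - eps) * math.exp(t))

def series(t, eps):
    phi0 = math.exp(-t)
    phi1 = math.exp(-t) - math.exp(-2.0 * t)
    return phi0 + eps * phi1

eps = 1e-2
# With one correction retained, the uniform error on a finite interval is O(eps^2).
err = max(abs(exact(t, eps) - series(t, eps)) for t in [0.1 * k for k in range(31)])
print(err)
assert err < 10 * eps ** 2
```

The uniform $O(\epsilon ^ {2})$ error is exactly what (2) predicts when the series is truncated after the $\epsilon ^ {1}$ term.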

One distinguishes two forms of dependence of equations (or systems) on a small parameter — regular and singular. A system in normal form depends regularly on $\epsilon$ if all its right-hand sides are smooth functions of $\epsilon$ for small $\epsilon \geq 0$; otherwise the system depends singularly on $\epsilon$. When the system depends regularly on $\epsilon$, the solution of the problem with a parameter, as a rule, converges uniformly on a finite $t$-interval as $\epsilon \rightarrow 0$ to a solution of the limit problem.

2) In the linear theory one considers $n$-th order systems which depend singularly on $\epsilon$:

$$\epsilon \dot{x} = \ A ( t, \epsilon ) x + f ( t, \epsilon ),$$

where the entries of the $( n \times n)$-matrix $A$ and the components of the vector $f$ are complex-valued functions. The central problem in the linear theory is the construction of a fundamental system of solutions of the homogeneous system (that is, for $f \equiv 0$), the asymptotic behaviour of which as $\epsilon \rightarrow 0$ is known throughout the interval $I$.

The basic result of the linear theory is the following theorem of Birkhoff. Let: 1) the eigenvalues $\lambda _ {j} ( t, 0)$, $1 \leq j \leq n$, of $A ( t, 0)$ be distinct for $t \in I$; and 2) the quantities

$$\mathop{\rm Re} ( \lambda _ {j} ( t, 0) - \lambda _ {k} ( t, 0)),\ \ 1 \leq j, k \leq n,\ \ j \neq k,$$

not change sign. Then there is a fundamental system of solutions $x _ {1} ( t, \epsilon ), \dots, x _ {n} ( t, \epsilon )$ of the homogeneous system

$$\epsilon \dot{x} = \ A ( t, \epsilon ) x$$

for which there is the following asymptotic expansion as $\epsilon \rightarrow 0$:

$$\tag{3 } x _ {j} ( t, \epsilon ) \sim \ \mathop{\rm exp} \left [ \epsilon ^ {-1} \int\limits _ {t _ {0} } ^ { t } \lambda _ {j} ( \tau , \epsilon ) \ d \tau \right ] \sum _ {k = 0 } ^ \infty \epsilon ^ {k} \phi _ {kj} ( t),$$

$$1 \leq j \leq n.$$

This expansion is uniform relative to $t \in I$ and can be differentiated any number of times with respect to $t$ and $\epsilon$. If $A$ does not depend on $\epsilon$, that is, $A = A ( t)$, then

$$\phi _ {0j} ( t) = \ \mathop{\rm exp} \left [ - \int\limits _ {t _ {0} } ^ { t } \left ( e _ {j} ^ {*} ( \tau ),\ \frac{de _ {j} ( \tau ) }{d \tau } \right ) \ d \tau \right ] e _ {j} ( t),$$

where $e _ {j} ^ {*}$, $e _ {j}$ are left and right eigenvectors of $A ( t)$ normalized by

$$( e _ {j} ^ {*} ( t), e _ {j} ( t)) \equiv 1,\ \ t \in I.$$

Solutions having asymptotic behaviour of the form (3) are called WKB solutions (see WKB method). The qualitative structure of these solutions is as follows. If

$$\mathop{\rm Re} \lambda _ {j} ( t) < 0 \ \ [ \mathop{\rm Re} \lambda _ {j} ( t) > 0] \ \ \textrm{ for } t \in I,$$

then $x _ {j}$ is a vector function of boundary-layer type for $t _ {0} = 0$ (respectively, $t _ {0} = T$), that is, it is noticeably different from zero only in an $\epsilon$-neighbourhood of $t = 0$ (respectively, $t = T$). If, however, $\mathop{\rm Re} \lambda _ {j} ( t) \equiv 0$, $t \in I$, then $x _ {j}$ oscillates strongly as $\epsilon \rightarrow + 0$ and has order $O ( 1)$ on the whole interval $I$.
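The boundary-layer behaviour just described can be seen in the simplest scalar case (a hypothetical example of my own, not from the article): for a single equation $\epsilon \dot{x} = \lambda ( t) x$ the WKB formula (3) is exact, and for $\mathop{\rm Re} \lambda ( t) < 0$ the solution normalized at $t _ {0} = 0$ is $O(1)$ only in an $\epsilon$-neighbourhood of $t = 0$.

```python
# Scalar illustration of the boundary-layer type: for eps*x' = lam(t)*x with
# lam(t) = -(1 + t) (so Re lam < 0 on I), the WKB formula
#   x(t) = exp(eps^{-1} * int_0^t lam(tau) dtau) = exp(-(t + t^2/2)/eps)
# is the exact solution, and it decays outside an eps-neighbourhood of t = 0.
import math

def wkb(t, eps):
    # phase integral: int_0^t -(1 + tau) dtau = -(t + t^2/2)
    return math.exp(-(t + 0.5 * t * t) / eps)

eps = 1e-2
print(wkb(0.0, eps))   # 1.0 at the left endpoint
print(wkb(eps, eps))   # still O(1) inside the eps-layer
print(wkb(0.5, eps))   # transcendentally small outside the layer

assert wkb(0.0, eps) == 1.0
assert wkb(eps, eps) > 0.3       # exp(-1 - eps/2), still of order one
assert wkb(0.5, eps) < 1e-20     # the layer has width O(eps)
```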

If $A ( t, \epsilon )$ is a holomorphic matrix function for $| t | \leq t _ {0}$, $| \epsilon | \leq \epsilon _ {0}$ and condition 1) is satisfied, then (3) is valid for $\epsilon \rightarrow + 0$, $0 \leq t \leq t _ {1}$, where $t _ {1} > 0$ is sufficiently small. A difficult problem is the construction of asymptotics for fundamental systems of solutions in the presence of turning points on $I$, that is, points at which $A ( t, 0)$ has multiple eigenvalues. This problem has been completely solved only for special types of turning points. In a neighbourhood of a turning point there is a transition region in which the solution is rather complicated; in the simplest case it is expressed by an Airy function (cf. Airy functions).

Similar results are valid for scalar equations of the form

$$\epsilon ^ {n} x ^ {( n) } + \sum _ {j = 0 } ^ { n - 1 } \epsilon ^ {j} a _ {j} ( t, \epsilon ) x ^ {( j) } = 0,$$

where the $a _ {j}$ are complex-valued functions; the roles of the functions $\lambda _ {j} ( t, \epsilon )$ are played by the roots of the characteristic equation

$$\lambda ^ {n} + \sum _ {j = 0 } ^ { n - 1 } \lambda ^ {j} a _ {j} ( t, \epsilon ) = 0.$$

WKB solutions also arise in non-linear systems of the form

$$\epsilon \dot{x} = \ f ( t, x, \epsilon ),\ \ x \in \mathbf R ^ {n} .$$

The WKB asymptotic expansion (3), under the conditions of Birkhoff's theorem, is valid on the infinite interval $0 \leq t < \infty$ (that is, (3) is asymptotic both as $\epsilon \rightarrow 0$ and as $t \rightarrow \infty$) if $A ( t, \epsilon )$ is sufficiently well behaved as $t \rightarrow + \infty$, for example, if it converges rapidly to a constant matrix with distinct eigenvalues. Many questions of spectral analysis and mathematical physics reduce to singular problems with a small parameter.

3) Of particular interest is the investigation of non-linear systems of the form

$$\tag{4 } \left . \begin{array}{c} \epsilon \dot{x} = f ( x, y),\ x ( 0) = x _ {0} ,\ \ x \in \mathbf R ^ {n} , \\ \dot{y} = g( x, y),\ y ( 0) = y _ {0} ,\ \ y \in \mathbf R ^ {m} , \\ \end{array} \right \}$$

where $\epsilon > 0$ is a small parameter. The first equation describes fast motions, the second slow motions. For example, the van der Pol equation reduces by the substitution

$$y = \ \int\limits _ { 0 } ^ { x } ( x ^ {2} - 1) \ dx + \frac{1}{\lambda} \frac{dx }{dt } ,\ \ t _ {1} = \ \frac{t}{\lambda} ,\ \ \epsilon = \ \frac{1}{\lambda ^ {2} } ,$$

for $\lambda$ large, to the system

$$\epsilon \dot{x} = \ y - \frac{1}{3} x ^ {3} + x,\ \ \dot{y} = - x,$$

which is of the form (4).
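A rough numerical check of this fast-slow structure (my own sketch, not part of the original text): integrating the system above by plain RK4 with a small $\epsilon$, a trajectory started off the slow curve $y = x ^ {3}/3 - x$ is attracted in an $O(\epsilon)$ time to its stable branch $| x | > 1$ and then drifts along it.

```python
# Fast-slow structure of eps*x' = y - x^3/3 + x, y' = -x.
# Plain RK4; the step size is chosen well below eps so the fast scale is resolved.
def rk4_step(state, h, eps):
    def rhs(s):
        x, y = s
        return ((y - x ** 3 / 3.0 + x) / eps, -x)
    x, y = state
    k1 = rhs(state)
    k2 = rhs((x + h / 2 * k1[0], y + h / 2 * k1[1]))
    k3 = rhs((x + h / 2 * k2[0], y + h / 2 * k2[1]))
    k4 = rhs((x + h * k3[0], y + h * k3[1]))
    return (x + h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
            y + h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))

eps, h = 0.02, 5e-4
state = (2.0, 0.0)                      # initial point off the slow curve
slow = lambda x: x ** 3 / 3.0 - x       # the curve f(x, y) = 0
gap0 = abs(state[1] - slow(state[0]))   # initial distance from the slow curve
for _ in range(400):                    # integrate to t = 0.2
    state = rk4_step(state, h, eps)
x, y = state
print(x, y, abs(y - slow(x)))
assert abs(x) > 1.0                     # sits on a stable branch (|x| > 1)
assert abs(y - slow(x)) < 0.1 * gap0    # pulled onto the slow curve
```

Continuing the integration past the fold point $x = 1$, $y = -2/3$ would produce the fast jump to the other branch, which is the mechanism of the relaxation oscillation discussed below.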

For $\epsilon = 0$ the equation of fast motion degenerates to the equation $f ( x, y) = 0$. In some closed bounded domain $D$ of the variable $y$, let this equation have an isolated stable continuous root $x = \phi ( y)$ (that is, the real parts of the eigenvalues of the Jacobi matrix $\partial f/ \partial x$ are negative for $x = \phi ( y)$, $y \in D$); suppose that solutions of (4) and of the degenerate problem

$$\tag{5 } x = \phi ( y),\ \ \dot{y} = \ g ( x, y),\ \ y ( 0) = y _ {0} ,$$

exist and are unique for $t \in I$, and suppose that the function $\overline{y}$, obtained as the solution of (5), satisfies $\overline{y} ( t) \in D$ for $t \in I$. If $( x _ {0} , y _ {0} )$ is in the domain of influence of the root $x = \phi ( y)$, then

$$x ( t, \epsilon ) \rightarrow \overline{x} ( t),\ \ 0 < t \leq T,$$

$$y ( t, \epsilon ) \rightarrow \overline{y} ( t),\ \ 0 \leq t \leq T,$$

as $\epsilon \rightarrow 0$, where $( \overline{x} , \overline{y} )$ is the solution of the degenerate problem (Tikhonov's theorem). Close to $t = 0$ the limit transition $x ( t, \epsilon ) \rightarrow \overline{x} ( t)$ is non-uniform — a boundary layer occurs. For problem (4) there is the following asymptotic expansion for the solution:

$$\tag{6 } x ( t, \epsilon ) \sim \ \sum _ {k = 0 } ^ \infty \epsilon ^ {k} x _ {k} ( t) + \sum _ {k = 0 } ^ \infty \epsilon ^ {k} \Pi _ {k} \left ( \frac{t}{\epsilon} \right ) ,$$

and the asymptotic expansion for $y ( t, \epsilon )$ has a similar form. In (6) the first sum is the regular part and the second sum is the boundary layer. The regular part of the asymptotic expansion is calculated by standard means: series of the form (2) are substituted into (4), the right-hand sides are expanded as power series in $\epsilon$, and the coefficients of equal powers of $\epsilon$ are equated. For the calculation of the boundary-layer part of the asymptotic expansion one introduces a new variable $\tau = t/ \epsilon$ (the fast time) in a neighbourhood of $t = 0$ and applies the same procedure. There is an interval on the $t$-axis on which both the regular (or outer) expansion and the boundary-layer (or inner) expansion are valid. The functions $x _ {k}$, $\Pi _ {k}$ are determined by requiring that the two expansions coincide there (the so-called method of matching).
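A minimal worked instance of the leading-order composite expansion (a linear example of my own, not from the article): for $\epsilon \dot{x} = y - x$, $\dot{y} = - x$, $x(0) = 2$, $y(0) = 1$, the root $x = \phi ( y) = y$ is stable, the degenerate problem gives $\overline{x} = \overline{y} = e ^ {-t}$, and the inner (fast-time) equation gives $\Pi _ {0} ( \tau ) = e ^ {- \tau }$. The sketch checks that $x _ {0} ( t) + \Pi _ {0} ( t/ \epsilon )$ approximates the solution uniformly to $O(\epsilon)$.

```python
# Leading-order composite expansion for the linear fast-slow system
#   eps*x' = y - x,  x(0) = 2   (fast; stable root x = phi(y) = y)
#      y' = -x,      y(0) = 1   (slow; degenerate problem y' = -y)
# Regular part: xbar(t) = ybar(t) = exp(-t).
# Boundary layer: dPi0/dtau = -Pi0, Pi0(0) = x(0) - xbar(0) = 1, so Pi0 = exp(-tau).
import math

def numeric(eps, h, t_end):
    # RK4 on the full system; returns the sampled fast variable x(t)
    x, y, t, out = 2.0, 1.0, 0.0, []
    def rhs(x, y):
        return ((y - x) / eps, -x)
    while t < t_end - 1e-12:
        k1 = rhs(x, y)
        k2 = rhs(x + h / 2 * k1[0], y + h / 2 * k1[1])
        k3 = rhs(x + h / 2 * k2[0], y + h / 2 * k2[1])
        k4 = rhs(x + h * k3[0], y + h * k3[1])
        x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        y += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        t += h
        out.append((t, x))
    return out

def composite(t, eps):
    return math.exp(-t) + math.exp(-t / eps)   # xbar(t) + Pi0(t/eps)

def err(eps):
    return max(abs(x - composite(t, eps)) for t, x in numeric(eps, 1e-4, 1.0))

e1, e2 = err(0.02), err(0.005)
print(e1, e2)
assert e1 < 0.1            # uniformly O(eps) on [0, 1]
assert e2 < 0.5 * e1       # error shrinks roughly linearly in eps
```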

Similar results hold when the right-hand side of (4) depends explicitly on $t$, for scalar equations of the form

$$\tag{7 } \epsilon x ^ {( n) } = \ f ( t, x, \dot{x}, \dots, x ^ {( n - 1) } )$$

and for boundary value problems for such systems and equations (see Differential equations with small parameter).

For approximation of the solution of (4) near a break point, where stability is lost (for example, where one of the eigenvalues of $\partial f/ \partial x$ for $x = \phi ( y)$ vanishes), series of the form (6) lose their asymptotic character. In a neighbourhood of a break point the asymptotic expansion has quite a different character. The investigation of a neighbourhood of a break point is particularly essential for the construction of the asymptotic theory of relaxation oscillations (cf. Relaxation oscillation).

4) Problems in celestial mechanics and non-linear oscillation theory lead, in particular, to the necessity of investigating the behaviour of solutions of (1) not on a finite interval but on a large $t$-interval, of order $\epsilon ^ {-1}$ or higher. For such problems a method of averaging is widely applied (see Krylov–Bogolyubov method of averaging; Small denominators).

5) The asymptotic behaviour of solutions of equations of the form (7) has been investigated, in particular, with the help of the so-called method of multiple scales; this method is a generalization of the WKB method. The method is illustrated here by the scalar equation

$$\tag{8 } \epsilon ^ {2} \ddot{x} + f ( t, x) = 0,$$

which has a periodic solution. The solution is sought in the form

$$\tag{9 } x = \phi ( T, t, \epsilon ) \sim \ \sum _ {j = 0 } ^ \infty \epsilon ^ {j} \phi _ {j} ( T, t),\ \ T = \frac{S ( t) }{\epsilon } .$$

(The functions $T, t$ are called the scales.) If (8) is linear, then $\phi _ {j} = e ^ {T} \psi _ {j} ( t)$ and (9) is a WKB solution. In the non-linear case the equations of the first two approximations take the form

$$\dot{S} ^ {2} \frac{\partial ^ {2} \phi _ {0} }{\partial T ^ {2} } + f ( t, \phi _ {0} ) = 0,$$

$$\dot{S} ^ {2} \frac{\partial ^ {2} \phi _ {1} }{\partial T ^ {2} } + \frac{\partial f ( t, \phi _ {0} ) }{\partial x } \phi _ {1} = - 2 \dot{S} \frac{\partial ^ {2} \phi _ {0} }{\partial t \partial T } - \ddot{S} \frac{\partial \phi _ {0} }{\partial T } ,$$

where the first equation contains two unknown functions $S$ and $\phi _ {0}$. Let this equation have a solution $\phi _ {0} = \phi _ {0} ( t, T)$ periodic in $T$. Then the missing equation, from which $S$ is to be determined, is found from the requirement that $\phi _ {1}$ be periodic in $T$, and has the form

$$\dot{S} \oint \left ( \frac{\partial \phi _ {0} ( t, T) }{\partial T } \right ) ^ {2} dT = \ E \equiv \textrm{ const } ,$$

where the integral is taken over a period in $T$ of $\phi _ {0}$.
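For the linear special case $f ( t, x) = \omega ^ {2} ( t) x$ of (8) (my own check, not in the original), the first equation gives $\dot{S} = \omega$, and the periodicity condition forces the amplitude of $\phi _ {0}$ to be proportional to $\omega ^ {-1/2}$; equivalently, the "adiabatic invariant" $E ( t)/ \omega ( t)$ with $E = \frac{1}{2} ( \epsilon ^ {2} \dot{x} {} ^ {2} + \omega ^ {2} x ^ {2} )$ is constant up to $O(\epsilon)$. The sketch below verifies this numerically.

```python
# Linear special case of (8): eps^2 x'' + w(t)^2 x = 0 with slowly varying w.
# Multiple scales give S' = w and amplitude ~ w^{-1/2}, i.e. E/w is conserved
# up to O(eps), where E = (eps^2 x'^2 + w^2 x^2)/2.  Integrated by RK4 with a
# step resolving the fast oscillation (period ~ 2*pi*eps/w).
eps = 0.01
w = lambda t: 1.0 + t                  # slowly varying frequency

def rhs(t, x, v):
    return (v, -(w(t) ** 2) * x / eps ** 2)

def energy(t, x, v):
    return 0.5 * (eps ** 2 * v ** 2 + w(t) ** 2 * x ** 2)

x, v, t, h = 1.0, 0.0, 0.0, 5e-4
inv0 = energy(t, x, v) / w(t)          # adiabatic invariant at t = 0
for _ in range(2000):                  # integrate to t = 1
    k1 = rhs(t, x, v)
    k2 = rhs(t + h / 2, x + h / 2 * k1[0], v + h / 2 * k1[1])
    k3 = rhs(t + h / 2, x + h / 2 * k2[0], v + h / 2 * k2[1])
    k4 = rhs(t + h, x + h * k3[0], v + h * k3[1])
    x += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    v += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += h
inv1 = energy(t, x, v) / w(t)
print(inv0, inv1)
assert abs(inv1 - inv0) / inv0 < 0.05  # E/w conserved up to O(eps)
```

In this linear case $\phi _ {0} = A ( t) \cos T$ with $\omega A ^ {2} = \textrm{const}$ reproduces the condition $\dot{S} \oint ( \partial \phi _ {0} / \partial T ) ^ {2} dT = \textrm{const}$ above.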

How to Cite This Entry:
Small parameter, method of the. *Encyclopedia of Mathematics.* URL: http://encyclopediaofmath.org/index.php?title=Small_parameter,_method_of_the&oldid=49590
This article was adapted from an original article by N.Kh. Rozov, M.V. Fedoryuk, A.M. Il'in (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.