Variational numerical methods
(for differential equations)
Among the principles that appear in physics there are some which are expressed not in terms of differential equations but in variational form. These principles are called variational principles and they describe the conditions under which certain quantities attain extreme (i.e. maximum or minimum) values. Examples of such principles are Hamilton's principle and the principle of least action in classical mechanics (cf. also Principle of least action) and Fermat's principle in geometric optics; other examples can be found in mathematical physics, structural mechanics, fluid dynamics, heat transfer, etc.
The calculus of variations (cf. Variational calculus) is concerned with the problem of extreme values of functionals, which are real- or complex-valued functions defined on a subset of a function space. This subset consists of functions defined on a certain domain. An element of such a subset is called an admissible function (or argument function). General attention was first drawn to problems of this type in 1696 by Johann Bernoulli's statement of the brachistochrone problem (cf. Brachistochrone).
The fundamental criterion in the calculus of variations is as follows: A necessary and sufficient condition that the integral
$$ J( v) = \int\limits _ {x _ {0} } ^ {x _ {1} } F( x, v, v _ {x} ) dx, $$
with
$$ v _ {x} = \frac{dv }{dx } , $$
be stationary when $ v = u $ is that $ v = u $ be an admissible function satisfying the Euler equation
$$ \frac{\partial F }{\partial v } - \frac{d }{dx } \left ( \frac{\partial F }{\partial v _ {x} } \right ) = 0. $$
Its solutions are called extremals and to solve the problem one has, among all the extremals, to find one which satisfies prescribed boundary conditions.
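For example, for

$$ F( x, v, v _ {x} ) = \sqrt {1 + v _ {x} ^ {2} } , $$

so that $ J( v) $ is the length of the curve $ v $ between $ ( x _ {0} , v( x _ {0} )) $ and $ ( x _ {1} , v( x _ {1} )) $, one has $ \partial F/ \partial v = 0 $ and the Euler equation reduces to

$$ \frac{d }{dx } \left ( \frac{v _ {x} }{\sqrt {1 + v _ {x} ^ {2} } } \right ) = 0, $$

i.e. $ v _ {x} $ is constant: the extremals are the straight lines, as one expects for the problem of the shortest curve joining two points.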
The problem of finding the extreme values of an integral can be extended to the case of several independent variables and multiple integrals of the type
$$ J( v) = \int\limits F( x _ {1} \dots x _ {n} , v, v _ {x _ {1} } \dots v _ {x _ {n} } ) d \Omega , $$
with
$$ v _ {x _ {i} } = \frac{\partial v }{\partial x _ {i} } ,\ \ d \Omega = dx _ {1} \dots dx _ {n} . $$
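In this case the Euler equation takes the form

$$ \frac{\partial F }{\partial v } - \sum _ {i=1} ^ { n } \frac{\partial }{\partial x _ {i} } \left ( \frac{\partial F }{\partial v _ {x _ {i} } } \right ) = 0. $$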
The calculus of variations discusses the correspondence between the functional $ J( v) $ to be extremized and the Euler equation for the solution $ u $. A solution of an extremal problem can thus be studied by looking at the corresponding Euler equation. Conversely, when a differential equation is given, a functional whose Euler equation is the given differential equation can sometimes be constructed. In this case the extremal problem and the Euler equation are in some sense equivalent. To solve Euler's equation, boundary conditions (for instance, of Dirichlet- or Neumann-type) have to be prescribed. Those boundary conditions which are imposed on the admissible functions are called essential (e.g. Dirichlet-type), those which are automatically satisfied by the solution of the extremal problem and which are not explicitly imposed on the set of admissible functions are called natural (e.g. Neumann-type).
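For example, in the one-dimensional problem above, if only the condition $ v( x _ {0} ) = u _ {0} $ is imposed on the admissible functions, then a solution of the extremal problem automatically satisfies the natural boundary condition

$$ \left . \frac{\partial F }{\partial v _ {x} } \right | _ {x = x _ {1} } = 0; $$

for $ F = \frac{1}{2} v _ {x} ^ {2} - fv $ this is the Neumann condition $ u _ {x} ( x _ {1} ) = 0 $.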
As an example, let $ \Omega $ be an open bounded domain in the $ n $-dimensional space $ \mathbf R ^ {n} $ and let $ f \in L _ {2} ( \Omega ) $ be a real-valued function. Consider the problem of minimizing the functional
$$ J( v) = \frac{1 }{2} \int\limits _ \Omega | \mathop{\rm grad} v | ^ {2} d \Omega - \int\limits _ \Omega fv d \Omega . $$
The set of admissible functions is the Hilbert space obtained by completing the space of infinitely-differentiable functions with compact support in $ \Omega $ with respect to the Sobolev norm
$$ \| v \| _ {1} = \left \{ \int\limits _ \Omega | \mathop{\rm grad} v | ^ {2} d \Omega + \int\limits _ \Omega v ^ {2} d \Omega \right \} ^ {1/2 } . $$
It can be shown, using the Riesz representation theorem (cf. Riesz theorem), that the extremal problem is equivalent (in the generalized, distributional sense) to the following Euler equation for $ u $ (i.e. the Poisson equation with Dirichlet boundary conditions):
$$ - \mathop{\rm div} \mathop{\rm grad} u = f \ \mathop{\rm in} \Omega , $$
$$ u = 0 \ \mathop{\rm on} \partial \Omega = \textrm{ boundary } \textrm{ of } \Omega . $$
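Indeed, setting the first variation of $ J $ at $ u $ equal to zero gives

$$ \int\limits _ \Omega \mathop{\rm grad} u \cdot \mathop{\rm grad} v d \Omega - \int\limits _ \Omega fv d \Omega = 0 $$

for all admissible $ v $; since $ v $ vanishes on $ \partial \Omega $, integration by parts (Green's formula) turns the first integral into $ - \int _ \Omega ( \mathop{\rm div} \mathop{\rm grad} u) v d \Omega $, and the Poisson equation follows.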
There are advantages in solving the extremal formulation (if it exists) instead of the differential equation: some boundary conditions are automatically fulfilled, and the admissible functions are subject to lower-order differentiation requirements.
Consider the problem of minimizing the functional $ J $ over a function space $ V $, and let $ \phi _ {1} \dots \phi _ {N} ,\dots $ be a basis for $ V $. Let $ V _ {N} $ be an approximation of the space $ V $ spanned by the first $ N $ basis functions $ \phi _ {1} \dots \phi _ {N} $. The approximate problem is defined as: Find $ u _ {N} \in V _ {N} $ minimizing the functional $ J $ on $ V _ {N} $.
Since $ u _ {N} $ can be written as
$$ u _ {N} = \sum _ {i= 1 } ^ { N } \alpha _ {i} \phi _ {i} , $$
with unknowns $ \alpha _ {1} \dots \alpha _ {N} $, the approximate problem reduces to: Find $ \{ \alpha _ {1} \dots \alpha _ {N} \} $ such that $ J( \alpha _ {1} \dots \alpha _ {N} ) $ attains its extremum.
This leads to the following $ N $ (necessary) conditions for the $ N $ unknowns $ \alpha _ {1} \dots \alpha _ {N} $:
$$ \frac{\partial J( \alpha _ {1} \dots \alpha _ {N} ) }{\partial \alpha _ {i} } = 0,\ i = 1 \dots N. $$
The original extremal problem has been reduced to a system of $ N $ equations for $ N $ unknowns, which can be solved numerically. The accuracy of the approximate solution $ u _ {N} $ depends on the number $ N $ of basis functions. Generally, as $ N $ increases the accuracy improves, i.e. $ u _ {N} $ converges to $ u $.
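As an illustration, the following Python sketch carries out this procedure for the one-dimensional analogue of the Poisson example above, $ - u _ {xx} = f $ on $ ( 0, 1) $ with $ u( 0) = u( 1) = 0 $, using the sine basis $ \phi _ {i} ( x) = \sin ( i \pi x) $; the problem, basis and function names are chosen for this example only and are not part of the general method.

```python
import numpy as np

def trapezoid(y, h):
    """Trapezoidal rule on a uniform grid with spacing h."""
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def ritz_sine(f, N, M=2000):
    """Ritz approximation of -u'' = f on (0,1), u(0) = u(1) = 0, using the
    basis phi_i(x) = sin(i*pi*x).  For this quadratic functional the
    conditions dJ/d(alpha_i) = 0 form the linear system A alpha = b with
    A[i,j] = int phi_i' phi_j' dx and b[i] = int f phi_i dx."""
    x = np.linspace(0.0, 1.0, M + 1)
    h = 1.0 / M
    k = np.arange(1, N + 1)
    # By orthogonality of the cosines, A is diagonal: A[i,i] = (i*pi)^2 / 2.
    A = np.diag((k * np.pi) ** 2 / 2.0)
    b = np.array([trapezoid(f(x) * np.sin(i * np.pi * x), h) for i in k])
    alpha = np.linalg.solve(A, b)
    return lambda t: sum(a * np.sin(i * np.pi * t) for a, i in zip(alpha, k))

# Check with f(x) = pi^2 sin(pi x), whose exact solution is u(x) = sin(pi x).
u_N = ritz_sine(lambda x: np.pi ** 2 * np.sin(np.pi * x), N=5)
print(u_N(0.5))   # approximately 1.0
```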
The method that constructs a sequence $ u _ {1} , u _ {2} ,\dots $ by determining the values of $ \alpha _ {i} $, $ i = 1, 2 ,\dots, $ is called the Ritz method. Inclusion of $ V _ {m} $ in $ V _ {m+1} $ is not essential; for convergence the approximate spaces must satisfy the following density property: $ \cup _ {m=1} ^ \infty V _ {m} $ is dense in $ V $.
The Galerkin method, which can be considered as a generalization of the Ritz method, is directly applicable to the differential equation, irrespective of the existence of an equivalent extremal formulation. Galerkin's method can be illustrated as follows: Consider the following linear second-order differential equation with (essential) Dirichlet boundary conditions (a boundary value problem):
$$ {\mathcal L} u = f \ \mathop{\rm in} \Omega \subset \mathbf R ^ {n} , $$
$$ u = 0 \ \mathop{\rm on} \partial \Omega . $$
The solution is sought in a function space $ V $ which is supposed to have a basis $ \{ \phi _ {1} \dots \phi _ {N} ,\dots \} $. Any element $ w \in V $ can be written as an infinite linear combination of these basis functions. For arbitrary $ w \in V $, the expression $ {\mathcal L} w - f = R( w) $ is called the residual. The problem is to find an element $ u \in V $ for which the residual vanishes. This amounts to saying that the solution $ u $ is characterized by the following projection requirements:
$$ \int\limits _ \Omega ( {\mathcal L} u - f ) \phi _ {i} d \Omega = 0,\ \ i = 1, 2,\dots. $$
Reduction of the order of differentiation by Green's formula and substitution of the boundary conditions leads to the following variational (or weak) formulation of the boundary value problem: Find $ u \in V $ such that
$$ a( u, \phi _ {i} ) = \int\limits _ \Omega f \phi _ {i} d \Omega ,\ i = 1, 2 ,\dots, $$
where $ a( u, \phi ) $ is a bilinear form on $ V \times V $. When $ {\mathcal L} $ is, for instance, minus the Laplace operator (as in the example above), $ a( u, \phi ) $ is given by
$$ a( u, \phi ) = \int\limits _ \Omega \mathop{\rm grad} u \cdot \mathop{\rm grad} \phi d \Omega . $$
The Galerkin method consists of taking a finite-dimensional subspace $ V _ {N} $ of $ V $ spanned by $ N $ basis functions $ \phi _ {1} \dots \phi _ {N} $. The approximate finite-dimensional problem is defined as: Find $ u _ {N} \in V _ {N} $ such that
$$ a( u _ {N} , \phi _ {i} ) = \int\limits _ \Omega f \phi _ {i} d \Omega , \ i = 1 \dots N. $$
Since $ u _ {N} \in V _ {N} $, it has the form
$$ u _ {N} = \sum_{j=1} ^ { N } \alpha _ {j} \phi _ {j} , $$
and the approximate problem is equivalent to the following system of $ N $ linear algebraic equations for the unknowns $ \alpha _ {1} \dots \alpha _ {N} $:
$$ \sum _ {j=1} ^ { N } \alpha _ {j} a( \phi _ {i} , \phi _ {j} ) = \ \int\limits _ \Omega f \phi _ {i} d \Omega ,\ i = 1 \dots N . $$
This system can be solved by Gaussian elimination or by iteration.
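As an illustration of the fact that the Galerkin method does not require an extremal formulation, the following Python sketch treats the non-symmetric problem $ {\mathcal L} u = - u _ {xx} + u _ {x} = f $ on $ ( 0, 1) $ with $ u( 0) = u( 1) = 0 $; the problem, basis and quadrature are chosen for this example only.

```python
import numpy as np

# Galerkin method for L u = -u'' + u' = f on (0,1), u(0) = u(1) = 0, with the
# sine basis; the weak form reads
#   a(u, phi_i) = int (u' phi_i' + u' phi_i) dx = int f phi_i dx.
N, M = 8, 2000
x = np.linspace(0.0, 1.0, M + 1)
h = 1.0 / M
k = np.arange(1, N + 1)
phi = np.sin(np.outer(k, np.pi * x))              # phi_i sampled on the grid
dphi = (k * np.pi)[:, None] * np.cos(np.outer(k, np.pi * x))

def trapezoid(y):
    """Trapezoidal rule over the grid x."""
    return h * (np.sum(y) - 0.5 * (y[0] + y[-1]))

A = np.empty((N, N))
for i in range(N):
    for j in range(N):    # entry a(phi_j, phi_i), assembled by quadrature
        A[i, j] = trapezoid(dphi[j] * dphi[i] + dphi[j] * phi[i])
b = np.array([trapezoid(np.ones_like(x) * phi[i]) for i in range(N)])   # f = 1
alpha = np.linalg.solve(A, b)     # Gaussian elimination
u_N = alpha @ phi                 # approximate solution on the grid
print(u_N[M // 2])                # value of u_N at x = 1/2
```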
When the coefficients of the differential operator $ {\mathcal L} $ and the function $ f $ are periodic in space and when sine or cosine basis functions are chosen, the Galerkin method is equivalent to the Fourier method and can be efficiently implemented using a fast Fourier transform (cf. Fourier transform, discrete). The Galerkin method can also be applied to the numerical solution of evolution equations, leading to a system of ordinary differential equations in time which can be solved numerically by integration techniques.
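A minimal sketch of this Fourier variant in Python, assuming the model problem $ - u _ {xx} = f $ on $ [ 0, 2 \pi ) $ with periodic boundary conditions; in the Fourier basis the Galerkin system is diagonal, so each coefficient is obtained independently.

```python
import numpy as np

# Fourier (spectral-Galerkin) sketch for -u'' = f on [0, 2*pi) with periodic
# boundary conditions: in the basis exp(i*k*x) the Galerkin equations
# decouple into k^2 * u_hat(k) = f_hat(k), evaluated here with the FFT.
M = 64
x = 2.0 * np.pi * np.arange(M) / M
f = np.cos(3.0 * x)                          # sample right-hand side
k = np.fft.fftfreq(M, d=1.0 / M)             # integer wave numbers
f_hat = np.fft.fft(f)
u_hat = np.zeros_like(f_hat)
nonzero = k != 0
u_hat[nonzero] = f_hat[nonzero] / k[nonzero] ** 2   # k = 0: mean fixed to 0
u = np.real(np.fft.ifft(u_hat))
print(np.max(np.abs(u - np.cos(3.0 * x) / 9.0)))    # error near machine precision
```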
Both the Ritz method and the Galerkin method transform the infinite-dimensional problem into a (finite-dimensional) system of linear equations. The finite-element method is a systematic technique for generating basis functions on arbitrary domains $ \Omega $ which gives rise to a sparse matrix $ A $ (i.e. a matrix with many zero entries). The finite-element method consists of three steps:
The first step is to divide the domain $ \overline \Omega = \Omega \cup \partial \Omega \subset \mathbf R ^ {n} $ into a finite number of non-overlapping subregions (elements) whose diameter does not exceed $ h > 0 $. Standard examples of elements are subintervals ($ n = 1 $), triangles ($ n = 2 $) and tetrahedra ($ n = 3 $).
In the second step, a finite number of points (called nodal points) are chosen in each element. Examples are: the end-points of the subintervals, the vertices of the triangles, or the vertices of the tetrahedra. More complicated examples are the end- and mid-points of the subintervals, or the vertices and the mid-points of the sides of the triangles. The (finite) set of all nodal points will be denoted by $ {\mathcal P} = \{ P _ {1} \dots P _ {N} \} $.
In the third step, a continuous basis function $ \phi _ {i} $ is defined on $ \overline \Omega $ for each nodal point $ P _ {i} \in {\mathcal P} $. These basis functions satisfy the following properties: 1) $ \phi _ {i} $ takes the value $ 1 $ at the nodal point $ P _ {i} $ and vanishes at all the other nodal points; and 2) $ \phi _ {i} $ has a prescribed shape on each element (depending on the number of nodal points per element). For example, in the $ n $-dimensional case: linear basis functions with $ n+ 1 $ nodal points per element or quadratic basis functions with $ ( n+ 1)( n+ 2)/2 $ nodal points per element.
Using these basis functions an approximate solution $ u _ {N} $ can be constructed. In fact, the function
$$ u _ {N} ( x) = \sum _ {j= 1 } ^ { N } \widetilde{u} _ {j} \phi _ {j} ( x) $$
is a linear combination of the basis functions $ \phi _ {j} $, continuous on $ \overline \Omega $, and with prescribed behaviour on the elements. Due to the particular construction of the basis functions, the parameter $ \widetilde{u} _ {j} $ is precisely the value of $ u _ {N} $ at the nodal point $ P _ {j} $.
The resulting system of linear equations can be written as
$$ \sum _ {j=1} ^ { N } \widetilde{u} _ {j} a( \phi _ {i} , \phi _ {j} ) = \ \int\limits _ \Omega f \phi _ {i} d \Omega ,\ i = 1 \dots N, $$
where the matrix entry $ a( \phi _ {i} , \phi _ {j} ) $ differs from zero only if the nodal points $ P _ {i} $ and $ P _ {j} $ belong to the same element. This implies that the matrix (often called the stiffness matrix) is sparse.
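A minimal one-dimensional sketch in Python, assuming the model problem $ - u _ {xx} = f $ on $ ( 0, 1) $ with $ u( 0) = u( 1) = 0 $, piecewise-linear basis functions, a uniform grid and a simple one-point quadrature for the load vector (so the stiffness matrix is tridiagonal):

```python
import numpy as np

# Finite-element sketch for -u'' = f on (0,1), u(0) = u(1) = 0: the elements
# are the subintervals of a uniform grid, the nodal points are the interior
# grid points, and phi_i is the piecewise-linear "hat" function with
# phi_i(P_j) = 1 if i == j and 0 otherwise.
N = 9                                       # number of interior nodal points
h = 1.0 / (N + 1)                           # element length
nodes = np.linspace(h, 1.0 - h, N)
# a(phi_i, phi_j) = int phi_i' phi_j' dx is nonzero only when P_i and P_j
# share an element, so the stiffness matrix is tridiagonal (sparse):
# 2/h on the diagonal, -1/h on the off-diagonals.
A = (2.0 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)) / h
f = lambda x: np.pi ** 2 * np.sin(np.pi * x)
b = h * f(nodes)                            # load vector, one-point quadrature
u_tilde = np.linalg.solve(A, b)             # nodal values u_N(P_j)
print(np.max(np.abs(u_tilde - np.sin(np.pi * nodes))))   # error of order h^2
```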
The convergence of $ u _ {N} $ to $ u $ can be proved if the set of elements satisfies certain regularity properties. To improve the accuracy, polynomials of higher degree per element are used. Curved boundaries can be taken into account by using isoparametric elements.
Generalizations and extensions of the finite-element method exist to problems of higher order (for instance, mixed finite elements for the biharmonic equation, cf. Biharmonic function), to problems with specific difficulties (for instance, non-conforming finite elements for the Navier–Stokes equations for incompressible fluids and mixed-hybrid finite elements in stress analysis, electrostatics and ground water hydraulics) or to evolution problems (mass lumping and consistent mass schemes).
References
[a1] P.G. Ciarlet, "The finite element method for elliptic problems", North-Holland (1978)
[a2] R. Courant, "Variational methods for the solution of problems of equilibrium and vibrations" Bull. Amer. Math. Soc., 49 (1943) pp. 1–23
[a3] R. Courant, D. Hilbert, "Methods of mathematical physics", 1–2, Interscience (1953–1962) (Translated from German)
[a4] C. Cuvelier, A. Segal, A.A. van Steenhoven, "Finite element methods and Navier–Stokes equations", Reidel (1986)
[a5] I.M. Gel'fand, S.V. Fomin, "Calculus of variations", Prentice-Hall (1963) (Translated from Russian)
[a6] R. Glowinski, J.-L. Lions, R. Trémolières, "Numerical analysis of variational inequalities", North-Holland (1981) (Translated from French)
[a7] S.G. Mikhlin, "Variational methods in mathematical physics", Macmillan (1964) (Translated from Russian)
[a8] S.G. Mikhlin, "The problem of the minimum of a quadratic functional", Holden-Day (1965) (Translated from Russian)
[a9] S.L. Sobolev, "Applications of functional analysis in mathematical physics", Amer. Math. Soc. (1963) (Translated from Russian)
[a10] G. Strang, G.J. Fix, "An analysis of the finite element method", Prentice-Hall (1973)
[a11] R. Temam, "Navier–Stokes equations, theory and numerical analysis", North-Holland (1979)
[a12] O.C. Zienkiewicz, "The finite element method in engineering science", McGraw-Hill (1977)