
Fredholm equation, numerical methods



Approximation methods for solving Fredholm integral equations of the second kind, amounting to performing a finite number of operations on numbers.

Let

$$ \tag{1 } \phi ( x) - \lambda \int\limits _ { D } K ( x, s) \phi ( s) \ ds = f ( x) $$

be a Fredholm integral equation of the second kind, where $ \lambda $ is a complex number, $ f ( x) $ is a known vector function, $ \phi ( x) $ is an unknown vector function, $ K ( x, s) $ is the kernel of equation (1), and $ D $ is a domain in some $ m $-dimensional Euclidean space. Below it is assumed that $ \lambda $ does not belong to the spectrum of the integral operator with kernel $ K $ (that is, for the given $ \lambda $ equation (1) has a unique solution in some class of functions corresponding to the degree of smoothness of $ K $). The expression (1) naturally includes the case of a system of Fredholm equations.

For a general description of the problems of constructing and investigating numerical methods for solving Fredholm equations of the second kind one uses the language of functional analysis. The integral equation (1) can be written as a linear operator equation

$$ \tag{2 } ( E - \lambda A) \phi = f, $$

where $ \phi $ is an unknown element of some Banach space $ \Phi $, $ f $ is a given element of $ \Phi $ and $ A $ is a bounded linear operator from $ \Phi $ to $ \Phi $. The operator $ E - \lambda A $ is assumed to act invertibly from $ \Phi $ to $ \Phi $. The plan of any numerical method for solving (1) consists of the following. Let $ \widetilde \Phi $ be a Banach space in some way associated with $ \Phi $, but, generally speaking, different from it, and let $ \widetilde{A} $ be a linear operator from $ \widetilde \Phi $ to $ \widetilde \Phi $. The equation

$$ \tag{3 } ( E - \lambda \widetilde{A} ) \widetilde \phi = \widetilde{f} $$

is called an approximating equation for (2). The approximating operator $ \widetilde{A} $ is usually chosen in such a way that either it may be possible to compute $ \widetilde \phi $ directly from (3), or (more commonly) that one may find an approximate solution of (3) of the form

$$ \tag{4 } \widetilde{\widetilde \phi } = \psi ( \widetilde{A} , \widetilde{f} ) $$

such that the right-hand side of (4) may be computed in a finite number of arithmetical operations. The expression $ \psi ( \widetilde{A} , \widetilde{f} ) $ denotes performing certain actions on $ \widetilde{A} $ and $ \widetilde{f} $; in particular, $ \psi $ may be a simple operator function of $ \widetilde{A} $ (for example, $ \psi ( \widetilde{A} , \widetilde{f} ) = ( E - \lambda \widetilde{A} ) ^ {-1} \widetilde{f} $). The choice of $ \widetilde{A} $, $ \psi $ and $ \widetilde{f} $, and also of the space $ \widetilde \Phi $, is subject to the requirement that $ \widetilde{\widetilde \phi } $ and the exact solution $ \phi $ of (1), (2) be (in some sense) close, and is not, generally speaking, unique. Similarly, for a concrete numerical method (a concrete approximation formula for $ A $) the choice of $ \widetilde \Phi $ is not unique. The concrete choice of $ \Phi $ and $ \widetilde \Phi $ is dictated by the requirements of "closeness" of $ \widetilde{\widetilde \phi } $ and $ \phi $, and also by the convenience of the investigation. The specific nature of numerical methods for solving Fredholm equations of the second kind is determined mainly by one or other concrete approximation of $ A $ by means of $ \widetilde{A} $. Thus an approximation method usually also gives its name to the corresponding numerical method for solving (1).

After $ \Phi $, $ \widetilde \Phi $, $ \widetilde{A} $, and $ \widetilde{f} $ have been chosen, the closeness of $ \phi $ and $ \widetilde \phi $ (or $ \widetilde{\widetilde \phi } $) is established using theorems from the general theory of approximation methods for solving operator equations.

In the case $ \Phi = \widetilde \Phi $, in order to establish that $ \widetilde \phi $ and $ \phi $ are close, it is sufficient to prove that $ \| \widetilde{A} - A \| $ is small. For a suitable choice of $ \Phi $, this can be successfully done for most classical methods for approximately solving Fredholm equations of the second kind.

In the majority of concrete methods the solution of equation (3) can easily be reduced to the solution of a system of linear algebraic equations; in order to construct $ \psi $ in (4), one can make use of some algorithm for solving systems of linear algebraic equations (see Linear algebra, numerical methods in).

The principal methods for constructing approximating operators are:

Quadrature methods and generalizations of them. These are the most-widespread methods for approximating the integral operator $ A $ in (1). The basis of these methods, which can be applied in the case of continuous $ K ( x, s) $, $ \phi ( s) $ and $ f ( x) $, consists in replacing the integral (with respect to $ s $) in (1) by some quadrature formula with respect to a grid of nodes $ \{ s _ {i} \} \in D $.

In this case

$$ \tag{5 } \widetilde{A} y = \ \sum _ {i = 1 } ^ { N } a _ {i} ^ {(N)} K ( x, s _ {i} ) y ( s _ {i} ), $$

where $ a _ {i} ^ {(N)} $ are the coefficients of the quadrature formula.

The approximating equation (3) can be regarded as an operator equation in the same space as the basic equation (1) (for example, in the space $ C ( D) $ of continuous vector functions on $ D $). In this case it will have the form

$$ \tag{6 } \widetilde \phi ( x) - \lambda \sum _ {i = 1 } ^ { N } a _ {i} ^ {(N)} K ( x, s _ {i} ) \widetilde \phi ( s _ {i} ) = f ( x). $$

Equation (6) can be reduced to a system of linear algebraic equations in $ \widetilde \phi ( s _ {i} ) $, $ i = 1 \dots N $:

$$ \tag{7 } \widetilde \phi ( s _ {j} ) - \lambda \sum _ {i = 1 } ^ { N } a _ {i} ^ {(N)} K ( s _ {j} , s _ {i} ) \widetilde \phi ( s _ {i} ) = f ( s _ {j} ),\ \ j = 1 \dots N. $$

The solution (exact or approximate) of the system (7) gives $ \widetilde{\widetilde \phi } $.
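As an illustration, the following sketch assembles and solves the system (7) and then uses (6) to extend the solution, for the model case of an interval $ [ a, b] \subset \mathbf R $ with the composite trapezoidal rule as quadrature formula; the kernel, right-hand side, interval and value of $ \lambda $ in the example are hypothetical and only indicate the intended usage.

```python
import numpy as np

def quadrature_solve(K, f, lam, a, b, N):
    """Quadrature (Nystroem) method for phi(x) - lam*int_a^b K(x,s) phi(s) ds = f(x).
    Builds the linear system (7) on trapezoidal nodes and returns a function
    implementing the natural interpolation formula (6)."""
    s = np.linspace(a, b, N)                   # quadrature nodes s_i
    w = np.full(N, (b - a) / (N - 1))          # trapezoidal coefficients a_i^(N)
    w[0] *= 0.5
    w[-1] *= 0.5

    # System (7): phi(s_j) - lam * sum_i a_i^(N) K(s_j, s_i) phi(s_i) = f(s_j)
    M = np.eye(N) - lam * K(s[:, None], s[None, :]) * w[None, :]
    phi_nodes = np.linalg.solve(M, f(s))

    # Formula (6) extends the nodal values to every x in [a, b]
    def phi(x):
        x = np.atleast_1d(np.asarray(x, dtype=float))
        return f(x) + lam * (K(x[:, None], s[None, :]) * w * phi_nodes).sum(axis=1)

    return s, phi_nodes, phi

# Hypothetical example: a smooth kernel and a value of lam outside the spectrum
K = lambda x, s: np.exp(-np.abs(x - s))
f = lambda x: np.ones_like(x)
s, phi_i, phi = quadrature_solve(K, f, lam=0.5, a=0.0, b=1.0, N=101)
```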

Sometimes one takes the equation (7) itself as approximating (1), and then (7) corresponds to equation (3).

In this approach, the space $ \widetilde \Phi $ is not the same as $ \Phi $. The space $ \widetilde \Phi $ may, for example, be identified with the quotient space of $ \Phi $ by the subspace of functions in $ \Phi $ that vanish at the points $ \{ s _ {i} \} _ {i = 1 } ^ {N} $. The method (5) admits various generalizations, which are convenient to use, for example, in the case of a discontinuous $ K ( x, s) $. In these generalized methods the operator $ \widetilde{A} $ has the form

$$ \tag{5'} \widetilde{A} y = \ \sum _ {i = 1 } ^ { N } a _ {i} ^ {(N)} ( x) y ( s _ {i} ), $$

where $ a _ {i} ^ {(N)} ( x) $ are functions associated with the kernel $ K ( x, s) $.

See also Quadrature-sum method.

Methods for replacing the kernel by an approximate one. These make use of an approximating operator $ \widetilde{A} $ of the form

$$ \widetilde{A} y = \ \int\limits _ { D } \widetilde{K} ( x, s) y ( s) ds, $$

where $ \widetilde{K} $ is some function close to $ K $, but simpler. Most often $ \widetilde{K} $ is a degenerate kernel, that is,

$$ \tag{8 } \widetilde{K} ( x, s) = \ \sum _ {i = 1 } ^ { N } a _ {i} ( x) b _ {i} ( s). $$

In this case (3) is a Fredholm integral equation with a degenerate kernel (cf. also Degenerate kernels, method of). Its solution reduces to solving a system of linear algebraic equations. However, the matrix entries of this system are expressed as integrals of known functions, and in a numerical solution these integrals must, generally speaking, themselves be approximated by quadrature sums.

There are many ways of making a concrete choice of $ \widetilde{K} $ by formula (8) (see, for example, Strip method (integral equations)). The theoretical investigation of the closeness of solutions of equations (3) and (1) is usually significantly simpler in these methods than, for example, in quadrature methods, since in most cases one can set $ \widetilde \Phi = \Phi $ and the choice of $ \Phi $ is essentially determined directly from the way the problem is posed. If $ \widetilde{K} $ and $ K $ are close, then, as a rule, $ A $ and $ \widetilde{A} $ are close in the norm of $ \Phi $. However, the practical realization of these methods is in most cases significantly more laborious as compared with quadrature methods and their generalizations.
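A minimal sketch of this approach, assuming the degenerate kernel (8) is already given through its factors $ a _ {i} ( x) $ and $ b _ {i} ( s) $ on an interval (the factors, right-hand side and $ \lambda $ below are illustrative): writing $ \widetilde \phi ( x) = f ( x) + \lambda \sum _ {i} c _ {i} a _ {i} ( x) $ reduces (3) to a linear system for the coefficients $ c _ {i} $, and, as noted above, the matrix entries are integrals that are themselves approximated by quadrature.

```python
import numpy as np

def degenerate_kernel_solve(a_funcs, b_funcs, f, lam, lo, hi, M=400):
    """Solve (1) with the degenerate kernel (8), K(x,s) = sum_i a_i(x) b_i(s).
    With phi(x) = f(x) + lam * sum_i c_i a_i(x), the coefficients satisfy
    (I - lam*B) c = g, where B[i, j] = int b_i(s) a_j(s) ds and
    g[i] = int b_i(s) f(s) ds; the integrals are approximated here by the
    trapezoidal rule."""
    N = len(a_funcs)
    s = np.linspace(lo, hi, M)
    B = np.array([[np.trapz(b_funcs[i](s) * a_funcs[j](s), s) for j in range(N)]
                  for i in range(N)])
    g = np.array([np.trapz(b_funcs[i](s) * f(s), s) for i in range(N)])
    c = np.linalg.solve(np.eye(N) - lam * B, g)
    return lambda x: f(x) + lam * sum(c[i] * a_funcs[i](x) for i in range(N))

# Hypothetical example: two separable terms approximating some kernel on [0, 1]
a_funcs = [lambda x: np.ones_like(x), lambda x: x]
b_funcs = [lambda s: np.ones_like(s), lambda s: s]
phi = degenerate_kernel_solve(a_funcs, b_funcs, lambda x: np.cos(x), 0.3, 0.0, 1.0)
```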

Projection methods lead to an approximating equation (3) of the form $ \widetilde \phi - \lambda PA \widetilde \phi = Pf $, where $ \widetilde \Phi $ is a subspace of $ \Phi $ and $ P $ is the projection onto this subspace. Freedom in the choice of $ \Phi $, $ \widetilde \Phi $ and even $ P $ itself leads to a large number of concrete projection methods for solving Fredholm integral equations of the second kind. A typical example of a projection method is the Galerkin method. In order to obtain the concrete computational formulas of this method, one must (if possible) treat the integral equation (1) as an operator equation in the Hilbert space $ L _ {2} ( D) $ of functions that are square-integrable on $ D $, and take as $ P $ the orthogonal projection that associates to a function in $ L _ {2} ( D) $ the $ N $-term segment of its Fourier series with respect to some complete orthonormal system of functions $ \{ \psi _ {k} \} $ in $ L _ {2} ( D) $.

In another interpretation, the Galerkin method is equivalent to replacing the kernel by a degenerate one of the form

$$ \widetilde{K} ( x, s) = \ \sum _ {i = 1 } ^ { N } \int\limits _ { D } K ( t, s) \psi _ {i} ( t) \ dt \cdot \psi _ {i} ( x) $$

with a simultaneous replacement of the right-hand side by a function close to it:

$$ \widetilde{f} ( x) = \ \sum _ {i = 1 } ^ { N } \int\limits _ { D } f ( t) \psi _ {i} ( t) \ dt \cdot \psi _ {i} ( x). $$
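The following sketch shows the resulting computations for $ D = [- 1, 1] $, using normalized Legendre polynomials as the orthonormal system and Gauss–Legendre quadrature for the inner products; these concrete choices, as well as the kernel and $ \lambda $ in the example, are assumptions made only for illustration.

```python
import numpy as np
from numpy.polynomial import legendre

def galerkin_solve(K, f, lam, N, M=200):
    """Galerkin method on D = [-1, 1] with the orthonormal system
    psi_k(x) = sqrt((2k+1)/2) * P_k(x) (P_k the Legendre polynomials).
    Expanding phi ~ sum_k c_k psi_k gives the linear system
    (I - lam*G) c = b,  G[k, j] = <A psi_j, psi_k>,  b[k] = <f, psi_k>,
    with all inner products evaluated by Gauss-Legendre quadrature."""
    x, w = legendre.leggauss(M)                       # quadrature nodes/weights
    psi = np.array([np.sqrt((2*k + 1) / 2) * legendre.legval(x, np.eye(N)[k])
                    for k in range(N)])               # psi[k, m] = psi_k(x_m)
    Kmat = K(x[:, None], x[None, :])                  # K(x_m, s_l)
    # G[k, j] ~ sum_{m,l} w_m psi_k(x_m) K(x_m, s_l) w_l psi_j(s_l)
    G = (psi * w) @ Kmat @ (psi * w).T
    b = (psi * w) @ f(x)
    c = np.linalg.solve(np.eye(N) - lam * G, b)
    return lambda t: c @ np.array([np.sqrt((2*k + 1) / 2) *
                                   legendre.legval(t, np.eye(N)[k]) for k in range(N)])

# Hypothetical example kernel and right-hand side
phi = galerkin_solve(lambda x, s: 1.0 / (1.0 + (x - s)**2), np.cos, lam=0.4, N=8)
```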

As another example of a projection method one can take the collocation method. If $ K ( x, s) $ and $ f ( x) $ are continuous functions, then (1) can be regarded as an operator equation (2) in $ C ( D) $, the space of continuous functions on $ D $. The collocation method corresponds to a choice of $ P $ of the form

$$ ( Py) ( x) = Z ( y) ( x),\ \ x \in D, $$

where $ Z ( y) $ is the Lagrange interpolation polynomial of $ y $ with respect to some grid of nodes in $ D $ (cf. also Lagrange interpolation formula).

In the practical realization of most projection methods applied to a Fredholm integral equation of the second kind there arise difficulties of the additional approximation of the integrals that appear, which also makes these methods (just like the methods of replacing the kernel by an approximate one) usually more laborious as compared with typical quadrature methods. However, this assertion is relative, since the actual classification of the methods is a matter of convention. For example, the collocation method can be interpreted both as a projection method and as a generalized quadrature method.
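As a concrete illustration of these remarks, the sketch below carries out polynomial collocation on an interval $ [ a, b] $: the Chebyshev nodes, the trapezoidal approximation of the inner integrals and the data in the example are all illustrative choices, not prescribed by the method itself.

```python
import numpy as np

def collocation_solve(K, f, lam, a, b, N, M=400):
    """Polynomial collocation on Chebyshev nodes x_1..x_N in [a, b].
    The approximation is the interpolating polynomial through unknown nodal
    values phi_j; requiring (1) to hold at the nodes gives an N x N system
    whose entries int_a^b K(x_j, s) L_i(s) ds (L_i the Lagrange basis
    polynomials) are approximated here by the trapezoidal rule."""
    j = np.arange(N)
    x = 0.5 * (a + b) + 0.5 * (b - a) * np.cos((2*j + 1) * np.pi / (2*N))  # nodes

    # Lagrange basis polynomials evaluated on a fine grid s
    s = np.linspace(a, b, M)
    L = np.ones((N, M))
    for i in range(N):
        for k in range(N):
            if k != i:
                L[i] *= (s - x[k]) / (x[i] - x[k])

    # B[j, i] ~ int_a^b K(x_j, s) L_i(s) ds
    B = np.trapz(K(x[:, None, None], s[None, None, :]) * L[None, :, :], s, axis=2)
    phi_nodes = np.linalg.solve(np.eye(N) - lam * B, f(x))
    return x, phi_nodes

# Hypothetical example data
x_nodes, phi_nodes = collocation_solve(lambda x, s: np.cos(x * s), np.sin,
                                       lam=0.25, a=0.0, b=1.0, N=10)
```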

Methods for solving the approximating equations. Usually, the solution of an approximating equation (3) reduces to solving a system of linear algebraic equations. The simplest approach, the method of successive approximations, can be used for relatively small $ | \lambda | $, and, with suitable modifications (for example, in the method of averaging functional corrections), for all $ \lambda $ not belonging to the spectrum of $ A $.
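For a discretized operator (a matrix $ \widetilde{A} $ acting on the vector of nodal values), the method of successive approximations takes the following elementary form; the convergence condition stated in the comment is the standard sufficient one.

```python
import numpy as np

def successive_approximations(A, f, lam, n_iter=50):
    """Successive approximations phi_{k+1} = f + lam*A phi_k for the
    discretized equation (E - lam*A) phi = f; converges when |lam|*||A|| < 1."""
    phi = np.array(f, dtype=float)
    for _ in range(n_iter):
        phi = f + lam * (A @ phi)
    return phi
```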

Obtaining a sequence of more accurate approximations. In the theoretical investigation of some numerical method one succeeds in most cases in establishing only the plain fact that the approximations $ \widetilde \phi $ and $ \widetilde{\widetilde \phi } $ converge to a solution of (1), (2) as $ \widetilde{A} $ converges to $ A $ in some sense; it is extremely rare that one manages to obtain effective estimates of the closeness of $ \widetilde \phi $ and $ \widetilde{\widetilde \phi } $ to a solution of (1), (2). To verify the accuracy, one uses in practice a sequence of approximate solutions of (3) with an increasingly accurate operator $ \widetilde{A} $. In the simplest version of this verification one compares two neighbouring terms of this sequence of approximate solutions and stops computing further approximations when the two preceding ones agree to within a given accuracy. The unwieldiness of directly computing the terms of such a sequence has been partially overcome by a variety of algorithms for iteratively obtaining more accurate approximate solutions. A typical example of such an algorithm is the following. If the sequence $ \{ A _ {n} \} $ of approximating operators converges in the norm of some Banach function space $ \Phi $ to $ A $ in (2), then the iterative procedure

$$ \tag{9 } ( E - \lambda A _ {0} ) \phi _ {n + 1 } = \ \lambda ( A _ {n + 1 } - A _ {0} ) \phi _ {n} + f _ {n} $$

gives a sequence of approximations $ \phi _ {n} $ that converges to $ \phi $, provided that $ f _ {n} $ converges to $ f $ in norm and $ A _ {0} $ is sufficiently close to $ A $ in norm. In using the iteration (9) one needs to invert only one operator, $ E - \lambda A _ {0} $. The closer $ A _ {0} $ is to $ A $ in norm, the better the convergence. It is convenient, for example, to choose the operators $ A _ {n} $ in the form (5). In this case one can establish that $ \phi _ {n} $ converges uniformly to $ \phi $ on $ D $, provided that the kernel satisfies certain requirements.
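A sketch of the procedure (9) for discretized operators is given below; it assumes, purely for illustration, that all the $ A _ {n} $ have already been reduced to matrices acting on one common grid and that $ f _ {n} = f $ is fixed, so that only one matrix is inverted and the more accurate $ A _ {n} $ enter through matrix-vector products alone.

```python
import numpy as np

def refine(A_list, f, lam):
    """Iteration (9): (E - lam*A_0) phi_{n+1} = lam*(A_{n+1} - A_0) phi_n + f.
    A_list[0] plays the role of A_0 and is the only operator inverted; in
    practice one would reuse a factorization instead of an explicit inverse."""
    A0 = A_list[0]
    n = A0.shape[0]
    inv0 = np.linalg.inv(np.eye(n) - lam * A0)   # the single inversion, reused
    phi = inv0 @ f                                # phi_0 solves (E - lam*A_0) phi = f
    for A_next in A_list[1:]:
        phi = inv0 @ (lam * (A_next - A0) @ phi + f)
    return phi
```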

Comments

For some concrete methods, the requirement that $ \| A - \widetilde{A} \| $ be small is too strong. Instead, one typically has that (2) is approximated by a sequence $ ( E - \lambda A _ {n} ) \phi _ {n} = f _ {n} $, where $ ( A _ {n} ) $ converges pointwise to $ A $ and is collectively compact (i.e. for all bounded sets $ B $, $ \overline{\cup _ {n \in \mathbf N } A _ {n} ( B) } $ is compact). Under these assumptions one can prove convergence of the approximate solutions $ \phi _ {n} $ and give error bounds. An example where this theory applies is the Nyström method, which consists of solving (7) and then using (6) to extend the solution to all of $ D $. For the theory of collectively-compact operators and its use in connection with integral equations see [a1].

In collocation methods one can also use interpolation functions other than polynomials, e.g. splines (spline collocation). Because of the better convergence properties of splines (cf. Spline approximation), spline collocation is superior to polynomial collocation.

A variety of concrete numerical methods for Fredholm equations of the second kind are described in [a2] (with Fortran programs), [a3] and [a4]. For numerical methods for solving Fredholm equations when $ \lambda \neq 0 $ is an eigenvalue, see [a5], pp. 368ff.

The article above discusses Fredholm integral equations of the second kind only. However, there is also a theory for Fredholm integral equations of the first kind.

Fredholm integral equations of the first kind.

These are equations of the form

$$ \tag{a1 } ( A \phi ) ( x) = \int\limits _ { D } K ( x , s ) \phi ( s) d s = f ( x) . $$

They are usually ill-posed in the sense that their solution might not exist, might not be unique, and will (if it exists) in general depend on $ f $ in a discontinuous way (see Ill-posed problems). The notion of solution one commonly uses is the "best-approximate solution" $ A ^ {+} f $, where $ A ^ {+} $ is the Moore–Penrose generalized inverse of $ A $ (see Fredholm equation). $ A ^ {+} $ is in general unbounded. Thus, a "naive" numerical approximation of (a1), e.g. by discretization, will lead to ill-conditioned linear systems. For solving (a1) numerically one uses "regularization methods" (see Regularization method). In general, regularization for (a1) is the replacement of the unbounded operator $ A ^ {+} $ by a parameter-dependent family of bounded linear operators $ R _ \alpha $ ($ \alpha > 0 $). The "regularized solution" computed with a specific "regularization operator" $ R _ \alpha $ and noisy data $ f _ \delta $ ($ \| f - f _ \delta \| \leq \delta $) is then $ \phi _ \alpha ^ \delta = R _ \alpha f _ \delta $. The most popular choice for $ R _ \alpha $ is $ ( \alpha I + A ^ {*} A ) ^ {-1} A ^ {*} $ (Tikhonov regularization). Other methods are iterated Tikhonov regularization, iterative methods like Landweber iteration, and truncated singular value expansion, where
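For a discretized version of (a1) (a matrix $ A $ and a noisy data vector), Tikhonov regularization amounts to the following computation; the matrix, data and value of $ \alpha $ are of course problem dependent.

```python
import numpy as np

def tikhonov(A, f_delta, alpha):
    """Tikhonov regularization: phi = (alpha*I + A^T A)^{-1} A^T f_delta,
    a discrete analogue of R_alpha = (alpha*I + A*A)^{-1} A*."""
    n = A.shape[1]
    return np.linalg.solve(alpha * np.eye(n) + A.T @ A, A.T @ f_delta)
```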

$$ \phi _ \alpha ^ \delta = \ \sum _ {\begin{array}{c} n= 1 \\ {\sigma _ {n} > \alpha } \end{array} } ^ \infty \frac{\langle f _ \delta , v _ {n} \rangle }{\sigma _ {n} } u _ {n} . $$

Here, $ \langle \cdot , \cdot \rangle $ denotes the inner product in $ L _ {2} $, and $ ( \sigma _ {n} ; u _ {n} , v _ {n} ) $ is a singular system for $ A $ (i.e. the $ \sigma _ {n} ^ {2} $ are the eigenvalues of $ A ^ {*} A $, $ \{ u _ {n} \} $ is a corresponding orthonormal set of eigenvectors, and $ \sigma _ {n} v _ {n} = A u _ {n} $).
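In the discretized setting the truncated singular value expansion can be computed directly from the matrix singular value decomposition; in the notation of the formula above, the columns of $ V $ correspond to the $ u _ {n} $ and the columns of $ U $ to the $ v _ {n} $. A minimal sketch:

```python
import numpy as np

def truncated_svd_solution(A, f_delta, alpha):
    """Truncated singular value expansion: keep only the terms with
    sigma_n > alpha.  For A = U diag(sigma) V^T this is
    phi = sum_{sigma_n > alpha} (<f_delta, U[:, n]> / sigma_n) * V[:, n]."""
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    keep = sigma > alpha
    return Vt[keep].T @ (U[:, keep].T @ f_delta / sigma[keep])
```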

Convergence rates for regularization methods depend on a priori smoothness assumptions on the data, which also influence the choice of the regularization parameter $ \alpha $. For these questions and details about regularization methods see e.g. [a6], [a7].

For numerical realization, regularization methods have to be combined with projection methods; the latter can also be used directly for regularization (see [a8]).

References

[a1] P.M. Anselone, "Collectively compact operator approximation theory and applications to integral equations" , Prentice-Hall (1971)
[a2] K.E. Atkinson, "A survey of numerical methods for the solution of Fredholm integral equations of the second kind" , SIAM (1976)
[a3] C.T.H. Baker, "The numerical treatment of integral equations" , Clarendon Press (1977) pp. Chapt. 4
[a4] L.M. Delves, J.L. Mohamed, "Computational methods for integral equations" , Cambridge Univ. Press (1985)
[a5] M.Z. Nashed (ed.) , Generalized inverses and applications , Acad. Press (1976)
[a6] H.W. Engl (ed.) C.W. Groetsch (ed.) , Inverse and ill-posed problems , Acad. Press (1987)
[a7] C.W. Groetsch, "The theory of Tikhonov regularization for Fredholm equations of the first kind" , Pitman (1984)
[a8] F. Natterer, "The finite element method for ill-posed problems" RAIRO Anal. Numer. , 11 (1977) pp. 271–278
[a9] L.V. Kantorovich, V.I. Krylov, "Approximate methods of higher analysis" , Noordhoff (1958) (Translated from Russian)
[a10] I.C. [I.Ts. Gokhberg] Gohberg, I.A. Fel'dman, "Convolution equations and projection methods for their solution" , Transl. Math. Monogr. , 41 , Amer. Math. Soc. (1974) (Translated from Russian)
[a11] P.P. Zabreiko (ed.) A.I. Koshelev (ed.) M.A. Krasnoselskii (ed.) S.G. Mikhlin (ed.) L.S. Rakovshchik (ed.) V.Ya. Stet'senko (ed.) T.O. Shaposhnikova (ed.) R.S. Anderssen (ed.) , Integral equations - a reference text , Noordhoff (1975) (Translated from Russian)
This article was adapted from an original article by A.B. Bakushinskii (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.