
Regularization method

From Encyclopedia of Mathematics


A method for constructing approximate solutions of ill-posed problems in the following way: as the approximate solution of an ill-posed problem (also called an incorrectly-posed problem) one takes the value of a regularizing operator applied to the initial data, with the operator chosen in accordance with the approximate nature (the error level) of that data.

For the sake of being specific, consider the problem of finding the solution of a functional equation of the form $ A z = u $, in which $ z $ and $ u $ are elements of metric spaces $ F $ and $ U $ with distance functions $ \rho _ {F} ( \cdot , \cdot ) $ and $ \rho _ {U} ( \cdot , \cdot ) $, respectively. If, for example, $ A $ is a completely-continuous operator, then the solution of such an equation need not be stable with respect to small changes of the right-hand side $ u $. Suppose that, instead of the exact values $ ( \overline{A} , \overline{u} ) $ of the initial data, approximations $ ( \widetilde{A} , \widetilde{u} ) $ are given. In these circumstances one can only ask for approximations to the solution $ \overline{z} $ of the equation $ \overline{A} z = \overline{u} $. As an approximate solution of an ill-posed problem of this type, with approximate initial data $ ( \widetilde{A} , \widetilde{u} ) $, one cannot take an exact solution of the equation $ \widetilde{A} z = \widetilde{u} $: such a solution need not exist, and even if it does, it need not be stable with respect to small changes of the initial data, so that such a "solution" may not admit a physical interpretation. In what follows, it is assumed for simplicity that only the right-hand side $ u $ is given approximately, and that the operator $ A $ is specified exactly.

Let $ \delta $ be a bound on the deviation of $ \widetilde{u} $ from $ \overline{u} $, that is, on the distance $ \rho _ {U} ( \widetilde{u} , \overline{u} ) $, and let $ F _ {0} \subset F $ be a given class of possible solutions (comparison models). It is natural to search for approximate solutions of the equation $ A z = \widetilde{u} $ among the elements $ z \in F _ {0} $ compatible with the initial data, that is, such that $ \rho _ {U} ( A z , \widetilde{u} ) \leq \delta $. Let $ F _ \delta $ be the set of all such elements from $ F _ {0} $. If, in the chosen class $ F _ {0} $ of possible solutions, there are no elements (for example, functions $ z ( s) $) compatible with the initial data, then this means that the elements $ z \in F _ {0} $ have an over-simplified (too rough) structure. In this case it is necessary to extend $ F _ {0} $ by taking, if necessary, an increasing sequence of classes $ F _ {0} \subset F _ {1} \subset \dots $ until a class $ F _ {n} $ is reached that contains elements (for example, functions) compatible with the initial data.

If $ F _ {n} $ is not empty, then it may contain elements (functions) that are essentially different from one another. In such cases, the single requirement of compatibility of a possible solution with the initial data cannot serve as the only criterion for finding a well-defined approximate solution of the equation $ A z = \widetilde{u} $, as there are insufficient grounds for the choice of an approximate solution from among the compatible elements of $ F _ {n} $.

For stable solutions to be well-defined, one needs a certain principle for selecting solutions compatible with $ \widetilde{u} $. The formulation of this principle usually varies with the nature of the problem. Such a selection can be made, for example, on the principle of choosing an element (a function in $ F _ {n} $) of minimal complexity. The notion of complexity of an element $ z $ can be formalized, for example, by means of complexity functionals $ \Omega [ z ] $: continuous, non-negative functionals satisfying certain special requirements (see [1]). As a measure of the complexity of an element $ z $ one takes the value of the functional $ \Omega [ z ] $. Thus, if the elements $ z $ are continuous functions $ z ( s) $ on an interval $ [ a , b ] $ belonging to the class $ W _ {2} ^ {1} $, then one can take the complexity functional $ \Omega [ z ] $ to be

$$ \Omega [ z ] = \ \int\limits _ { a } ^ { b } \left \{ z ^ {2} + \left ( \frac{dz}{ds} \right ) ^ {2} \right \} d s . $$
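On a grid, this complexity functional can be approximated with finite differences and the trapezoidal rule. The following sketch (the grid, the test function and the helper name `omega` are illustrative, not from the article) evaluates $ \Omega [ z ] $ for $ z ( s) = s $ on $ [ 0 , 1 ] $, where the exact value is $ \int _ {0} ^ {1} ( s ^ {2} + 1 ) d s = 4 / 3 $:

```python
import numpy as np

def omega(z, s):
    """Trapezoidal-rule approximation of Omega[z] = int_a^b (z^2 + (dz/ds)^2) ds,
    with dz/ds computed by finite differences on the grid s."""
    dz = np.gradient(z, s)          # second-order finite differences
    f = z**2 + dz**2
    return float(np.sum((f[:-1] + f[1:]) * np.diff(s)) / 2.0)

s = np.linspace(0.0, 1.0, 1001)
val = omega(s, s)                   # z(s) = s, so Omega[z] should be close to 4/3
```

For a linear $ z ( s) $ the finite-difference derivative is exact, so the only discretization error comes from the trapezoidal quadrature of $ z ^ {2} $.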

The search for approximate solutions of the equation $ A z = \widetilde{u} $ among the simplest elements (functions) compatible with the initial data leads to the problem of finding an element of $ F _ \delta $ that minimizes $ \Omega [ z ] $. If $ A $ is a linear operator and if $ \Omega [ z ] $ does not have local minima in its domain of definition $ F _ \Omega $, then this problem can be reduced (see [1] for some details) to that of finding an element $ z _ \alpha $ in the set $ F _ \delta \cap F _ \Omega $ that minimizes the functional

$$ M ^ \alpha [ z , A , \widetilde{u} ] = \rho _ {U} ^ {2} ( A z , \widetilde{u} ) + \alpha \Omega [ z ] . $$
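In the finite-dimensional case, with Euclidean norms and the simplest choice $ \Omega [ z ] = \| z \| ^ {2} $, the minimizer of $ M ^ \alpha $ solves the regularized normal equations $ ( A ^ {T} A + \alpha I ) z = A ^ {T} \widetilde{u} $. A minimal numerical sketch (the $ 2 \times 2 $ matrix, the noise, and the function name are illustrative assumptions, not part of the article):

```python
import numpy as np

def tikhonov_solve(A, u, alpha):
    """Minimizer of ||A z - u||^2 + alpha ||z||^2: solves the
    regularized normal equations (A^T A + alpha I) z = A^T u."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ u)

# An ill-conditioned 2x2 system standing in for a discretized operator.
A = np.array([[1.0, 0.99], [0.99, 0.98]])
z_true = np.array([1.0, 1.0])
u = A @ z_true + np.array([1e-3, -1e-3])   # slightly perturbed right-hand side

z_naive = np.linalg.solve(A, u)            # unregularized: error blows up
z_reg = tikhonov_solve(A, u, 1e-3)         # regularized: stays near z_true
```

Here the naive solution of $ A z = \widetilde{u} $ is destroyed by a perturbation of size $ 10 ^ {-3} $, while a small positive $ \alpha $ trades a little bias for stability.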

The value of the parameter $ \alpha $ (the regularization parameter) must be chosen in accordance with the level of error in the initial data. For example, it can be determined from the discrepancy, that is, by the condition $ \rho _ {U} ( A z _ \alpha , \widetilde{u} ) = \delta $ for a given $ \delta $; but there are other ways of determining $ \alpha $ (see [1]). Thus, $ \alpha $ may depend on $ \delta $ and $ \widetilde{u} $: $ \alpha = \alpha ( \delta , \widetilde{u} ) $. The element $ z _ {\alpha ( \delta , \widetilde{u} ) } $ is then taken as an approximate solution of the equation $ A z = \widetilde{u} $. This is one of the forms of the regularization method developed in [2], [3]. In a similar way one can construct approximate solutions of the equation $ \widetilde{A} z = \widetilde{u} $ in the case when both $ \widetilde{A} $ and $ \widetilde{u} $ are given approximately; here a functional of the type $ M ^ \alpha [ z , \widetilde{A} , \widetilde{u} ] $ is minimized (see [1], for example). There are other possible forms of the regularization method, and applications to other classes of problems (see [1]). Regularization methods have also been developed for solving non-linear problems (see [1], [4]).

References

[1] A.N. Tikhonov, V.I. Arsenin, "Solution of ill-posed problems", Wiley (1977) (Translated from Russian)
[2] A.N. Tikhonov, "Solution of incorrectly formulated problems and the regularization method" Soviet Math. Dokl. 4:4 (1963) pp. 1035–1038; Dokl. Akad. Nauk SSSR 151:3 (1963) pp. 501–504
[3] A.N. Tikhonov, "Regularization of incorrectly posed problems" Soviet Math. Dokl. 4:6 (1963) pp. 1624–1627; Dokl. Akad. Nauk SSSR 153:1 (1963) pp. 49–52
[4] M.M. Lavrent'ev, "Some improperly posed problems of mathematical physics", Springer (1967) (Translated from Russian)

Comments

For other regularization methods and their convergence properties see (the editorial comments to) ill-posed problems. That article also contains a discussion of "inverse problems" arising in various applications that give rise to ill-posed problems, and many references. The spectral-theoretic methods discussed there also include some iterative methods. Another iterative method that can be considered as a regularization method is the conjugate gradient method (cf. Conjugate gradients, method of); it is discussed from this point of view in the articles by H. Brakhage and L. Louis in [a2] (see also [a6]). Convergence and convergence rates for Tikhonov regularization of non-linear ill-posed problems are considered in several papers in Volume 5 (1989) of the journal "Inverse Problems", which in general contains many articles about regularization methods.

A different approach to regularization is based on statistical considerations (see [a1], [a5]) and is rooted in the method of ridge regression, [a4]. A well-known parameter choice strategy for statistical regularization is generalized cross-validation (see e.g. the article by G. Wahba in [a2] and the references quoted there).

For the use of reproducing-kernel Hilbert spaces in regularization see [a3] and [a7]. A practical comparison of various regularization methods is made in [a8].

References

[a1] M. Bertero, G. Viano, "On probabilistic methods for the solution of improperly posed problems" Boll. Un. Mat. Ital. 15-B (1978) pp. 483–508
[a2] H.W. Engl (ed.), C.W. Groetsch (ed.), Inverse and ill-posed problems, Acad. Press (1987)
[a3] J. Hilgers, "On the equivalence of regularization and certain reproducing kernel Hilbert space approaches for solving first kind problems" SIAM J. Numer. Anal. 13 (1976) pp. 172–184
[a4] A. Hoerl, R. Kennard, "Ridge regression" Technometrics 12 (1970) pp. 55–82
[a5] B. Hofmann, "Regularization for applied inverse and ill-posed problems", Teubner (1986)
[a6] A.K. Louis, "Inverse und schlecht gestellte Probleme", Teubner (1989)
[a7] M.Z. Nashed, G. Wahba, "Convergence rates of approximate least squares solutions of linear integral and operator equations of the first kind" Math. Comp. 28 (1974) pp. 69–80
[a8] J. Varah, "A practical examination of some numerical methods for linear discrete ill-posed problems" SIAM Rev. 21 (1979) pp. 100–111
How to Cite This Entry:
Regularization method. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Regularization_method&oldid=48490
This article was adapted from an original article by V.Ya. Arsenin and A.N. Tikhonov (originators), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article