{{MSC|}}
 
{{TEX|done}}
 
  
''Phillips–Tikhonov regularization''
 
 
 
$
 
\newcommand{\norm}[1]{\left\| #1 \right\|}
 
\newcommand{\order}[1]{o\bigl(#1\bigr)}
 
\newcommand{\Order}[1]{O\bigl(#1\bigr)}

\newcommand{\rd}{\,\mathrm{d}}
 
$
 
 
Fredholm integral equations of the first kind (cf. also [[Fredholm equation]]),
 
$$
 
\int_a^b k(s,t) f(t) \rd t = g(s),
 
$$
 
present special challenges to working scientists (cf. also [[Fredholm equation, numerical methods]]). Unlike equations of the second kind, for which the existence of a unique stable solution can be guaranteed under mild conditions, integral equations of the first kind are typically ill-posed (cf. also [[Ill-posed problems]]). The blame for this misbehaviour lies with the kernel $k(\cdot,\cdot)$. The data function $g$ inherits some of the structure and smoothness of the kernel and hence, if the kernel is highly structured or very smooth, then a solution exists only for functions $g$ belonging to a quite exclusive class of data. Furthermore, null functions for the kernel may be plentiful, so when solutions exist, they are seldom unique. But the most troubling aspect of the first-kind equation is its instability: perturbations in $g$ which are very small (with respect to reasonable metrics on the data space) may arise from very large changes in $f$. This instability is a consequence of the Riemann–Lebesgue lemma (cf. [[Fourier series]]) or, more generally, of the decay to zero of the singular values of the kernel.
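
The effect of this singular-value decay is easy to observe numerically. The following sketch discretizes a smooth kernel by the midpoint rule and inspects the singular values of the resulting matrix; the Gaussian kernel $k(s,t)=e^{-(s-t)^2}$, the interval $[0,1]$, and the grid size are illustrative assumptions, not taken from the text above.

<pre>
# Illustration: the singular values of a discretized smooth kernel decay
# rapidly to zero, which is the source of the instability described above.
# The kernel, interval, and grid size are illustrative choices.
import numpy as np

n = 100
t = (np.arange(n) + 0.5) / n                     # midpoint nodes on [0, 1]
w = 1.0 / n                                      # midpoint-rule weight

K = w * np.exp(-(t[:, None] - t[None, :]) ** 2)  # K[i, j] ~ w * k(s_i, t_j)

sigma = np.linalg.svd(K, compute_uv=False)
print(sigma[:5])        # the leading singular values
print(sigma[-5:])       # the smallest ones are at the level of round-off
print("condition number: %.2e" % (sigma[0] / sigma[-1]))
</pre>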
 
 
The basic existence and uniqueness theory for Fredholm equations of the first kind with square-integrable kernels was worked out by E. Picard {{Cite|Pi}} in the early 1900s. Toward the 1950s, iterative approximation methods for Fredholm equations of the first kind were developed (see, e.g., {{Cite|Fr}}, {{Cite|La}}), but soon thereafter it became apparent that when these methods are applied to practical problems, the approximations are cursed by the instability noted above (see, e.g., {{Cite|Do}}, {{Cite|Mc}}).  This led to the development, in the early 1960s, of a non-iterative stabilized approximation technique, now called the method of regularization, by D.L. Phillips {{Cite|Ph}} and A.N. Tikhonov {{Cite|Ti}}.
 
 
The analysis of the method of regularization is best carried out in the context of a [[Hilbert space|Hilbert space]], where the integral operator is modeled by a [[Compact operator|compact operator]] $K:H_1 \rightarrow H_2$ from a Hilbert space $H_1$ into a Hilbert space $H_2$. The norm of $H_2$ should be sufficiently weak to accommodate realistic data functions, while that of $H_1$ should be strong enough to impose desirable structure on the solution. In the method of regularization a "damped least-squares" approximate solution of $Kf=g$ is sought by minimizing the functional
 
$$
 
F_\alpha(f;g) = \norm{Kf - g}_{H_2}^2 + \alpha \norm{f}_{H_1}^2,
 
$$
 
where $\alpha > 0$ is a regularization parameter. The minimizer $f_\alpha$ of this functional is the solution of the well-posed second-kind equation
 
$$
 
K^*Kf_\alpha + \alpha f_\alpha = K^*g
 
$$
 
(cf. also [[Integral equation]]) and hence is unique and stable with respect to perturbations of the data function $g$. The regularization parameter may be viewed as an arbiter between fidelity and stability in the approximate solution; a choice of $\alpha$ that is too small causes the approximate solution to inherit some of the instability of the original ill-posed problem, while too large a choice tends to over-smooth the approximate solution with consequent loss of information.
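
In a discretized setting, where $K$ is a matrix and the adjoint $K^*$ reduces to the transpose (for the Euclidean norms used in the sketch), the minimizer is computed by solving the damped normal equations above. The kernel, the assumed exact solution $f(t)=t(1-t)$, the noise level, and the trial values of $\alpha$ below are all illustrative choices.

<pre>
# Sketch of Tikhonov-Phillips regularization for a discretized first-kind
# equation K f = g: minimize ||K f - g||^2 + alpha ||f||^2 by solving
# (K^T K + alpha I) f = K^T g.  All concrete choices are illustrative.
import numpy as np

n = 100
t = (np.arange(n) + 0.5) / n
K = (1.0 / n) * np.exp(-(t[:, None] - t[None, :]) ** 2)   # hypothetical kernel

f_true = t * (1.0 - t)                                    # assumed exact solution
rng = np.random.default_rng(0)
g_delta = K @ f_true + 1e-4 * rng.standard_normal(n)      # noisy data g^delta

def tikhonov(K, g, alpha):
    """Minimizer of ||K f - g||^2 + alpha ||f||^2."""
    return np.linalg.solve(K.T @ K + alpha * np.eye(K.shape[1]), K.T @ g)

for alpha in (1e-1, 1e-4, 1e-7, 1e-12):
    f_alpha = tikhonov(K, g_delta, alpha)
    print(alpha, np.linalg.norm(f_alpha - f_true) / np.linalg.norm(f_true))
# Too large an alpha over-smooths the reconstruction; too small an alpha
# lets the data noise be amplified, reflecting the trade-off described above.
</pre>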
 
 
A key point in practical problems is that the data function is the result of observations and hence is subject to error. The data function in hand is then an observation $g^\delta$ satisfying $\norm{g-g^\delta}_{H_2} \leq \delta$, where $\delta$ is a bound for the observational error. One then takes the minimizer $f_\alpha^\delta$ of the functional $F_\alpha(\,\cdot\,;g^\delta)$ as an approximate solution. If $\alpha = \alpha(\delta)$ is chosen so that $\delta = \order{\alpha(\delta)}$, then $\norm{f_{\alpha(\delta)}^\delta - f}_{H_1} \rightarrow 0$ as $\delta \rightarrow 0$, where $f$ is the minimal-norm least-squares solution of $Kf = g$ (cf. also [[Least squares, method of]]). If $f$ is in the range of $K^*K$ and $\alpha(\delta) = C\delta^{2/3}$, then $\norm{f_{\alpha(\delta)}^\delta - f}_{H_1} = \Order{\delta^{2/3}}$, and this order is best possible (see {{Cite|Gr}}, p. 41).
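
This a-priori choice can be tried out in a small numerical experiment; the kernel, the construction of an $f$ in the range of $K^*K$, and the constant $C=1$ below are assumptions made for the illustration.

<pre>
# Sketch of the a-priori parameter choice alpha(delta) = C * delta**(2/3)
# for a (discretized) solution lying in the range of K^T K.  Kernel, data,
# and the constant C are assumptions made for the example.
import numpy as np

n, C = 100, 1.0
t = (np.arange(n) + 0.5) / n
K = (1.0 / n) * np.exp(-(t[:, None] - t[None, :]) ** 2)
f_true = K.T @ (K @ np.sin(np.pi * t))          # f in the range of K^T K
g = K @ f_true

rng = np.random.default_rng(1)
for delta in (1e-2, 1e-3, 1e-4, 1e-5):
    noise = rng.standard_normal(n)
    g_delta = g + delta * noise / np.linalg.norm(noise)    # ||g - g_delta|| = delta
    alpha = C * delta ** (2.0 / 3.0)
    f_alpha = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g_delta)
    print(delta, np.linalg.norm(f_alpha - f_true))
# The error should decrease roughly like delta**(2/3) as delta shrinks.
</pre>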
 
 
V.A. Morozov {{Cite|Mo}} introduced an ''a posteriori'' parameter choice strategy (already used informally by D.L. Phillips {{Cite|Ph}}) called the discrepancy principle. According to this principle, there is a unique $\alpha(\delta)$ satisfying $\norm{Kf_{\alpha(\delta)}^\delta - g^\delta}_{H_2} = \delta$ and $\norm{f_{\alpha(\delta)}^\delta - f}_{H_1} \rightarrow 0$ as $\delta \rightarrow 0$. Furthermore, if $f$ is in the range of $K^*$, then $\norm{f_{\alpha(\delta)}^\delta - f}_{H_1} = \Order{\sqrt{\delta}}$, but this order is best possible {{Cite|Gr}}, p. 50. Discrepancy principles that attain the optimal order $\Order{\delta^{2/3}}$ have been devised by T. Raus, H. Gfrerer and H. Engl (see {{Cite|EnHaNe}}). The $\Order{\delta^{2/3}}$ brick wall for ordinary Tikhonov regularization can be scaled by using an iterated version of the regularization method. Iterated Tikhonov regularization consists of sequentially minimizing the functionals
 
$$
 
F_n(x;g^\delta) =
 
\norm{Kx - g^\delta}_{H_2}^2 + \alpha_n \norm{x_{n-1} - x}_{H_1}^2,
 
$$
 
where $x_{n-1}$ is the minimizer of $F_{n-1}(\,\cdot\,;g^\delta)$ and $\{\alpha_n\}$ is a suitable sequence of regularization parameters. Under appropriate conditions, this method attains an order of approximation $\Order{\delta^p}$ for any $p \in [0,1)$ (see {{Cite|HaGr}}).
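
Because the residual norm $\norm{Kf_\alpha^\delta - g^\delta}_{H_2}$ is a nondecreasing function of $\alpha$, the discrepancy equation can be solved by a one-dimensional root finder. The sketch below uses bisection on $\log_{10}\alpha$; the kernel $k(s,t)=e^{-|s-t|}$, the data, and the noise level are illustrative assumptions.

<pre>
# Sketch of Morozov's discrepancy principle: choose alpha such that
# ||K f_alpha^delta - g^delta|| = delta, here by bisection on log10(alpha).
# The kernel, data, and noise level are illustrative assumptions.
import numpy as np

n, delta = 100, 1e-3
t = (np.arange(n) + 0.5) / n
K = (1.0 / n) * np.exp(-np.abs(t[:, None] - t[None, :]))   # hypothetical kernel
g = K @ (t * (1.0 - t))                                    # data from an assumed solution
rng = np.random.default_rng(2)
noise = rng.standard_normal(n)
g_delta = g + delta * noise / np.linalg.norm(noise)        # ||g - g_delta|| = delta

def residual(alpha):
    f_alpha = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g_delta)
    return np.linalg.norm(K @ f_alpha - g_delta)

lo, hi = -16.0, 4.0                  # bracket for log10(alpha)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(10.0 ** mid) < delta:
        lo = mid
    else:
        hi = mid
alpha_star = 10.0 ** (0.5 * (lo + hi))
print(alpha_star, residual(alpha_star))     # the residual is approximately delta
</pre>

The iterated scheme admits an equally short sketch; the geometrically decreasing sequence $\alpha_n$ used here is one possible (nonstationary) choice, assumed for the example rather than prescribed above.

<pre>
# Sketch of iterated Tikhonov regularization: each step minimizes
# ||K x - g^delta||^2 + alpha_n ||x - x_{n-1}||^2, i.e. solves
# (K^T K + alpha_n I) x_n = K^T g^delta + alpha_n x_{n-1}.
# The kernel, data, and the sequence alpha_n are illustrative choices.
import numpy as np

n = 100
t = (np.arange(n) + 0.5) / n
K = (1.0 / n) * np.exp(-np.abs(t[:, None] - t[None, :]))
f_true = t * (1.0 - t)
rng = np.random.default_rng(3)
noise = rng.standard_normal(n)
g_delta = K @ f_true + 1e-4 * noise / np.linalg.norm(noise)

x = np.zeros(n)                       # x_0
alpha = 1e-2
for k in range(1, 9):
    x = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g_delta + alpha * x)
    print(k, np.linalg.norm(x - f_true))
    alpha *= 0.5                      # nonstationary: decreasing alpha_n
</pre>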
 
 
The computational implementation of the method of regularization requires a discretization procedure to produce a finite-dimensional problem. The interpretation of the method as a variational technique suggests finite-element methods, while its interpretation as an integral equation of the second kind suggests quadrature or degenerate-kernel methods (cf. also [[Degenerate kernel]]; [[Quadrature]]). In any case, a successful computational experience relies on a delicate balancing of regularization norm, regularization parameter, discretization parameters, and characteristics of the data error. For more on discretized ill-posed problems, see {{Cite|Ha}}.
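
As one concrete instance of the quadrature approach, the sketch below assembles the matrix approximation $K_{ij} = w_j k(s_i,t_j)$ with trapezoidal weights and solves the resulting regularized finite-dimensional system; the kernel, grid, and regularization parameter are again illustrative assumptions.

<pre>
# Sketch of a quadrature (trapezoidal-rule) discretization of the integral
# operator, followed by the regularized finite-dimensional normal equations.
# The kernel, grid, and regularization parameter are illustrative choices.
import numpy as np

n = 101
s = np.linspace(0.0, 1.0, n)                          # nodes s_i = t_i
w = np.full(n, 1.0 / (n - 1))                         # trapezoidal weights
w[0] = w[-1] = 0.5 / (n - 1)

K = np.exp(-(s[:, None] - s[None, :]) ** 2) * w[None, :]   # K[i, j] = w_j k(s_i, t_j)

alpha = 1e-6
f_assumed = s * (1.0 - s)                             # assumed solution for testing
g = K @ f_assumed                                     # noise-free data
f_alpha = np.linalg.solve(K.T @ K + alpha * np.eye(n), K.T @ g)
print(np.linalg.norm(f_alpha - f_assumed) / np.linalg.norm(f_assumed))
</pre>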
 
 
In Tikhonov's original work, the regularizing norm $\norm{\cdot}_{H_1}$ was taken to be a Sobolev norm (see [[Sobolev space]]) and the data norm $\norm{\cdot}_{H_2}$ was the $L^2$-norm. Therefore, the Euler equation characterizing the regularized approximation is an [[Integro-differential equation|integro-differential equation]] and the convergence of the approximations is uniform. Phillips, however, used a regularizing [[Semi-norm|semi-norm]] (the $L^2$-norm of the second derivative) and hence the Phillips method requires a more detailed analysis than Tikhonov's method (see {{Cite|Lo}}, {{Cite|Mo2}}).
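
In discrete form, Phillips's semi-norm corresponds to penalizing a second-difference approximation of $f''$. The following sketch (with kernel, grid, noise level and $\alpha$ all assumed for the illustration, and not reproducing Phillips's original computation) shows the resulting modification of the normal equations.

<pre>
# Sketch of Phillips-style regularization with the semi-norm ||f''||_{L^2},
# discretized by second differences: minimize ||K f - g||^2 + alpha ||L f||^2,
# i.e. solve (K^T K + alpha L^T L) f = K^T g.  Note that L^T L is only
# positive semi-definite (affine functions are not penalized), one reason the
# analysis is more delicate than for the full norm.
# All concrete choices below are illustrative.
import numpy as np

n = 100
t = (np.arange(n) + 0.5) / n
h = 1.0 / n
K = h * np.exp(-(t[:, None] - t[None, :]) ** 2)

# Second-difference operator: (L f)_i = (f_i - 2 f_{i+1} + f_{i+2}) / h^2.
L = np.zeros((n - 2, n))
for i in range(n - 2):
    L[i, i], L[i, i + 1], L[i, i + 2] = 1.0, -2.0, 1.0
L /= h ** 2

f_true = np.sin(np.pi * t)                       # assumed smooth solution
rng = np.random.default_rng(4)
noise = rng.standard_normal(n)
g_delta = K @ f_true + 1e-4 * noise / np.linalg.norm(noise)

alpha = 1e-12
f_alpha = np.linalg.solve(K.T @ K + alpha * L.T @ L, K.T @ g_delta)
print(np.linalg.norm(f_alpha - f_true) / np.linalg.norm(f_true))
</pre>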
 
 
====References====
 
{|
 
|-
 
|valign="top"|{{Ref|Do}}||valign="top"| J. Douglas Jr., "Mathematical programming and integral equations", ''Symp. Numerical Treatment of Ordinary Differential Equations, Integral and Integro-differential Equations (Rome, 1960)'', Birkhäuser (1960) pp. 269–274
 
|-
 
|valign="top"|{{Ref|EnHaNe}}||valign="top"| H.W. Engl, M.  Hanke, A. Neubauer, "Regularization of inverse problems", Kluwer Acad. Publ.  (1996)
 
|-
 
|valign="top"|{{Ref|Fr}}||valign="top"| V.M. Fridman, "Method of successive approximations for Fredholm integral equations of the first kind" ''Uspekhi Mat. Nauk'', '''11''' (1956) pp. 233–234 (In Russian)
 
|-
 
|valign="top"|{{Ref|Gr}}||valign="top"| C.W. Groetsch, "The theory of Tikhonov regularization for Fredholm equations of the first kind", Pitman (1984)
 
|-
 
|valign="top"|{{Ref|Ha}}||valign="top"| P.C. Hansen, "Rank-deficient and discrete ill-posed problems", SIAM (Soc. Industrial Applied Math.)  (1998)
 
|-
 
|valign="top"|{{Ref|HaGr}}||valign="top"| M. Hanke, C.W. Groetsch, "Nonstationary iterated Tikhonov regularization" ''J. Optim. Th. Appl.'', '''98''' (1998) pp.  37–53
 
|-
 
|valign="top"|{{Ref|La}}||valign="top"| L. Landweber, "An iteration formula for Fredholm integral equations of the first kind" ''Amer. J. Math.'', '''73''' (1951) pp.  615–624
 
|-
 
|valign="top"|{{Ref|Lo}}||valign="top"| A.K. Louis, "Inverse und schlecht gestellte Probleme", Teubner (1989)
 
|-
 
|valign="top"|{{Ref|Mc}}||valign="top"| R.W. McKelvey, "An analysis of approximative methods for Fredholm's integral equation of the first kind" ''Techn. Rep. David Taylor Model Basin, U.S. Navy Dep., Washington, D.C.'', '''19''' (1956)
 
|-
 
|valign="top"|{{Ref|Mo}}||valign="top"| V.A. Morozov, "On the solution of functional equations by the method of regularization" ' 'Soviet Math. Dokl.'', '''7''' (1966) pp.  414–417
 
|-
 
|valign="top"|{{Ref|Mo2}}||valign="top"| V.A. Morozov, "Regularization methods for ill-posed problems", CRC (1993)
 
|-
 
|valign="top"|{{Ref|Ph}}||valign="top"| D.L. Phillips, "A technique for the numerical solution of certain integral equations of the first kind" ''J. Assoc. Comput. Mach.'', '''9''' (1962) pp.  84–97
 
|-
 
|valign="top"|{{Ref|Pi}}||valign="top"| E. Picard, "Sur un théorème générale relatif aux équations intégrales de première espèce et sur quelques problèmes de physique mathématique" ''Rend.  Circ. Mat. Palermo'', '''29''' (1910) pp.  79–97
 
|-
 
|valign="top"|{{Ref|Ti}}||valign="top"| A.N. Tikhonov, "Solution of incorrectly formulated problems and the regularization method" ''Soviet Math. Dokl.'', '''4''' (1963) pp. 1035–1038
 
|-
 
|}
 
