Alternating algorithm

From Encyclopedia of Mathematics
Latest revision as of 16:56, 1 July 2020

An algorithm first proposed by J. von Neumann in 1933 [a6]. It gives a method for calculating the orthogonal projection $P _ { U \cap V }$ onto the intersection of two closed subspaces $U$ and $V$ in a Hilbert space $H$ in terms of the orthogonal projections $P : H \rightarrow U$ and $Q : H \rightarrow V$ (cf. also Orthogonal projector). The result is that

\begin{equation*} \lim _ { n \rightarrow \infty } ( P Q ) ^ { n } f = P _ { U \cap V }\, f \text { for all } f \in H . \end{equation*}

It can be equivalently formulated as

\begin{equation} \tag{a1} \lim _ { n \rightarrow \infty } ( ( I - Q ) ( I - P ) ) ^ { n } f = ( I - P _ { \overline{U + V} } ) f \end{equation}

for all $f \in H$. Here, $P _ {\overline{U+V}}$ is the orthogonal projection of $H$ onto the closure $\overline{U+V}$ of the subspace $U + V$. Since it was first proposed, this algorithm has undergone many generalizations, mainly concerning the kind of spaces in which the algorithm can be set. It occurs in a large number of practical applications, such as domain decomposition methods for solving linear systems of equations, and certain multi-grid schemes used for solving elliptic partial differential equations. For a survey account of a wide collection of applications, see [a2].
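As a concrete numerical illustration (not part of the original article), the iteration $(PQ)^n f$ can be run for two planes in $\mathbf{R}^3$; the subspaces, the test vector and the helper `proj` below are choices made purely for this sketch:

```python
import numpy as np

def proj(A):
    """Orthogonal projection matrix onto the column space of A."""
    return A @ np.linalg.solve(A.T @ A, A.T)

# U = the xy-plane, V = span{(0,1,0), (1,0,1)}; their intersection is span{(0,1,0)}.
U = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])
V = np.array([[0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
P, Q = proj(U), proj(V)

f = np.array([1.0, 2.0, 3.0])
x = f.copy()
for _ in range(60):          # iterate (PQ)^n f
    x = P @ (Q @ x)

# The limit is the projection of f onto U ∩ V = span{(0,1,0)}.
print(np.round(x, 8))        # → [0. 2. 0.]
```

Here the component of the iterate along $e_1$ is halved on each full $PQ$ cycle, consistent with linear convergence.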

The algorithm easily admits a generalization to a finite number of subspaces of $H$. Let $f$ be a member of the Hilbert space $H$, and let $U_i$, $i = 1 , \dots , n$, be closed subspaces of $H$. Let $U = \cap _ { i = 1 } ^ { n } U _ { i }$, and let $P$ be the orthogonal projection of $H$ onto $U$. Let $P _ { i } : H \rightarrow U _ { i }$ be the orthogonal projection onto $U_i$, $i = 1 , \dots , n$. Given $f \in H$, define $\{ f _ { k } \} _ { k = 1 } ^ { \infty }$ by $f _ { k } = ( P _ { n } \cdots P _ { 1 } ) ^ { k } f$, for $k = 1, 2 , \ldots$. The elements $f _ { k }$ are the iterates in the alternating algorithm, and the analogous convergence result is that $\| f _ { k } - P f \| \rightarrow 0$ as $k \rightarrow \infty$. Quite a while later, other authors became interested in the rate of convergence of this algorithm. It was verified by N. Aronszajn [a1] in the case of only two subspaces that the rate of convergence is usually linear. That is, there is a constant $c < 1$ such that $\| f _ { k } - P _ { U \cap V } f \| \leq c ^ { 2 k - 1 } \| f \|$ for all $f$ in $H$. The number $c$ depends on the notion of an angle between two subspaces of a Hilbert space. The angle $\alpha$ between $U$ and $V$ is given by
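The cyclic iteration $f_k = (P_n \cdots P_1)^k f$ can be sketched numerically as well; the three subspaces of $\mathbf{R}^4$ below are illustrative choices whose intersection is $\operatorname{span}\{e_1\}$:

```python
import numpy as np

def proj(A):
    """Orthogonal projection matrix onto the column space of A."""
    return A @ np.linalg.solve(A.T @ A, A.T)

# Three closed subspaces of R^4 whose common intersection is span{e1}.
U1 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0], [0.0, 0.0]])  # span{e1, e2}
U2 = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 1.0], [0.0, 0.0]])  # span{e1, (0,1,1,0)}
U3 = np.array([[1.0, 0.0], [0.0, 0.0], [0.0, 0.0], [0.0, 1.0]])  # span{e1, e4}
Ps = [proj(U) for U in (U1, U2, U3)]

f = np.array([1.0, 2.0, 3.0, 4.0])
x = f.copy()
for _ in range(100):                 # each pass applies P3 P2 P1
    for P in Ps:
        x = P @ x
print(np.round(x, 8))                # → [1. 0. 0. 0.], the projection of f onto span{e1}
```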

\begin{equation*} \cos \alpha = \sup \{ \langle u , v \rangle : u \in U \cap V ^ { \perp } ,\; v \in V \cap U ^ { \perp } ,\; \| u \| , \| v \| \leq 1 \} . \end{equation*}

In the present context, $c = \cos \alpha$. In [a7], K.T. Smith, D.C. Solman and S.L. Wagner gave the analogous result for more than two subspaces. The convergence constant $c$ remains a function of the angles between the subspaces, but is only a little more complicated than in the case of two subspaces. Because of the importance of the rate of convergence, later authors have given improved rates. To get a very good expression for the rate of convergence, one needs to be much more careful over the angles specified in the Smith–Solman–Wagner result. The best result known today (2000) is given in [a5].
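The constant $c = \cos\alpha$ can be computed numerically as a largest singular value. In the sketch below (the subspaces are chosen here purely for illustration), $U$ is the $xy$-plane in $\mathbf{R}^3$ and $V = \operatorname{span}\{(0,1,0),(1,0,1)\}$, so $W = U \cap V = \operatorname{span}\{(0,1,0)\}$, and the supremum in the definition is attained over $U \cap W^{\perp}$ and $V \cap W^{\perp}$:

```python
import numpy as np

# U = the xy-plane, V = span{(0,1,0), (1,0,1)}, W = U ∩ V = span{(0,1,0)}.
# The angle alpha is measured between U ∩ W^⊥ and V ∩ W^⊥,
# here spanned by e1 and (1,0,1)/√2 respectively.
Bu = np.array([[1.0], [0.0], [0.0]])                 # orthonormal basis of U ∩ W^⊥
Bv = np.array([[1.0], [0.0], [1.0]]) / np.sqrt(2.0)  # orthonormal basis of V ∩ W^⊥

# cos(alpha) is the largest singular value of Bu^T Bv.
c = np.linalg.svd(Bu.T @ Bv, compute_uv=False)[0]
print(round(c, 4))   # → 0.7071 (= 1/√2, so the error contracts by c^2 = 1/2 per PQ cycle)
```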

The Lorch theorem states that the angle between two subspaces $U$ and $V$ of a Hilbert space $H$ is strictly positive if and only if $U + V$ is a closed subspace of $H$. So if the sum is not closed, the result of Aronszajn does not provide linear convergence, and it is of interest to enquire what takes place in this situation. A result of C. Franchetti and W.A. Light [a4] shows that if $U + V$ is not closed, then the rate of convergence of the alternating algorithm may be arbitrarily slow.

Interesting questions arise when the algorithm is considered not as an algorithm in a Hilbert space, but in a Banach space with some fairly strong assumptions on the norm. The operators $P_i$ and $P$ are then of course the operators of best approximation, so one needs at least to guarantee that such operators exist. For example, the ambient space $X$ can be a strictly convex and reflexive Banach space (cf. also Reflexive space; Smooth space). In fact, W.J. Stiles proved [a8] that if the dimension of such a space is at least $3$, the convergence of the algorithm for all $f$ in $X$ and for all pairs $U$ and $V$ of closed subspaces suffices to guarantee that $X$ is a Hilbert space.

Stiles also proved that in a finite-dimensional, smooth and strictly convex space, the alternating algorithm as given in (a1) is always effective. Several authors have given improvements of this result (see [a2]). The results of Franchetti and Light [a3] show that the convergence of the algorithm can still be linear in this case, with an appropriate modification to the definition of the angle between subspaces.

Finally, there are a number of interesting extensions where the projections are not uniquely defined. Instead, one may work in a space $X$ with subspaces $U$ and $V$ which do not guarantee unique best approximations. An example of this is the Diliberto–Straus algorithm, where $X = C ( S \times T )$, $U = C ( S )$ and $V = C ( T )$. In this algorithm, which is a realization of (a1), a selection must be employed to define the mappings $P$ and $Q$.
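On a finite grid, a discrete version of the Diliberto–Straus iteration can be sketched as follows. This is an illustration under stated assumptions, not the original continuous construction: $S$ and $T$ are taken to be finite point sets, and the best sup-norm approximation of a function of one variable by a constant is the midpoint $(\max + \min)/2$, which serves as the selection here:

```python
import numpy as np

def center_rows(F):
    """Best sup-norm approximation of each row by a constant: (max + min)/2."""
    return (F.max(axis=1, keepdims=True) + F.min(axis=1, keepdims=True)) / 2

def diliberto_straus(F, iters=50):
    """Alternately remove the best C(S)- and C(T)-parts of F (grid version)."""
    R = F.copy()
    for _ in range(iters):
        R = R - center_rows(R)        # subtract u(s): best constant in t, per row
        R = R - center_rows(R.T).T    # subtract v(t): best constant in s, per column
    return R

s = np.linspace(0.0, 1.0, 20)
t = np.linspace(0.0, 1.0, 20)
F = np.add.outer(s**2, np.cos(t))     # F(s,t) = s^2 + cos(t) already lies in C(S) + C(T)
R = diliberto_straus(F)
print(np.abs(R).max() < 1e-12)        # → True: the residual of a sum u(s) + v(t) is 0
```

For a general $F$, the residual converges to $F$ minus a best approximation from $C(S) + C(T)$; the example above starts from a function that is itself such a sum, so the residual vanishes.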

References

[a1] N. Aronszajn, "Theory of reproducing kernels" Trans. Amer. Math. Soc. , 68 (1950) pp. 337–404
[a2] F. Deutsch, "The method of alternating projections" S.P. Singh (ed.) , Approximation Theory, Spline Functions and Applications , Kluwer Acad. Publ. (1992) pp. 105–121
[a3] C. Franchetti, W.A. Light, "The alternating algorithm in uniformly convex spaces" J. London Math. Soc. , 29 : 2 (1984) pp. 454–555
[a4] C. Franchetti, W.A. Light, "On the von Neumann alternating algorithm in Hilbert Space" J. Math. Anal. Appl. , 114 (1986) pp. 305–314
[a5] S. Kaylar, H.L. Weinert, "Error bounds for the method of alternating projections" Math. Control Signals Syst. , 1 (1988) pp. 43–59
[a6] J. von Neumann, "Functional operators II. The geometry of orthogonal spaces" , Ann. of Math. Stud. , 22 , Princeton Univ. Press (1950)
[a7] K.T. Smith, D.C. Solman, S.L. Wagner, "Practical and mathematical aspects of the problem of reconstructing objects from radiographs" Bull. Amer. Math. Soc. , 83 (1977) pp. 1227–1270
[a8] W.J. Stiles, "Closest point maps and their products" Nieuw Archief voor Wiskunde , 13 : 3 (1965) pp. 19–29
How to Cite This Entry:
Alternating algorithm. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Alternating_algorithm&oldid=50115
This article was adapted from an original article by W.A. Light (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article