Adaptive quadrature
It is a common task to compute an approximate value of an integral, the simplest case being
$$I(f)=\int\limits_a^bf(x)\,dx.$$
If the integrand $f$ is given by a Fortran-like expression, then one can use symbolic methods, interval methods, or numerical methods. If the integrand $f$ is given only by function values, then only numerical methods of the form
$$I_n(f)=\phi_n(f(x_1),\dots,f(x_n))\tag{a1}$$
can be used. A method \ref{a1} is called non-adaptive if $n$ and the knots $x_k$ are fixed in advance and do not depend on $f$. One might hope that it is possible to learn about $f$ during the computation of $f(x_1),\dots,f(x_{k-1})$ in such a way that one can choose the next knot $x_k$ suitably to reduce the error. Therefore, one studies adaptive methods, where the number $n$ depends on $f$ and the choice of $x_k$ may depend on the already computed function values. In mathematical statistics, adaptive information is known as sequential design and non-adaptive information as non-sequential design.
The use of adaptive algorithms is widespread. Often these methods are based on a way to estimate the error in different subintervals of $[a,b]$. One then adds new knots in those intervals where a high error is expected. See [a1], [a2], [a4], [a6] for detailed information concerning these strategies.
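As an illustration of such a strategy, the sketch below implements a recursive adaptive Simpson rule. It is only one possible realization (the name `adaptive_simpson`, the tolerance handling, and the recursion limit are illustrative choices, not taken from the cited references): each subinterval is bisected whenever its local error estimate exceeds the tolerance allotted to it, so new knots accumulate exactly where a high error is expected.

```python
def adaptive_simpson(f, a, b, tol=1e-8, max_depth=50):
    # Adaptive strategy: bisect a subinterval whenever its local error
    # estimate exceeds the tolerance allotted to it, so the knots actually
    # used depend on the function values computed so far.
    def simpson(fa, fm, fb, a, b):
        return (b - a) / 6.0 * (fa + 4.0 * fm + fb)

    def recurse(a, b, fa, fm, fb, whole, tol, depth):
        m = 0.5 * (a + b)
        lm, rm = 0.5 * (a + m), 0.5 * (m + b)
        flm, frm = f(lm), f(rm)
        left = simpson(fa, flm, fm, a, m)
        right = simpson(fm, frm, fb, m, b)
        err = (left + right - whole) / 15.0   # standard local error estimate
        if depth == 0 or abs(err) <= tol:
            return left + right + err          # accept this subinterval
        return (recurse(a, m, fa, flm, fm, left, 0.5 * tol, depth - 1) +
                recurse(m, b, fm, frm, fb, right, 0.5 * tol, depth - 1))

    m = 0.5 * (a + b)
    fa, fm, fb = f(a), f(m), f(b)
    return recurse(a, b, fa, fm, fb, simpson(fa, fm, fb, a, b), tol, max_depth)

# Example: the integrand has a sharp peak near x = 0.5, so knots cluster there.
value = adaptive_simpson(lambda x: 1.0 / (1e-4 + (x - 0.5) ** 2), 0.0, 1.0)
```

Applied to the peaked integrand of the example, such a scheme places many knots near $x=0.5$ and few elsewhere, which is precisely the behaviour described above.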
A non-adaptive method provides an immediate decomposition for parallel computation. If adaptive information is superior to non-adaptive information, then an analysis of the trade-off between using adaptive or non-adaptive information on a parallel computer should be carried out.
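A brief sketch of this observation (again with illustrative names, `panel_trapezoid` and `parallel_trapezoid`, assumed only for the example): with fixed knots, $[a,b]$ splits into panels whose contributions can be computed independently and then summed.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def panel_trapezoid(f, lo, hi, n=1001):
    # Composite trapezoidal rule on one panel; its knots are fixed in advance.
    x = np.linspace(lo, hi, n)
    y = f(x)
    h = (hi - lo) / (n - 1)
    return h * (0.5 * y[0] + y[1:-1].sum() + 0.5 * y[-1])

def parallel_trapezoid(f, a, b, panels=8):
    # Since no knot depends on f, the panels are independent of one another
    # and can be handed to separate workers, then summed.
    edges = np.linspace(a, b, panels + 1)
    with ThreadPoolExecutor() as pool:
        parts = pool.map(lambda lo_hi: panel_trapezoid(f, *lo_hi),
                         zip(edges[:-1], edges[1:]))
        return sum(parts)

# Example: integrate exp on [0, 1]; the exact value is e - 1.
value = parallel_trapezoid(np.exp, 0.0, 1.0)
```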
To prove any bounds for the error $|I(f)-I_n(f)|$, one must have some a priori knowledge concerning $f$, such as $f\in F$, where $F$ is a class of integrable functions. Without such assumptions on $f$, the error of any method (a1) can be arbitrarily large. Much is known concerning the power of adaption; [a5] contains known results as well as new results of the authors. See also the survey [a3]. One can study the worst case as well as the average case and errors for randomized (Monte-Carlo) methods (cf. also Monte-Carlo method). It may happen that adaption does not help in the worst-case setting but helps significantly with respect to other settings.
Under certain assumptions, adaptive methods are not better than non-adaptive ones. N.S. Bakhvalov (1971) proved (see [a3], [a5]) that adaption does not help in the worst-case setting if $F$ is a symmetric and convex set of functions. For other natural classes, however, adaption helps significantly; in particular, this is the case for certain classes of functions with singularities. The power of adaption depends critically on the a priori knowledge concerning the problem being studied; even a seemingly small change in the assumptions can lead to a different answer.
References
[a1] P.J. Davis, P. Rabinowitz, "Methods of numerical integration", 2nd ed., Acad. Press (1984) MR0760629 Zbl 0537.65020
[a2] A.R. Krommer, C.W. Ueberhuber, "Numerical integration on advanced computer systems", Lecture Notes in Computer Science, 848, Springer (1994) MR1324342 Zbl 0825.65012 Zbl 0903.65019
[a3] E. Novak, "On the power of adaption", J. Complexity, 12 (1996) pp. 199–237 MR1408328 Zbl 0870.65042
[a4] R. Piessens, E. de Doncker-Kapenga, C.W. Überhuber, D.K. Kahaner, "Quadpack", Springer (1983) MR0712135 Zbl 0508.65005
[a5] J.F. Traub, G.W. Wasilkowski, H. Woźniakowski, "Information-based complexity", Acad. Press (1988) MR0958691 Zbl 0654.94004
[a6] D. Zwillinger, "Handbook of integration", Jones and Bartlett (1992) MR1174295 Zbl 0758.65017