# Information-based complexity

Information-based complexity is commonly abbreviated as IBC.

Computational complexity studies the intrinsic difficulty of solving mathematically posed problems. Discrete computational complexity studies discrete problems and often uses the Turing machine model of computation. Continuous computational complexity studies continuous problems and tends to use the real number model; see [a1], [a3] as well as Optimization of computational algorithms.

Information-based complexity is part of continuous computational complexity. Typically, it studies infinite-dimensional problems for which the input is an element of an infinite-dimensional space. Examples of such problems include multivariate integration or approximation, solution of ordinary or partial differential equations, integral equations, optimization, and solution of non-linear equations. The input of such problems is usually a multivariate function on the real numbers, represented by an oracle (subroutine) which computes, for example, the function value at a given point. This oracle can be used only a finite number of times. That is why only partial information is available when solving an infinite-dimensional problem on a digital computer. This partial information is often contaminated with errors such as round-off errors, measurement errors, or human errors. Thus, the available information is partial and/or contaminated. Therefore, the original problem can be solved only approximately. The goal of information-based complexity is to compute such an approximation at minimal cost. The error and the cost of approximation can be defined in different settings including worst case, average case (see also Bayesian numerical analysis), probabilistic, randomized (see also Monte-Carlo method; Stochastic numerical algorithm) and mixed settings.
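The oracle model described above can be made concrete with a small sketch. The following Python fragment is purely illustrative (the class and names are hypothetical, not from the literature): an oracle wraps the input function, answers function-value queries, and counts how many queries are made, which is exactly the quantity that the information cost charges for.

```python
import math

class ValueOracle:
    """Hypothetical oracle: answers f(x) queries and counts them.

    In the information-based complexity model, an algorithm may query
    the input function only finitely many times; the number of queries
    is what the information cost measures.
    """
    def __init__(self, f):
        self._f = f
        self.calls = 0

    def __call__(self, x):
        self.calls += 1       # each query is charged to the information cost
        return self._f(x)

# Example: an algorithm sees sin only through the oracle.
oracle = ValueOracle(math.sin)
samples = [oracle(k / 10) for k in range(11)]
print(oracle.calls)  # 11 queries -> only partial information about sin
```

After these eleven queries the algorithm knows nothing about the input function between the sample points, which is why any answer it produces can only be an approximation.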

The $\epsilon$-complexity is defined as the minimal cost of computing an approximation with error at most $\epsilon$. For many problems, sharp complexity bounds are obtained by studying only the information cost, i.e., the cost of the oracle calls.
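A standard worst-case example makes this concrete (sketched here in Python; the function names are hypothetical). For integrating functions on $[0,1]$ with Lipschitz constant $1$ from $n$ function values, the composite midpoint rule is worst-case optimal with error $1/(4n)$, so the information cost of guaranteeing error $\epsilon$ is $n(\epsilon) = \lceil 1/(4\epsilon) \rceil$.

```python
import math

def info_cost(eps):
    """Oracle calls needed for worst-case error <= eps when integrating
    Lipschitz-1 functions on [0, 1]; the midpoint rule attains 1/(4n)."""
    return math.ceil(1.0 / (4.0 * eps))

def midpoint_rule(f, n):
    """Composite midpoint rule using n equally spaced function values."""
    return sum(f((k + 0.5) / n) for k in range(n)) / n

eps = 1e-3
n = info_cost(eps)                    # 250 oracle calls suffice
approx = midpoint_rule(math.sin, n)   # sin is Lipschitz with constant 1
exact = 1.0 - math.cos(1.0)
print(n, abs(approx - exact) <= eps)
```

For a smooth input such as `sin` the actual error is far below the worst-case bound; the bound $1/(4n)$ is attained by the hardest Lipschitz-1 inputs.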

Information-based complexity is a branch of computational complexity and is formulated as an abstract theory with applications in many areas. Since continuous computational problems arise in many fields, information-based complexity shares interests with, and has greatly benefited from, these fields. Questions, concepts, and results from approximation theory, numerical analysis, applied mathematics, statistics, complexity theory, algorithmic analysis, number theory, analysis and measure theory have all been influential. See also [a2], [a4], [a5], [a6], [a8].

Recent emphasis in information-based complexity is on the study of complexity of linear multivariate problems involving functions of $d$ variables with large or even infinite $d$. For example, for path integration one has $d = + \infty$, while the approximation of path integrals yields multivariate integration with huge $d$, see [a7].
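The discretization step can be sketched in a few lines of Python (illustrative only; the choice of functional, grid size, and sample count are assumptions for this example). Replacing a Brownian path $W$ by a random walk on $d$ grid points turns the path integral into a $d$-dimensional integral, estimated here by plain Monte Carlo. Since $\int_0^1 W(t)\,dt$ is Gaussian with mean $0$ and variance $1/3$, the exact value for the functional $g = \exp$ is $e^{1/6} \approx 1.1814$.

```python
import math
import random

def path_integral_mc(g, d=100, n_samples=20000, seed=1):
    """Estimate E[g(integral_0^1 W(t) dt)] for Brownian motion W by
    discretizing the path on d time steps (a d-dimensional integral)
    and averaging n_samples Monte Carlo draws."""
    rng = random.Random(seed)
    dt = 1.0 / d
    total = 0.0
    for _ in range(n_samples):
        w, riemann = 0.0, 0.0
        for _ in range(d):
            w += rng.gauss(0.0, math.sqrt(dt))  # Brownian increment
            riemann += w * dt                   # Riemann sum for the path integral
        total += g(riemann)
    return total / n_samples

est = path_integral_mc(math.exp)
print(est, math.exp(1.0 / 6.0))  # estimate vs exact value e^{1/6}
```

The estimate carries both a discretization bias (from finite $d$) and a sampling error (from finite `n_samples`), which is exactly the trade-off studied when analyzing the complexity of path integration.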

Linear multivariate problems are studied in various settings. The main question is to determine which linear multivariate problems are tractable, i.e., which have complexity polynomial in $d$ and in $\epsilon ^ { - 1 }$. The Smolyak algorithm, or a modification of it, is often nearly optimal for general multivariate problems defined over tensor product spaces. Some such problems are tractable even in the worst-case setting.
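To give a feel for why the Smolyak construction helps, the following Python sketch counts the points of a simple sparse grid and compares them with a full tensor-product grid of matching one-dimensional resolution. All names are hypothetical, and the nested dyadic point sets and level choice are assumptions for illustration, not the construction used in any particular reference.

```python
from itertools import product

def grid_1d(level):
    """Nested dyadic point set: 1 point at level 1, 2**(level-1)+1 afterwards."""
    if level == 1:
        return [0.5]
    n = 2 ** (level - 1)
    return [k / n for k in range(n + 1)]

def sparse_grid(d, level):
    """Smolyak-type sparse grid: union of tensor grids over multi-indices
    (i_1, ..., i_d) with i_1 + ... + i_d <= d + level - 1."""
    points = set()
    def rec(dim, budget, prefix):
        if dim == d:
            points.update(product(*(grid_1d(i) for i in prefix)))
            return
        # each remaining dimension still needs an index of at least 1
        for i in range(1, budget - (d - dim - 1) + 1):
            rec(dim + 1, budget - i, prefix + (i,))
    rec(0, d + level - 1, ())
    return points

d, level = 5, 3
full = (2 ** (level - 1) + 1) ** d        # full tensor grid: 5**5 = 3125 points
sparse = len(sparse_grid(d, level))
print(sparse, full)                        # far fewer sparse points than 3125
```

The full tensor grid grows exponentially in $d$ at fixed resolution, while the sparse grid grows only mildly, which is the mechanism behind the near-optimality of Smolyak-type algorithms on tensor product spaces.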

In the worst-case setting, many multivariate problems defined over isotropic spaces are intractable. This means that they suffer the curse of dimension: their complexity is an exponential function of $d$. The curse of dimension can sometimes be broken by switching to another setting, such as the average case or randomized setting, or by shrinking the class of input functions.
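One way the randomized setting breaks the curse for integration can be sketched as follows (illustrative Python; the test function and sample sizes are assumptions). Plain Monte Carlo has error of order $n^{-1/2}$ for $n$ samples, independent of $d$, whereas a tensor-product grid needs a number of points exponential in $d$ even at the coarsest resolution.

```python
import random

def mc_integrate(f, d, n_samples, seed=7):
    """Monte Carlo estimate of the integral of f over [0,1]^d.
    The n^{-1/2} error rate does not depend on the dimension d."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        x = [rng.random() for _ in range(d)]
        total += f(x)
    return total / n_samples

# f is the mean of the coordinates; its integral over [0,1]^d is 0.5 for any d.
f = lambda x: sum(x) / len(x)

d, n = 20, 4000
est = mc_integrate(f, d, n)
grid_points_needed = 2 ** d   # even 2 points per axis already costs 2^20 points
print(est, grid_points_needed)
```

Here 4000 random samples give a good estimate in dimension 20, while any grid with just two points per coordinate direction would already require over a million function values.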

How to Cite This Entry:
Information-based complexity. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Information-based_complexity&oldid=50736
This article was adapted from an original article by Henryk Woźniakowski (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.