Approximation of functions


The replacement, according to a definite rule, of a function $f$ by a function $\varphi$ close to it in some sense and belonging to a set $\mathfrak{N}$ (the approximating set) that is prescribed in advance. It is assumed that $f$ is defined on a set $Q$ in $m$-dimensional Euclidean space (the real axis being a particular case) on which the approximation is carried out; $f$ can either be given explicitly in terms of elementary functions or can be the solution of some equation. If only incomplete information about $f$ is available, the approximation problem is in fact a matter of approximating the whole class of functions defined by the information available.

In practice, the need to approximate functions arises in various situations: when a function has to be replaced by a smoother one, or by one that is simpler and more convenient in computations, or when a functional dependence has to be established on the basis of experimental data, etc.

In the general problem of the approximation of functions it is usually possible to single out the following more particular problems: the choice of the approximating set $\mathfrak{N}$; the choice of the measure of the error of approximation; the choice of the method of approximation, i.e. the rule according to which a function $\varphi$ in $\mathfrak{N}$ is made to correspond to $f$; and the investigation and estimation of the error of the approximation.

With respect to the choice of the approximating set $\mathfrak{N}$, apart from the necessity of ensuring the required accuracy of the approximation, one is guided by the aim of dealing with structurally simple functions that are convenient in computations and that can be subjected to a priori specified conditions, e.g. related to smoothness.

The classical tools for approximation consist of algebraic polynomials (if $Q$ is a bounded closed set) and trigonometric polynomials (in the periodic case) in one or more variables. Their widespread application as approximating sets is due, in particular, to the fact that it is in principle possible to approximate any continuous function by means of algebraic or trigonometric polynomials with any degree of accuracy specified in advance. The accuracy of the approximation can be increased at the expense of increasing the degree of the polynomial; however, this complicates the approximating process and increases the difficulties of applying it. In practice one takes subspaces of algebraic or trigonometric polynomials of a fixed degree as approximating sets and one aims at obtaining the accuracy required while keeping the degrees of the polynomials used as low as possible. A more general, and at the same time more flexible, approximation tool is obtained by considering generalized polynomials

$$\varphi(t) = \sum_{k=1}^{n} c_k \varphi_k(t),$$

where $\{\varphi_k(t)\}_{k=1}^{n}$ is a system of linearly independent functions which should be chosen in such a way that the conditions of the problem at hand and the a priori conditions on $f$ are taken into account.
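A generalized polynomial of this kind can, for illustration, be fitted numerically by least squares. In the following Python sketch the system $\{\varphi_k\}$ and the function $f$ are arbitrary examples, not prescribed by the theory.

```python
import numpy as np

# Illustrative sketch: least-squares fit of a generalized polynomial
#   phi(t) = sum_k c_k * phi_k(t)
# in a hand-picked system of linearly independent functions.
basis = [lambda t: np.ones_like(t),
         lambda t: t,
         lambda t: np.sin(np.pi * t),
         lambda t: np.exp(t)]

f = lambda t: np.abs(t)                        # function to approximate (arbitrary example)
t = np.linspace(-1.0, 1.0, 501)                # sample points in Q = [-1, 1]
A = np.column_stack([phi(t) for phi in basis]) # design matrix with columns phi_k(t)
c, *_ = np.linalg.lstsq(A, f(t), rcond=None)   # coefficients c_k
print("max error on the grid:", np.max(np.abs(A @ c - f(t))))
```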

In many problems splines (cf. Spline) have been found to be more natural and more convenient from the viewpoint of computation than classical polynomials. If

$$a = t_0 < t_1 < \dots < t_n = b \tag{1}$$

is a fixed subdivision of the interval $[a,b]$, then a (polynomial) spline of degree $r$ and defect $k$ ($1 \le k \le r$) with respect to the subdivision (1) is a function $s$ composed of algebraic polynomials of degree $r$ "tied together" at the points $t_i$ in such a way that on the whole interval $[a,b]$ the spline is continuous together with its derivatives up to and including order $r-k$. Thus $s \in C^{r-k}[a,b]$ and on each interval $[t_{i-1},t_i]$ ($i=1,\dots,n$) $s$ is an algebraic polynomial of degree at most $r$. For instance, a polygonal line with corners at the nodes (knots, joints) $t_i$ is a spline of degree one and defect one; a function that is continuously differentiable on $[a,b]$ and that coincides on $[t_{i-1},t_i]$, $i=1,\dots,n$, with a cubic polynomial is a cubic spline of defect 2, etc. Splines in two or more variables are similarly defined. Having finite smoothness properties, splines have a greater local flexibility than polynomials; a change in the values of a spline in an interval $[t_{i-1},t_i]$ does not much affect (if at all) its behaviour outside $[t_{i-1},t_i]$. Apart from the simplicity of their implementation on computers, the advantages of splines show up, in particular, when the information about the function to be approximated has a discrete character, for instance, when the values of $f$ and, possibly, of some of its derivatives, are given at some points.
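For illustration, splines of degree one and three on a given subdivision can be constructed with SciPy; the knots and the function in the following sketch are arbitrary examples.

```python
import numpy as np
from scipy.interpolate import CubicSpline, interp1d

# Illustrative sketch: a degree-1 spline (polygonal line) and a cubic
# spline built on the same subdivision a = t_0 < ... < t_n = b.
t = np.linspace(0.0, 2.0 * np.pi, 9)             # knots of the subdivision
y = np.sin(t)                                     # values of f at the knots

linear_spline = interp1d(t, y, kind="linear")     # degree 1, defect 1
cubic_spline = CubicSpline(t, y)                  # degree 3, twice continuously differentiable

x = np.linspace(0.0, 2.0 * np.pi, 1001)
print("max error, polygonal line:", np.max(np.abs(np.sin(x) - linear_spline(x))))
print("max error, cubic spline:  ", np.max(np.abs(np.sin(x) - cubic_spline(x))))
```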

If $f$ has singularities or if the approximation is carried out on an unbounded domain, a convenient tool for approximation consists of rational fractions $p(t)/q(t)$, where $p$ and $q$ are algebraic polynomials (rational approximation, Padé approximation). Non-periodic functions defined on the whole real axis can also be approximated by entire functions of exponential type.
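For illustration, a Padé approximant (a rational fraction built from the Taylor coefficients of $f$) can be computed with SciPy's scipy.interpolate.pade; the choice $f(t)=e^t$ in the following sketch is an arbitrary example.

```python
import numpy as np
from scipy.interpolate import pade
from math import factorial

# Illustrative sketch: the [2/2] Pade approximant of exp(t), a rational
# fraction p(t)/q(t) constructed from the Taylor coefficients of exp.
taylor = [1.0 / factorial(k) for k in range(5)]   # 1, 1, 1/2, 1/6, 1/24
p, q = pade(taylor, 2)                            # numerator and denominator as poly1d

x = np.linspace(-1.0, 1.0, 401)
print("max error of the [2/2] Pade approximant on [-1, 1]:",
      np.max(np.abs(np.exp(x) - p(x) / q(x))))
```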

The measure of the error of approximation, $\mu(f,\varphi)$, is usually chosen by taking into account the conditions of the concrete problem and the information available about the function to be approximated. Most frequently it is a matter of choosing a function space containing $f$ in whose metric it is expedient to estimate the error of approximation. If

$$\mu(f,\varphi) = \sup_{t \in Q} |f(t) - \varphi(t)|,$$

then one deals with uniform, or Chebyshev, approximation, while if

$$\mu(f,\varphi) = \left( \int_Q |f(t) - \varphi(t)|^p \, dt \right)^{1/p}, \qquad p \ge 1,$$

then one has mean-power approximation, or $L_p$-approximation, which in the case $p=1$ is called approximation in the mean. A particularly important case is that of $p=2$, mean-square approximation: the error of the best approximation of the function $f$ by a finite-dimensional subspace can be expressed exactly in terms of certain determinants. In some problems the requirements of proximity (closeness) of the functions $f$ and $\varphi$ are different at different points; in order to take this lack of homogeneity into account, a weight function $q(t) > 0$ is introduced and weighted approximation is considered, where the measure of the error of approximation is given by

$$\mu(f,\varphi) = \sup_{t \in Q} q(t)\,|f(t) - \varphi(t)|$$

or

$$\mu(f,\varphi) = \left( \int_Q q(t)\,|f(t) - \varphi(t)|^p \, dt \right)^{1/p}.$$

The weight function $q(t)$ also allows one to see to it that the error is finite when, for instance, $Q$ is unbounded. If the error of approximation has to take into account the closeness of $f$ and $\varphi$ only at some individual points $t_k$, $k=1,\dots,N$, in $Q$, then as $\mu(f,\varphi)$ one may take one of the quantities

$$\max_{1 \le k \le N} |f(t_k) - \varphi(t_k)|$$

or

$$\left( \sum_{k=1}^{N} |f(t_k) - \varphi(t_k)|^p \right)^{1/p}, \qquad p \ge 1,$$

into which weight coefficients can also be introduced.
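As a numerical illustration, the uniform, mean and mean-square errors of a concrete approximation can be compared directly; in the following Python sketch the function and the approximating polynomial are arbitrary examples.

```python
import numpy as np

# Illustrative sketch: uniform (Chebyshev), mean (L1) and mean-square (L2)
# errors of the Taylor polynomial 1 + t + t^2/2 as an approximation of
# f(t) = exp(t) on Q = [0, 1], estimated on a dense grid.
f = np.exp
phi = lambda t: 1.0 + t + 0.5 * t**2

t = np.linspace(0.0, 1.0, 100001)
diff = np.abs(f(t) - phi(t))
dt = t[1] - t[0]
print("uniform error   :", diff.max())
print("L1 error (mean) :", np.sum(diff) * dt)            # Riemann-sum estimate of the integral
print("L2 error        :", np.sqrt(np.sum(diff**2) * dt))
```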

When deciding according to which rule the approximating function $\varphi$ should be chosen from the set $\mathfrak{N}$ (when choosing the method of approximation), it is evident that one should strive for the highest possible accuracy of approximation and at the same time for simplicity in the construction of $\varphi$, taking into account the information available about $f$. The first requirement leads to a function $\varphi_0$ in $\mathfrak{N}$ that is "nearest" to $f$, i.e. is such that

$$\mu(f,\varphi_0) = \inf_{\varphi \in \mathfrak{N}} \mu(f,\varphi).$$

Here questions immediately arise about the existence and the uniqueness of such a function $\varphi_0$ (the function of best approximation), and also about its characteristic properties (see [5]). Existence is ensured if $\mathfrak{N}$ is a closed locally compact set, and in particular if $\mathfrak{N}$ is a finite-dimensional subspace. Uniqueness depends both on the properties of the approximating set (see Haar condition; Chebyshev system) and on the metric which defines $\mu(f,\varphi)$. There are a number of well-known necessary and sufficient conditions which must be satisfied by the function of best approximation in a variety of cases (see Polynomial of best approximation; Chebyshev theorem; Markov criterion). However, as a rule, these criteria do not yield effective algorithms for constructing $\varphi_0$. Hence it is of great importance to have methods available which, on the basis of the information about the function to be approximated, allow one to construct effectively some function in $\mathfrak{N}$ yielding an acceptable approximation. Here, in the first place, one ought to mention linear methods, for which $\varphi$ depends linearly on $f$; to these, in particular, interpolation methods belong: having prescribed the points $t_1,\dots,t_N$ in $Q$, one chooses as $\varphi$ one of those functions in $\mathfrak{N}$ that satisfy the interpolation condition

$$\varphi(t_k) = f(t_k), \qquad k = 1,\dots,N. \tag{2}$$

If $\mathfrak{N}$ is a linear manifold and if it contains a system of functions $\varphi_1,\dots,\varphi_N$ such that $\varphi_k(t_k) = 1$ and $\varphi_k(t_i) = 0$ when $i \ne k$, then the function

$$\varphi(t) = \sum_{k=1}^{N} f(t_k)\,\varphi_k(t)$$

belongs to $\mathfrak{N}$ and satisfies (2); it yields an interpolation method of approximation and is obviously linear. It may be required to choose the function $\varphi$ in such a way that at the points $t_k$ it coincides not only with $f$, but also with some of its derivatives; this case is called interpolation with multiple nodes. If $\mathfrak{N}$ is the set of algebraic polynomials of degree at most $N-1$, then there is a unique algebraic polynomial of degree at most $N-1$, and in the periodic case ($\mathfrak{N}$ the set of trigonometric polynomials of degree $n$, $N = 2n+1$) there is a unique trigonometric polynomial of degree $n$, coinciding with $f$ at the points $t_k$. Multiple interpolation is achieved with Hermite polynomials, a particular case of which is the Taylor polynomial; in this case the values of a function and of its first $n$ consecutive derivatives at a single point are interpolated by an algebraic polynomial of degree $n$.
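For illustration, the unique interpolating algebraic polynomial can be computed, for instance, with SciPy; the function and the nodes in the following sketch are arbitrary examples.

```python
import numpy as np
from scipy.interpolate import lagrange

# Illustrative sketch: the unique algebraic polynomial of degree at most n
# interpolating f at n+1 prescribed nodes t_0, ..., t_n.
f = np.cos
nodes = np.linspace(0.0, np.pi, 6)     # n + 1 = 6 interpolation nodes
p = lagrange(nodes, f(nodes))          # interpolating polynomial, degree <= 5

x = np.linspace(0.0, np.pi, 1001)
print("max interpolation error on [0, pi]:", np.max(np.abs(f(x) - p(x))))
```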

Interpolation with splines has its peculiarities, connected with the choice of the interpolation nodes and with the boundary conditions ensuring the existence and uniqueness of the interpolating spline. For instance, a spline of degree $r$ and defect 1 with respect to the subdivision (1), taking prescribed values at suitably placed points of the interval $[a,b]$, exists and is unique if appropriate boundary conditions (conditions on certain derivatives of the spline at the endpoints $a$ and $b$) are imposed. Functions $f$ can be put in one-to-one correspondence with splines $s$ of degree $r$ and defect $k$ with respect to the subdivision (1) by requiring that $s^{(\nu)}(t_i) = f^{(\nu)}(t_i)$, $\nu = 0,\dots,r-k$, $i = 0,\dots,n$, where, in general, one also has to impose some boundary conditions. When these interpolation conditions determine the spline on each interval $[t_{i-1},t_i]$ separately (as happens for odd degree $r = 2k-1$), the spline is called a Hermite or local spline, because its behaviour in an interval $[t_{i-1},t_i]$ is determined by the values of the function $f$ and of its derivatives $f^{(\nu)}$, $\nu = 1,\dots,r-k$, at the points $t_{i-1}$ and $t_i$.
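As a numerical illustration, an interpolating cubic spline with boundary conditions given by the values of the first derivative at the endpoints (a "clamped" spline) can be built with SciPy; the function and the knots in the sketch below are arbitrary examples.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Illustrative sketch: an interpolating cubic spline on the subdivision
# t_0 < ... < t_n; the boundary conditions (prescribed first derivatives
# at the endpoints, the "clamped" spline) make it unique.
f, df = np.sin, np.cos
t = np.linspace(0.0, np.pi, 7)                        # knots t_0, ..., t_n
s = CubicSpline(t, f(t), bc_type=((1, df(t[0])),      # s'(a) = f'(a)
                                  (1, df(t[-1]))))    # s'(b) = f'(b)

x = np.linspace(0.0, np.pi, 1001)
print("max error of the clamped cubic spline:", np.max(np.abs(f(x) - s(x))))
```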

In the approximation of functions an important role is also played by linear methods constructed on the basis of an expansion of the function $f$ in a Fourier series with respect to an orthogonal system. In particular, in the periodic case a widely used approximation tool consists of the partial sums of the Fourier series and various means of these (the case of the trigonometric orthogonal system) (see Approximation of functions, linear methods).
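For illustration, the partial sums of the trigonometric Fourier series can be computed numerically; in the following Python sketch the coefficients are obtained by quadrature on a uniform grid, the helper fourier_partial_sum is introduced only for this example, and the periodic function is an arbitrary choice.

```python
import numpy as np

# Illustrative sketch: approximating a 2*pi-periodic function by the partial
# sum S_n of its trigonometric Fourier series, with the coefficients computed
# by numerical integration on a uniform grid.
def fourier_partial_sum(f, n, x, m=4096):
    t = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
    y = f(t)
    s = np.full_like(x, y.mean())                  # a_0 / 2 term
    for k in range(1, n + 1):
        a_k = 2.0 * np.mean(y * np.cos(k * t))     # cosine coefficient a_k
        b_k = 2.0 * np.mean(y * np.sin(k * t))     # sine coefficient b_k
        s += a_k * np.cos(k * x) + b_k * np.sin(k * x)
    return s

f = lambda t: np.abs(np.sin(t)) ** 3               # periodic function (arbitrary example)
x = np.linspace(0.0, 2.0 * np.pi, 1001)
print("max error of S_8:", np.max(np.abs(f(x) - fourier_partial_sum(f, 8, x))))
```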

The investigation and estimation of the error of approximation is important from the practical point of view and is at the same time the branch of the approximation of functions that is richest in ideas. More precisely, the development of methods for estimating the error, the investigation of its dependence on the smoothness of the approximated function, as well as the study and comparison of the approximation properties of various approximation methods, have led to the creation of the approximation theory of functions, one of the most rapidly developing branches of mathematical analysis.

The foundation of the approximation theory of functions was laid in the papers of P.L. Chebyshev in 1854–1859 (see [1]) on best uniform approximation of continuous functions by polynomials and rational fractions, and in the work of K. Weierstrass [2a], [2b], who proved in 1885 that with any function $f$ continuous on an interval $[a,b]$, or continuous on the whole real axis and having period $2\pi$, there can be associated a sequence of algebraic (respectively, trigonometric) polynomials $P_n$ of degree $n$ such that, as $n \to \infty$,

$$\max_{t \in Q} |f(t) - P_n(t)| \to 0,$$

where $Q$ is $[a,b]$, respectively the whole real axis. Similar assertions hold in the case when the measure of the error of approximation is defined by an integral metric, as well as in the case of functions of several variables. Particular importance has been given to the investigation of the speed with which the error $\max_{t \in Q} |f(t) - P_n(t)|$ decreases, in dependence on the properties of the approximated function $f$ and the choice of the approximating polynomials $P_n$. The most attention has been paid to the study of best approximation problems, and also to that of linear methods of approximation with the property that for a given $f$ the polynomial $P_n$ can be constructed effectively. An important stage in the development of the approximation theory of functions is connected with the names of Ch.J. de la Vallée-Poussin, D. Jackson and S.N. Bernstein [S.N. Bernshtein], who initiated the investigation of the relation between the speed of decrease of the error of approximation when approximating the function $f$ by means of suitably chosen polynomials $P_n$ of degree $n$ (as $n \to \infty$), and the difference-differential properties of $f$. It turns out that in many cases these properties of $f$, e.g. the existence of derivatives of a given order, the smoothness of those derivatives, etc., can be characterized by the sequence of approximating polynomials and by the behaviour of the corresponding error of approximation (see Approximation of functions, direct and inverse theorems). This yields a new, constructive characterization of continuous and differentiable functions. In the first third of the 20th century this kind of problem dominated approximation theory; for this reason the field was also thought of as the constructive theory of functions.

In the 1930s and 1940s there appeared papers by A.N. Kolmogorov, J. Favard and S.M. Nikol'skii which initiated a new direction of research, connected with the approximation of classes of functions by finite-dimensional subspaces and with obtaining exact estimates of the error in terms of the difference-differential properties defining the class. The objective is to obtain the quantities

$$\sup_{f \in \mathfrak{M}} \mu(f, \varphi_f),$$

where $\mu$ is the chosen measure of the approximation error, $\mathfrak{M}$ is a class of functions and $\varphi_f$ is the approximating (generalized) polynomial whose coefficients are determined by the method of approximation. Results of this kind make it possible to compare approximation methods from the point of view of their approximating capabilities and to formulate the problem, important in applications, of determining for a given class of functions the optimal (best) approximating set (of fixed dimension $n$). Research in this direction, based both on the properties of specific approximation methods and on advanced results of functional analysis, has turned out to be very fertile in ideas, because, by proceeding in this way, new facts about the mutual connections between many extremal problems of various types were discovered, making it possible to obtain deep and delicate relations in the theory of functions. Owing to this it was possible to solve conclusively a number of extremal problems concerning the best approximation of important classes of functions (see [5] and [7], and also Approximation of functions, extremal problems in function classes).

Some other aspects of the approximation of functions.

Additional restrictions can be put on the approximating functions $\varphi$ in $\mathfrak{N}$. If they are not connected with the approximated function $f$ (for instance, restrictions on, or relations between, the coefficients of the approximating polynomial), then it is really a matter of specifying the approximating set $\mathfrak{N}$ more precisely. A new situation arises if the restrictions on $\varphi$ are connected with the approximated function $f$; one of the interesting cases is that of one-sided approximation. Here $\varphi$ in $\mathfrak{N}$ is required to satisfy the inequality $\varphi(t) \le f(t)$ (or $\varphi(t) \ge f(t)$) on $Q$, and the error is estimated in an integral metric (see [19]).

In applied problems, besides the case of explicitly given functions, one may have to approximate curves and surfaces which are given only in parametric form; as tools of approximation one can use, for instance, parametric splines. It is most natural to measure the error of approximation by means of the Hausdorff distance, which properly takes into account the geometric proximity of such objects. For instance, for two curves $F_1$ and $F_2$, the Hausdorff distance is defined by

$$r(F_1,F_2) = \max \Bigl\{ \max_{x \in F_1} \min_{y \in F_2} \rho(x,y),\ \max_{y \in F_2} \min_{x \in F_1} \rho(x,y) \Bigr\},$$

where $\rho(x,y)$ is the Euclidean (or any other) distance between the points $x$ and $y$. The Hausdorff distance is also preferable as the measure of the error of approximation in some cases of the approximation of functions, for instance when a discontinuous function has to be approximated by a smooth function (see [16]).
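As a numerical illustration, the Hausdorff distance between two discretized plane curves can be obtained from the two directed Hausdorff distances provided by SciPy; the curves in the sketch below are arbitrary examples.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

# Illustrative sketch: Hausdorff distance between two plane curves, each
# represented by a dense set of sample points.
t = np.linspace(0.0, 2.0 * np.pi, 2000)
curve1 = np.column_stack((np.cos(t), np.sin(t)))          # unit circle
curve2 = np.column_stack((1.1 * np.cos(t), np.sin(t)))    # slightly stretched ellipse

d12 = directed_hausdorff(curve1, curve2)[0]
d21 = directed_hausdorff(curve2, curve1)[0]
print("Hausdorff distance:", max(d12, d21))               # approximately 0.1
```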

The solution of a number of problems in the theory of approximation of functions is closely connected with the study of extremal properties of polynomials in one or another concrete system of functions (inequalities for the derivatives of polynomials, polynomials least deviating from zero, etc.). In particular, the proofs of inverse theorems in the theory of approximation of functions rely strongly on inequalities estimating the norm (or the value at a fixed point) of some derivative of an algebraic or trigonometric polynomial in terms of certain characteristics of the polynomial itself. In this direction a number of precise results are known, and these have generalizations to entire functions (see [6]–[10]).

The problem concerning the algebraic polynomial of degree $n$ with prescribed leading coefficient least deviating from zero in the metric of $C$, $L_1$ or $L_2$ on the interval $[-1,1]$, and the equivalent problem concerning the best approximation of the function $t^n$ by polynomials of degree at most $n-1$, was investigated by Chebyshev (for the metric of $C$) and by his students (for the integral metrics). The solution is given by the Chebyshev polynomials of the first kind $T_n$ and the second kind $U_n$ and by the Legendre polynomials $P_n$, which have wide applications both in theoretical and in applied investigations. A number of results are known for the more general case when all coefficients of the polynomial are subjected to some restrictions (see [6]). The problem concerning a mono-spline of minimal norm, equivalent to that of finding the best approximation of the function $t^r$ by splines of degree $r-1$ with free knots, is of special importance in view of the fact that in a number of cases the problem of obtaining the best quadrature formula reduces to it.
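As a numerical illustration of the first of these extremal properties, the following Python sketch compares the uniform norm of the normalized (monic) Chebyshev polynomial $2^{1-n}T_n$ on $[-1,1]$ with that of the monic polynomial $t^n$; the degree chosen is an arbitrary example.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Illustrative sketch: among monic polynomials of degree n, the normalized
# Chebyshev polynomial 2^(1-n) * T_n deviates least from zero on [-1, 1];
# compare its uniform norm with that of the monic polynomial t^n.
n = 6
x = np.linspace(-1.0, 1.0, 100001)
T_n = C.Chebyshev.basis(n)(x)                    # T_n evaluated on a dense grid
print("||2^(1-n) T_n||_C =", 2.0 ** (1 - n) * np.max(np.abs(T_n)))   # = 2^(1-n)
print("||t^n||_C         =", np.max(np.abs(x ** n)))                 # = 1
```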

For the approximation of functions in the complex plane see Approximation of functions of a complex variable.

References

[1] P.L. Chebyshev, "Questions on smallest quantities connected with the approximate representation of functions (1859)" , Collected works , 2 , Moscow-Leningrad (1947) pp. 151–235 (In Russian)
[2a] K. Weierstrass, "Ueber die analytische Darstellbarkeit sogenannter willkürlicher Funktionen einer reellen Veränderlichen" Sitzungsberichte Akad. Berlin (1885) pp. 633–639
[2b] K. Weierstrass, "Ueber die analytische Darstellbarkeit sogenannter willkürlicher Funktionen einer reellen Veränderlichen" Sitzungsberichte Akad. Berlin (1885) pp. 789–805
[3] V.L. Goncharov, "The theory of interpolation and approximation of functions" , Moscow (1954) (In Russian)
[4] I.P. Natanson, "Constructive function theory" , 1–3 , F. Ungar (1964–1965) (Translated from Russian)
[5] N.P. Korneichuk, "Extremal problems in approximation theory" , Moscow (1976) (In Russian)
[6] V.K. Dzyadyk, "Introduction to the theory of uniform approximation of functions by polynomials" , Moscow (1977) (In Russian)
[7] V.M. Tikhomirov, "Some problems in approximation theory" , Moscow (1976) (In Russian)
[8] S.M. Nikol'skii, "Approximation of functions of several variables and imbedding theorems" , Springer (1975) (Translated from Russian)
[9] N.I. Achiezer [N.I. Akhiezer], "Theory of approximation" , F. Ungar (1956) (Translated from Russian)
[10] A.F. Timan, "Theory of approximation of functions of a real variable" , Pergamon (1963) (Translated from Russian)
[11] P.P. Korovkin, "Linear operators and approximation theory" , Hindustan Publ. Comp. (1960) (Translated from Russian)
[12] J.H. Ahlberg, E.N. Nilson, J.L. Walsh, "The theory of splines and their applications" , Acad. Press (1967)
[13] S.B. Stechkin, Yu.N. Subbotin, "Splines in numerical mathematics" , Moscow (1976) (In Russian)
[14] P.J. Laurent, "Approximation et optimisation" , Hermann (1972)
[15] L. Collatz, W. Krabs, "Approximationstheorie: Tschebyscheffsche Approximationen mit Anwendungen" , Teubner (1973)
[16] B. Sendov, "Hausdorff approximations" , Sofia (1979)
[17] S.M. Nikol'skii, "Quadrature formulas" , Moscow (1979) (In Russian)
[18] Yu.S. Zav'yalov, B.I. Kvasov, V.I. Miroshnichenko, "Methods of spline functions" , Moscow (1980) (In Russian)
[19] N.P. Korneichuk, A.A. Ligun, V.G. Doronin, "Approximation with constraints" , Kiev (1982) (In Russian)
[20] P.J. Davis, "Interpolation and approximation" , Dover, reprint (1975)
[21] G.G. Lorentz, "Approximation of functions" , Holt, Rinehart & Winston (1966)
[22] N.P. Korneichuk, "Splines in approximation theory" , Moscow (1984) (In Russian)


How to Cite This Entry:
Approximation of functions. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Approximation_of_functions&oldid=16535
This article was adapted from an original article by N.P. Korneichuk (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article