Functional equation, methods of solution of a

Methods for finding exact or approximate solutions of specific or abstract functional equations, that is, equations of the form
$$ \tag{1 } P ( x) = y $$
where $ P ( x) $ is a certain, generally speaking non-linear, operator transforming the elements of a $ B $-space $ X $ (or a space of some other type) into elements of a space $ Y $ of the same type (see Functional equation). Exact solutions in the form of analytic expressions have been obtained only for a few types of functional equations; therefore approximate methods of solution are especially valuable.
A number of methods have evolved for finding solutions of general functional equations of the form (1): for example, the method of infinite power series, the method of successive approximations, the Galerkin method (the method of moments), the method of tangent hyperbolas, Chebyshev's method (of tangent parabolas), the Newton–Kantorovich method and its modifications, the method of steepest descent (cf. Steepest descent, method of), etc., as well as the method of variation of the parameter (direct, iterative and combined, cf. Parameter, method of variation of the) of particular types and its various modifications, including successive approximations to the inverse operator. General methods have been applied to solve various specific functional equations in mathematical analysis. In addition there are special methods for solving specific functional equations, including numerical methods, for example, the grid method, etc. The method of variation of the parameter, the Newton–Kantorovich method and certain other methods indicated also have theoretical significance, since they can be used to draw conclusions about the existence, uniqueness and the domain of location of a solution of a functional equation without finding the solution itself, which is sometimes no less important than the actual value of the solution. The direct method of variation of the parameter is considered below as an example.
In the Banach space $ [ X] = [ X \rightarrow X] $ of bounded linear operators, let a non-linear operator ordinary differential equation be given on the infinite interval:
$$ \tag{2 } \frac{dx }{d \lambda } = \ x ( I - Ax) \ \ (= ( I - xA) x),\ \ x ( 0) = x _ {0} , $$
where $ A, x _ {0} , I \in [ X] $ ($ A $, $ x _ {0} $ are given, $ I $ is the identity operator), and $ x ( \lambda ) $ is an abstract function with values in $ [ X] $ ($ X $ is a $ B $-space). The problems of constructing the inverse operator $ A ^ {-1} $ of an invertible operator $ A $, of solving a linear operator equation of the form $ I - Ax = 0 $, etc., are all reduced to solving equation (2) by the direct method of variation of the parameter. If the spectrum of the operator $ Ax _ {0} $ (respectively, $ x _ {0} A $) is situated in the right half-plane, then problem (2) has a unique solution $ x ( \lambda ) $, $ x ( 0) = x _ {0} $, $ \lambda \in [ 0, \infty ) $.
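To see why the trajectory of (2) leads to the inverse, note that the discrepancy $ R ( \lambda ) = I - Ax ( \lambda ) $ satisfies the Riccati equation (a short check, added here for illustration)

$$ \frac{dR }{d \lambda } = - A \frac{dx }{d \lambda } = - ( I - R) R . $$

In the scalar model $ r ^ \prime = - r ( 1 - r) $, $ r ( 0) = r _ {0} $, the solution is

$$ r ( \lambda ) = \frac{r _ {0} e ^ {- \lambda } }{1 - r _ {0} + r _ {0} e ^ {- \lambda } } \rightarrow 0 \ \ ( \lambda \rightarrow \infty ), $$

provided the corresponding spectral point $ 1 - r _ {0} $ of $ Ax _ {0} $ lies in the right half-plane; hence $ Ax ( \lambda ) \rightarrow I $ and $ x ( \lambda ) \rightarrow A ^ {-1} $.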
Applying methods for the numerical integration of ordinary differential equations analytically to the Cauchy problem (2) leads to a whole class of direct methods of variation of the parameter for solving (2), and consequently also to solutions of the problems that reduce to (2). For example, the Euler method with a non-uniform step $ h _ {k} $ leads to the following method for constructing $ A ^ {-1} $:
$$ \tag{3 } x _ {k + 1 } = \ x _ {k} + h _ {k} x _ {k} ( I - Ax _ {k} ) \ \ ( = x _ {k} + h _ {k} ( I - x _ {k} A) x _ {k} ), $$
$$ k = 0, 1 , . . . . $$
The steps $ h _ {k} $ in (3) are chosen in various ways. When the spectrum of $ I - Ax _ {0} $ (respectively, $ I - x _ {0} A $) lies on the real axis in an interval $ [- \rho _ {0} , 0] $, $ 0 < \rho _ {0} < \infty $, the following method for choosing $ h _ {k} $ is very effective:
$$ \tag{4 } h _ {k} = \ { \frac{1}{1 + \rho _ {k} } } ,\ \ \rho _ {k + 1 } = \ \rho _ {k} L _ {k} ,\ \ 4L _ {k} = \ \frac{\rho _ {k} }{1 + \rho _ {k} } , $$
$$ k = 0, 1 \dots ; \ \ h _ {k + 1 } = \frac{4h _ {k} }{( 1 + h _ {k} ) ^ {2} } . $$
Here, the rate of convergence in (3) is of order higher than $ 2 ^ {k} $, and the norm of the discrepancy decreases according to the law $ \rho _ {k + 1 } = \rho _ {k} ^ {2} / ( 4 ( 1 + \rho _ {k} )) $. The case of an interval $ [ 0, \overline{\rho} _ {0} ] $, $ 0 < \overline{\rho} _ {0} < 1 $, can be reduced, with advantage, to the one-step method (3) considered, where
$$ h = \frac{1}{1 - \overline{\rho} _ {0} } ,\ \ \rho _ {0} = \frac{\overline{\rho} _ {0} ^ {2} }{4 ( 1 - \overline{\rho} _ {0} ) } . $$
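The following is a minimal numerical sketch of the method (3) with the step rule (4) (an illustration, not part of the original article; the test matrix, the choice $ x _ {0} = I/M $ and the iteration count are assumptions), including the preliminary step that reduces the case of the interval $ [ 0, \overline{\rho} _ {0} ] $:

```python
import numpy as np

# Hypothetical sketch of the direct parameter-variation method (3) with the
# step rule (4) for computing A^{-1}.  With x_0 = I/M the spectrum of
# I - A x_0 lies in [0, rho_bar_0], so one preliminary step with
# h = 1/(1 - rho_bar_0) is taken first, as described above.
rng = np.random.default_rng(0)
B = rng.standard_normal((5, 5))
A = B @ B.T + 5 * np.eye(5)            # symmetric positive definite test matrix

eigs = np.linalg.eigvalsh(A)
m, M = eigs.min(), eigs.max()

x = np.eye(5) / M                      # x_0 = I/M
rho = 1.0 - m / M                      # rho_bar_0: spec(I - A x_0) in [0, rho_bar_0]

h = 1.0 / (1.0 - rho)                  # preliminary step for the [0, rho_bar_0] case
x = x + h * x @ (np.eye(5) - A @ x)
rho = rho**2 / (4.0 * (1.0 - rho))     # now spec(I - A x) lies in [-rho_0, 0]

for k in range(8):                     # method (3) with the steps (4)
    h = 1.0 / (1.0 + rho)
    x = x + h * x @ (np.eye(5) - A @ x)
    rho = rho**2 / (4.0 * (1.0 + rho))

print(np.linalg.norm(np.eye(5) - A @ x))   # discrepancy close to machine precision
```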
Regarding the convergence of (3), (4), there are some results, stated for specific operator equations, based on general facts and depending on which space is taken as a base.
The problem of constructing projections of the form
$$ \left . P ( Ax _ {0} ) \ \ ( \textrm{ or } P ( x _ {0} A)),\ \ P ( K) = \ e ^ {- K \lambda } \right | _ {\lambda = \infty } ,\ \ K \in [ X], $$
also reduces to the solution of problem (2) (the case when the spectrum of $ I - Ax _ {0} $ (respectively, $ I - x _ {0} A $) is in $ [- \rho _ {0} , 1] $), and so does the problem of constructing pseudo-inverse operators $ x ^ {+} $ (respectively, $ {} ^ {+} x $) for the operator $ A $ such that
$$ I - Ax ^ {+} = \ P ( Ax _ {0} ),\ \ I - {} ^ {+} xA = \ P ( x _ {0} A),\ \ x ^ {+} = {} ^ {+} x. $$
For a direct construction of the projections $ P ( Ax _ {0} ) $ (respectively, $ P ( x _ {0} A) $), the method (3), (4) can be rewritten in the form
$$ P _ {k + 1 } = \ P _ {k} + h _ {k} P _ {k} ( P _ {k} - I),\ \ k = 0, 1 \dots $$
$$ P _ {0} = I - Ax _ {0} \ (= I - x _ {0} A). $$
Depending on the location of the spectrum and the properties of $ A $ one can take, for example, the operators $ I $, $ A $, $ A ^ {*} $, etc., in place of $ x _ {0} $.
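As an illustration (the singular test matrix, the choice $ x _ {0} = I/m $ with $ m $ the smallest non-zero eigenvalue, and the iteration count are assumptions made for this sketch), the iteration above then converges to the orthogonal projector onto the kernel of $ A $:

```python
import numpy as np

# Hypothetical sketch of the projection iteration
#   P_{k+1} = P_k + h_k P_k (P_k - I),    P_0 = I - A x_0,
# with the steps h_k chosen as in (4).  Taking A symmetric and singular and
# x_0 = I/m (m = smallest non-zero eigenvalue), the non-unit spectrum of P_0
# lies in [-rho_0, 0], the eigenvalue 1 is left fixed by every step, and P_k
# converges to P(A x_0), i.e. the orthogonal projector onto ker(A).
rng = np.random.default_rng(2)
C = rng.standard_normal((6, 3))
A = C @ C.T                                   # symmetric, rank 3

eigs = np.linalg.eigvalsh(A)
m = min(e for e in eigs if e > 1e-10)         # smallest non-zero eigenvalue
M = eigs.max()

P = np.eye(6) - A / m                         # P_0 = I - A x_0 with x_0 = I/m
rho = M / m - 1.0

for k in range(14):
    h = 1.0 / (1.0 + rho)
    P = P + h * P @ (P - np.eye(6))
    rho = rho**2 / (4.0 * (1.0 + rho))

P_exact = np.eye(6) - A @ np.linalg.pinv(A)   # orthogonal projector onto ker(A)
print(np.linalg.norm(P - P_exact))            # small (near machine precision)
```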
Using the direct method of variation of the parameter one can reduce to the abstract linear functional ordinary differential equation $ ( 0 \leq \lambda \leq \infty ) $
$$ \tag{5 } \frac{dy }{d \lambda } = \ x _ {0} ( b - Ay),\ \ y ( 0) = y _ {0} , $$
where $ b, y _ {0} \in X $, $ x _ {0} , A \in [ X] $ and $ y ( \lambda ) $ is an abstract function with values in $ X $, the problem of constructing directly the pseudo-solutions $ y ^ {+} $ of a linear functional equation of the form
$$ \tag{6 } b - Ay ^ {+} = \ P ( Ax _ {0} ) ( b - Ay _ {0} ) $$
or of the functional equation $ b - Ay ^ {+} = 0 $ if $ P ( Ax _ {0} ) ( b - Ay _ {0} ) = 0 $. Other problems also reduce to (5), including problems for ordinary and partial differential equations, integral equations, etc. The formula
$$ y ( \lambda ) = \ e ^ {- x _ {0} A \lambda } y _ {0} + r ( \lambda ) x _ {0} b,\ \ r ( \lambda ) = \ \int\limits _ { 0 } ^ \lambda e ^ {- x _ {0} A ( \lambda - s) } ds, $$
gives the unique solution to equation (5) satisfying $ y ( 0) = y _ {0} $.
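Indeed, since $ \frac{d}{d \lambda } r ( \lambda ) = I - x _ {0} A r ( \lambda ) $, a direct differentiation gives

$$ \frac{dy }{d \lambda } = - x _ {0} A e ^ {- x _ {0} A \lambda } y _ {0} + ( I - x _ {0} A r ( \lambda )) x _ {0} b = x _ {0} ( b - Ay ( \lambda )),\ \ y ( 0) = y _ {0} . $$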
For example, applying Euler's method with forward step $ h _ {k + 1 } $ to problem (5) leads to the following method for constructing pseudo-solutions $ y ^ {+} $ to (6):
$$ \tag{7 } y _ {k + 1 } = \ y _ {k} + h _ {k + 1 } x _ {0} ( b - Ay _ {k} ),\ \ k = 0, 1 , . . . . $$
The steps $ h _ {k + 1 } $ are chosen, for example, in terms of the roots of the Chebyshev polynomials $ T _ {N} ( t) $:
$$ \tag{8 } h _ {j} = \ { \frac{2}{[ M + m - ( M - m) t _ {j} ] } } ,\ \ t _ {j} = \cos \left [ { \frac{( 2 \kappa _ {j} - 1) \pi }{2N} } \right ] , $$
$$ j = 1 \dots N, $$
when $ x _ {0} A $ is a self-adjoint operator with non-zero part of the spectrum in $ [ m, M] $, $ 0 < m \leq M $. Here, $ P _ {N} = ( \kappa _ {1} \dots \kappa _ {N} ) $ is a certain permutation of $ 1 \dots N $, chosen so that the computation is stable, and $ N $ is the degree of the required polynomial. The method (7), (8) gives an optimum estimate of convergence only at the $ N $-th step. When $ N = 2 ^ {i} $ (or $ \approx 2 ^ {i} $) the choice of steps in (7) is very effective if they depend successively on the roots of the Chebyshev polynomials
$$ T _ {2} ( t) - T _ {0} ( t),\ T _ {1} ( t), T _ {1} ( t) \dots T _ {2 ^ {i - 2 } } ( t), T _ {2 ^ {i - 2 } } ( t), $$
instead of on $ T _ {N} ( t) $; this substantially simplifies the problem of ordering the steps $ h _ {j} $ and increases the effectiveness of the computation, especially for large $ N $. In this case the error decreases optimally after using all the ordered roots of each polynomial, which is important for simplifying the control of the computation. If $ P ( Ax _ {0} ) ( b - Ay _ {0} ) = 0 $, then $ P ( x _ {0} A) ( y ^ {+} - y _ {0} ) = 0 $; moreover, if $ P ( x _ {0} A) $ is an orthogonal projection operator, then $ y ^ {+} - y _ {0} $ is the normal solution of the functional equation $ A ( y ^ {+} - y _ {0} ) = b - Ay _ {0} $. On the other hand, (7) implies (for $ k = 0, 1 , . . . $)
$$ b - Ay _ {k + 1 } = \ U _ {k + 1 } ( b - Ay _ {0} ),\ \ U _ {k + 1 } = \ \prod _ {i = 1 } ^ {k + 1 } ( I - h _ {i} Ax _ {0} ). $$
If $ P ( Ax _ {0} ) $ is a projection operator, then
$$ \| U _ {k} - P ( Ax _ {0} ) \| \leq \ \frac{2q ^ {k} }{( 1 + q ^ {2k} ) } \rightarrow 0, $$
as $ k \rightarrow \infty $, where $ q = ( \sqrt M - \sqrt m )/( \sqrt M + \sqrt m ) $.
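A small numerical sketch of the explicit method (7) with the Chebyshev steps (8) follows (an illustration; the choice $ x _ {0} = I $, the test matrix, and the natural ordering of the roots in place of the stabilizing permutation $ \kappa _ {j} $ are assumptions made here):

```python
import numpy as np

# Hypothetical sketch of iteration (7) with the Chebyshev steps (8).
# With x_0 = I, x_0 A = A is self-adjoint with spectrum in [m, M]; since A is
# non-singular, P(A x_0) = 0 and the iterates approach the solution of A y = b.
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = B @ B.T / 50 + np.eye(50)              # symmetric positive definite
b = rng.standard_normal(50)

eigs = np.linalg.eigvalsh(A)
m, M = eigs.min(), eigs.max()

N = 16
j = np.arange(1, N + 1)
t = np.cos((2 * j - 1) * np.pi / (2 * N))  # roots of the Chebyshev polynomial T_N
h = 2.0 / (M + m - (M - m) * t)            # steps (8), taken here in natural order

y = np.zeros(50)                           # y_0 = 0
for hk in h:
    y = y + hk * (b - A @ y)               # iteration (7) with x_0 = I

print(np.linalg.norm(b - A @ y) / np.linalg.norm(b))   # optimal reduction after N steps
```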
There are also direct methods of variation of the parameter of Euler type using the recurrence relations for Chebyshev, and closely related, polynomials (without using the roots of these polynomials explicitly); for these the error decreases at each step. These are multi-step methods and have an increased rate of convergence. Applying Euler's method with backward step $ h _ {k + 1 } $ to problem (5) gives the following effective class of methods:
$$ y _ {k + 1 } = \ y _ {k} + h _ {k + 1 } ( I + h _ {k + 1 } x _ {0} A) ^ {-1} x _ {0} ( b - Ay _ {k} ),\ \ k = 0, 1 \dots $$
whence
$$ b - Ay _ {k + 1 } = \ W _ {k + 1 } ( b - Ay _ {0} ), $$
$$ W _ {k + 1 } = \prod _ {i = 1 } ^ {k + 1 } ( I + h _ {i} Ax _ {0} ) ^ {-1} ,\ k = 0, 1 , . . . . $$
Moreover, if $ P ( Ax _ {0} ) $ is a projection operator and $ h _ {i} > 0 $ is such that for any $ i $,
$$ \| ( I - P ( Ax _ {0} )) ( I + h _ {i} Ax _ {0} ) ^ {-1} \| \leq \rho _ {i} < 1, $$
then
$$ \| W _ {k} - P ( Ax _ {0} ) \| \leq \ \prod_{i=1}^ { k } \rho _ {i} \rightarrow 0,\ \ \textrm{ as } k \rightarrow \infty . $$
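A minimal sketch of the implicit variant (illustrative assumptions: $ x _ {0} = I $, a positive-definite non-singular $ A $, so that $ P ( Ax _ {0} ) = 0 $ and $ y _ {k} \rightarrow A ^ {-1} b $, and a constant step $ h $):

```python
import numpy as np

# Hypothetical sketch of the backward-Euler variant: each step solves a
# regularized system with I + h x_0 A instead of applying A^{-1} directly.
# With x_0 = I and A positive definite, the residual b - A y_k is contracted
# by a factor of at most 1/(1 + h m) per step, m being the smallest eigenvalue.
rng = np.random.default_rng(3)
B = rng.standard_normal((30, 30))
A = B @ B.T / 30 + np.eye(30)      # symmetric positive definite
b = rng.standard_normal(30)

h = 5.0
y = np.zeros(30)
for k in range(25):
    y = y + h * np.linalg.solve(np.eye(30) + h * A, b - A @ y)

print(np.linalg.norm(b - A @ y) / np.linalg.norm(b))   # decreases geometrically
```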
There are analogous methods for solving non-linear functional equations and operator equations, and also methods of increased accuracy of Runge–Kutta type with $ s $-th order accuracy, and other types.
The solution of a functional equation in the narrow sense, or of systems of such equations, can be a specific function as well as a class of functions depending on arbitrary parameters or arbitrary functions. In the theory of functional equations there are few general methods known for solving such equations. Hence it is necessary, as a rule, to investigate in each particular case the degree of generality of the solution obtained.
One of the more-or-less general methods for solving functional equations is by reducing them to equations in finite differences. For example, take a functional equation of order $ n $ and class 1 of the form
$$ \tag{9 } F ( x, \phi ( x) \dots \phi ^ {n} ( x)) = 0, $$
where the function $ F $ is given and $ \phi ( x) $ is the unknown function. Here
$$ \phi ^ {0} ( x) = x,\ \phi ^ {1} ( x) = \phi ( x),\ \phi ^ {2} ( x) = \phi ( \phi ( x)) , . . . . $$
Further, assume that $ x = u _ {z} $, $ \phi ( x) = u _ {z + 1 } $. The transition from $ x $ to $ \phi ( x) $ is replaced by introducing a new variable $ z $ and increasing $ z $ by 1 in the function $ u _ {z} $. After such a change, equation (9) has the form
$$ \tag{10 } F ( u _ {z} \dots u _ {z + n } ) = 0. $$
The solution of the $ n $-th order finite-difference equation (10) gives an expression for $ u _ {z} $ in terms of $ z $ and $ n $ arbitrary periodic functions $ C _ {i} $ in $ z $ with period one. The most general form of the solution to (9) is a system of two compatible equations
$$ \tag{11 } \left . \begin{array}{c} u _ {z} = x = f ( z, C _ {1} \dots C _ {n} ), \\ u _ {z + 1 } = \phi ( x) = \ f ( z + 1, C _ {1} \dots C _ {n} ). \end{array} \right \} $$
By choosing particular values for $ C _ {i} $ one can eliminate $ z $ from (11) and thus obtain a particular solution to (9). For example, for the second-order functional equation
$$ \tag{12 } \phi [ \phi ( x)] + a \phi ( x) + bx = 0 $$
the general solution (11) takes the form
$$ \tag{13 } \left . \begin{array}{c} u _ {z} = x = \ C _ {1} \lambda _ {1} ^ {z} + C _ {2} \lambda _ {2} ^ {z} , \\ u _ {z + 1 } = \phi ( x) = \ C _ {1} \lambda _ {1} ^ {z + 1 } + C _ {2} \lambda _ {2} ^ {z + 1 } , \end{array} \right \} $$
where $ \lambda _ {1} $ and $ \lambda _ {2} $ are the roots of the quadratic equation $ \lambda ^ {2} + a \lambda + b = 0 $, and $ C _ {1} $ and $ C _ {2} $ are arbitrary fixed periodic functions with period one. If $ C _ {1} $ and $ C _ {2} $ are taken to be arbitrary constants and $ z $ is eliminated from (13), then the complete solution of (12) is obtained:
$$ \frac{x \lambda _ {2} - \phi ( x) }{\lambda _ {2} - \lambda _ {1} } = \ C _ {1} \left [ \frac{x \lambda _ {1} - \phi ( x) }{C _ {2} ( \lambda _ {1} - \lambda _ {2} ) } \right ] ^ \gamma ,\ \ \gamma = \ \frac{ \mathop{\rm log} \lambda _ {1} }{ \mathop{\rm log} \lambda _ {2} } . $$
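In particular, taking $ C _ {2} = 0 $ (respectively, $ C _ {1} = 0 $) in (13) and eliminating $ z $ yields the linear particular solutions

$$ \phi ( x) = \lambda _ {1} x \ \ \textrm{ and } \ \ \phi ( x) = \lambda _ {2} x , $$

which satisfy (12) directly, since $ \lambda _ {i} ^ {2} + a \lambda _ {i} + b = 0 $.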
The method of reduction to finite-difference equations is also applied in solving direct problems in functional calculus. For example, let the function $ \phi ( x) = a + bx $ be given and suppose it is required to construct the expression $ \phi ^ {n} ( x) $ (where $ \phi ^ {i} ( x) = \phi ( \phi ^ {i - 1 } ( x) ) $). Assume that $ \phi ^ {n} ( x) = u _ {n} $ and write the equation in finite differences $ u _ {n + 1 } = a + bu _ {n} $, the solution of which is $ u _ {n} = Cb ^ {n} + a/( 1 - b) $. Then, for $ n = 0 $ one obtains $ u _ {0} = x = C + a/( 1 - b) $, that is, $ C = x - a/( 1 - b) $. Thus,
$$ \phi ^ {n} ( x) = \ \alpha _ {n} + \beta _ {n} x,\ \ \alpha _ {n} = \ \frac{a ( b ^ {n} - 1) }{b - 1 } ,\ \ \beta _ {n} = b ^ {n} . $$
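As a quick check, for $ n = 2 $:

$$ \phi ^ {2} ( x) = a + b ( a + bx) = a ( 1 + b) + b ^ {2} x ,\ \ \alpha _ {2} = \frac{a ( b ^ {2} - 1) }{b - 1 } = a ( 1 + b),\ \ \beta _ {2} = b ^ {2} . $$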
Here, if desired, one can write the functional equation $ \phi ^ {n} ( x) = \beta _ {n - 1 } \phi ( x) + \alpha _ {n - 1 } $, which will have a solution $ \phi ( x) $ for any $ n $, etc. By solving this functional equation by the same method one can construct other solutions $ \phi ( x) $. In particular, for odd $ n $ one gets another real solution $ \phi ( x) = - bx + a ( 1 + b)/( 1 - b) $.
One also applies substitution methods to solve functional equations. For example, suppose one has the functional equation
$$ \tag{14 } f ( x + y) + f ( x - y) = 2f ( x) \cos y. $$
Applying successively the substitutions
$$ x = 0, y = t; \ \ x = { \frac \pi {2} } + t, y = { \frac \pi {2} } ; \ \ x = { \frac \pi {2} } ,\ y = { \frac \pi {2} } + t, $$
(14) implies, respectively,
$$ f ( t) + f (- t) = 2a \cos t,\ \ f ( \pi + t) + f ( t) = 0 $$
and
$$ f ( \pi + t) + f (- t) = \ 2b \cos \left ( { \frac \pi {2} } + t \right ) = - 2b \sin t, $$
where $ f ( 0) = a $, $ f ( \pi /2) = b $. Hence, subtracting the third equation from the sum of the first two equations one obtains $ 2f ( t) = 2a \cos t + 2b \sin t $. The function $ f ( x) = a \cos x + b \sin x $ is the general solution of (14). This method can also be applied to other equations of the type
$$ H ( f ( x + y), f ( x- y), f ( x), x, y) = 0 $$
under certain assumptions on the function $ H $. Various other substitutions can be applied to equations of other types.
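Returning to (14), the solution found above can be verified directly from the addition formulas:

$$ f ( x + y) + f ( x - y) = a [ \cos ( x + y) + \cos ( x - y)] + b [ \sin ( x + y) + \sin ( x - y)] = 2 ( a \cos x + b \sin x) \cos y = 2f ( x) \cos y . $$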
The substitution method is also applied to reduce some functional equations to others of the same type; in particular, to functional equations with known solutions. For example, the functional equation
$$ \tag{15 } f \left ( { \frac{x + y }{2} } \right ) = \ \frac{f ( x) }{2} + \frac{f ( y) }{2} $$
can be reduced to the Cauchy functional equation
$$ \tag{16 } f ( x + y) = f ( x) + f ( y), $$
which has the continuous solution $ f ( x) = Cx $. In order to obtain this solution, $ x + y $ is substituted in (15) for $ x $ and 0 for $ y $:
$$ f \left ( { \frac{x + y }{2} } \right ) = \ \frac{f ( x + y) }{2} + { \frac{a}{2} } ,\ \ a = f ( 0). $$
Comparing this with (15) one obtains a functional equation of the form $ f ( x + y) = f ( x) + f ( y) - a $, whence $ \phi ( x + y) = \phi ( x) + \phi ( y) $, where $ \phi ( x) = f ( x) - a $ and thus $ \phi ( x) = Cx $. The function $ f ( x) = Cx + a $ is the solution.
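Indeed, for $ f ( x) = Cx + a $,

$$ f \left ( \frac{x + y }{2} \right ) = \frac{C ( x + y) }{2} + a = \frac{Cx + a }{2} + \frac{Cy + a }{2} = \frac{f ( x) }{2} + \frac{f ( y) }{2} . $$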
Taking logarithms and other methods can also be used to reduce a functional equation to another of the same type. For example, solving the functional equation
$$ \tag{17 } f ( x + y) = f ( x) f ( y) $$
can be reduced to solving the functional equation (16) by taking logarithms. A continuous function $ f ( x) $ satisfying (17) and not identically zero is always strictly positive, and the function $ \phi ( x) = \mathop{\rm log} f ( x) $ is continuous (as a composition of continuous functions) and satisfies the condition $ \phi ( x + y) = \phi ( x) + \phi ( y) $; hence $ \phi ( x) = Cx $. The solution thus is $ f ( x) = e ^ {Cx} = a ^ {x} $, where $ a = e ^ {C} $.
In many cases the method of reduction to differential equations can also be applied to solve functional equations. For example, the functional equation (14) can be reduced to an equation of the type $ f ^ { \prime\prime } ( x) = - f ( x) $. There are other functional equations which reduce to this equation. This method only gives solutions in the class of differentiable functions. For example, the solution to the Cauchy functional equation (16) with the condition that $ f ( x) $ is differentiable can be found in the following way. Differentiating (16) with respect to $ x $ one obtains $ f ^ { \prime } ( x + y) = f ^ { \prime } ( x) $, hence, since $ y $ here is arbitrary, $ f ^ { \prime } ( x) = C $. Then integration gives $ f ( x) = Cx + C _ {1} $, where $ C _ {1} $ is a new constant. Substituting again the expression found for $ f ( x) $ in (16) one ascertains that for all values of $ x $ and $ y $, $ C _ {1} = 0 $. The function $ f ( x) = Cx $ is the solution.
Iteration methods are also applied in a number of cases to solve functional equations.
References
[1] L.V. Kantorovich, "On Newton's method" Trudy Mat. Inst. Steklov., 28 (1949) pp. 104–144 (In Russian)
[2] M.A. Krasnosel'skii, G.M. Vainikko, P.P. Zabreiko, et al., "Approximate solution of operator equations", Wolters-Noordhoff (1972) (Translated from Russian)
[3] A.A. Samarskii, "Theorie der Differenzverfahren", Akad. Verlagsgesell. Geest u. Portig K.-D. (1984) (Translated from Russian)
[4] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations", MIR (1977) (Translated from Russian)
[5] D.F. Davidenko, Approximate methods for solving operator equations and their applications, Irkutsk (1982) pp. 71–83 (In Russian)
[6] M.A. Mertvetsova, "Analogue of the process of tangent hyperbolas for general functional equations" Dokl. Akad. Nauk SSSR, 88 : 4 (1953) pp. 611–614 (In Russian)
[7] M.I. Nechepurenko, "On Chebychev's method for functional equations" Uspekhi Mat. Nauk, 9 : 2 (1954) pp. 163–170 (In Russian)
[8] E.E. Tamme, "On approximately solving functional equations by the method of series expansion in the inverse operator" Dokl. Akad. Nauk SSSR, 103 : 5 (1955) pp. 769–772 (In Russian)
[9] I.P. Mysovskikh, "On convergence of L.V. Kantorovich's method for solving non-linear functional equations, and its applications" Vestn. Leningrad. Univ., 11 (1953) pp. 25–48 (In Russian)
[10] D.F. Davidenko, Conf. programming and mathematical methods for solving physical problems (1974) pp. 542–548 (In Russian)
[11] B.A. Bel'tyukov, "On a certain method of solution of non-linear functional equations" Zh. Vychisl. Mat. i Mat. Fiz., 5 : 5 (1965) pp. 927–931 (In Russian)
[12] O. Vaarmann, "Approximations of pseudo-inverse operators as applied to the solution of non-linear equations" Izv. Akad. Nauk EstSSR, 20 : 4 (1971) pp. 386–393 (In Russian) (Estonian and English summaries)
[13] D.F. Davidenko, Theoretical and applied problems in computing, Moscow (1981) pp. 61–63 (In Russian)
[14] A.I. Liventsov, Mat. Sb., 8 : 1 (1876) pp. 80–160
[15] D.M. Sintsov, "Notes on functional calculus" Izv. Fiz.-Mat. Obshch. Kazan. Univ. (2), 13 : 2 (1903) pp. 46–72 (In Russian)
Comments
Reference [a3] has a very complete bibliography for the period 1747–1964.
References
[a1] J.K. Hale, "Functional differential equations", Springer (1971)
[a2] J. Aczél (ed.), Functional equations: history, applications and theory, Reidel (1984)
[a3] J. Aczél, "Functional equations and their applications", Acad. Press (1966)