Finite-difference calculus

From Encyclopedia of Mathematics
A branch of mathematics in which functions are studied under a discrete change of the argument, as opposed to differential and integral calculus, where the argument changes continuously. Let $f$ be a function defined at the points $x_k = x_0 + kh$ (where $h$ is a constant and $k$ is an integer). Then
  
$$
\Delta y_k = \Delta f_k = f(x_{k+1}) - f(x_k)
$$
  
 
are the (finite) first-order differences,
 
  
$$
\Delta^2 y_k = \Delta^2 f_k = \Delta f_{k+1} - \Delta f_k = f(x_{k+2}) - 2 f(x_{k+1}) + f(x_k)
$$
  
are the second-order differences, $\dots,$
  
$$
\Delta^n y_k = \Delta^n f_k = \Delta^{n-1} f_{k+1} - \Delta^{n-1} f_k
$$
  
are the $n$-th order differences. The differences are conveniently arranged in a table:
  
$$
\begin{array}{cccccc}
x & y & \Delta y & \Delta^2 y & \Delta^3 y & \dots \\
x_0 & y_0 & & & & \\
 & & \Delta y_0 & & & \\
x_1 & y_1 & & \Delta^2 y_0 & & \\
 & & \Delta y_1 & & \Delta^3 y_0 & \\
x_2 & y_2 & & \Delta^2 y_1 & & \dots \\
 & & \Delta y_2 & & \Delta^3 y_1 & \\
x_3 & y_3 & & \Delta^2 y_2 & & \\
 & & \dots & & \dots & \\
x_4 & y_4 & & \dots & & \\
\dots & \dots & & \dots & &
\end{array}
$$
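Such a table can be generated mechanically. The following Python sketch is an illustration added here, not part of the original article; the helper name `difference_table` is ours:

```python
# Build the successive columns of forward differences for samples of a
# function taken at equally spaced points x_k = x0 + k*h (illustrative).

def difference_table(ys):
    """Return [values, first differences, second differences, ...]."""
    columns = [list(ys)]
    while len(columns[-1]) > 1:
        prev = columns[-1]
        columns.append([prev[k + 1] - prev[k] for k in range(len(prev) - 1)])
    return columns

# Example: y_k = k^2 at k = 0..4; the second differences are constant
# and the third differences vanish, as expected for a quadratic.
table = difference_table([k**2 for k in range(5)])
print(table)
```

Each inner list is one column of the table above, read top to bottom.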
  
An $n$-th order difference can be expressed in terms of the quantities $y_0, y_1, \dots$ by the formula
  
$$
\Delta^n y_k = y_{k+n} - \binom{n}{1} y_{k+n-1} + \binom{n}{2} y_{k+n-2} - \dots + (-1)^n y_k .
$$
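This binomial expansion can be checked numerically against the recursive definition; the following Python sketch (the function names are ours, with arbitrary sample values) compares the two:

```python
from math import comb

def nth_difference(ys, n, k=0):
    """Compute Delta^n y_k by n passes of first differences."""
    col = list(ys)
    for _ in range(n):
        col = [col[j + 1] - col[j] for j in range(len(col) - 1)]
    return col[k]

def nth_difference_binomial(ys, n, k=0):
    """Delta^n y_k = sum_{j=0}^{n} (-1)^j C(n, j) y_{k+n-j}."""
    return sum((-1) ** j * comb(n, j) * ys[k + n - j] for j in range(n + 1))

ys = [1, 4, 9, 61, 52, 63]  # arbitrary sample values
for n in range(5):
    assert nth_difference(ys, n) == nth_difference_binomial(ys, n)
```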
  
As well as the forward differences $\Delta y_k$, one uses the backward differences:
  
$$
\nabla y_k = f(x_k) - f(x_{k-1}).
$$
  
 
In a number of problems (in particular in the construction of interpolation formulas) central differences are used:
  
$$
\delta^{2m-1} y_{i+1/2}, \quad \delta^{2m} y_i ,
$$
  
 
which are defined as follows:
 
  
$$
\delta y_{i+1/2} = y_{i+1} - y_i ,
$$
  
$$
\delta^2 y_i = \delta y_{i+1/2} - \delta y_{i-1/2} ,
$$
  
$$
\dots \dots \dots \dots \dots
$$
  
$$
\delta^{2m-1} y_{i+1/2} = \delta^{2m-2} y_{i+1} - \delta^{2m-2} y_i ,
$$
  
$$
\delta^{2m} y_i = \delta^{2m-1} y_{i+1/2} - \delta^{2m-1} y_{i-1/2} .
$$
  
The central differences $\delta^n y_i$ and the ordinary differences $\Delta^n y_k$ are related by the formula
  
$$
\delta^{2m} y_i = \Delta^{2m} y_{i-m}, \quad \delta^{2m+1} y_{i+1/2} = \Delta^{2m+1} y_{i-m} .
$$
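For $2m = 2$ this relation reads $\delta^2 y_i = \Delta^2 y_{i-1}$, which a short Python sketch (an illustration added here, with arbitrary sample values) can confirm:

```python
# delta^2 y_i = y_{i+1} - 2*y_i + y_{i-1}, unwinding the recursive
# definition of the central differences.
ys = [2, 3, 5, 8, 13, 21]
central2 = [ys[i + 1] - 2 * ys[i] + ys[i - 1] for i in range(1, len(ys) - 1)]

# Delta^2 y_k from two passes of first differences.
d1 = [ys[k + 1] - ys[k] for k in range(len(ys) - 1)]
d2 = [d1[k + 1] - d1[k] for k in range(len(d1) - 1)]

# delta^2 y_i equals Delta^2 y_{i-1}: same list, shifted index.
assert central2 == d2
```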
  
In the case when the intervals $(x_k, x_{k+1})$ are not of constant size, one also considers divided differences:
  
$$
[x_0] = y_0, \quad [x_1] = y_1 ,
$$
  
$$
[x_0; x_1] = \frac{y_0 - y_1}{x_0 - x_1} ,
$$
  
$$
[x_0; x_1; x_2] = \frac{[x_0; x_1] - [x_1; x_2]}{x_0 - x_2} ,
$$
  
$$
\dots \dots \dots \dots \dots \dots
$$
  
$$
[x_0; \dots; x_n] = \frac{[x_0; \dots; x_{n-1}] - [x_1; \dots; x_n]}{x_0 - x_n} .
$$
  
 
The following formula holds:
 
  
$$
[x_0; \dots; x_n] = \sum_{j=0}^{n} \frac{y_j}{\prod_{i \neq j} (x_j - x_i)} .
$$
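The recursive definition above and this explicit sum can be compared directly; a minimal Python sketch (the function names are ours, with arbitrary nodes):

```python
def divided_difference(xs, ys):
    """[x_0; ...; x_n] via the recursive definition."""
    if len(xs) == 1:
        return ys[0]
    return (divided_difference(xs[:-1], ys[:-1])
            - divided_difference(xs[1:], ys[1:])) / (xs[0] - xs[-1])

def divided_difference_sum(xs, ys):
    """[x_0; ...; x_n] = sum_j y_j / prod_{i != j} (x_j - x_i)."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(xs, ys)):
        denom = 1.0
        for i, xi in enumerate(xs):
            if i != j:
                denom *= xj - xi
        total += yj / denom
    return total

xs = [0.0, 0.5, 1.3, 2.0]  # unequally spaced nodes
ys = [1.0, 2.0, 0.5, 3.0]
assert abs(divided_difference(xs, ys) - divided_difference_sum(xs, ys)) < 1e-12
```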
 +
 
 +
Sometimes, instead of  $  [x _ {0} ; \dots ; \  x _ {n} ] $
 +
the notation  $  f (x _ {0} ; \dots ; \  x _ {n} ) $
 +
is used. If  $  x _ {n} = x _ {0} + nh $,
 +
$  n = 0,\  1 \dots $
 +
then
  
$$
[x_0; \dots; x_n] = \frac{\Delta^n y_0}{n! h^n} .
$$
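This identity is easy to verify numerically for equally spaced nodes; in the sketch below (an illustration added here, using a cubic test function of our choosing) the third divided difference of $x^3 - 2x$ equals its leading coefficient, matching $\Delta^3 y_0 / (3!\, h^3)$:

```python
from math import factorial

def divided_difference(xs, ys):
    """[x_0; ...; x_n] via the recursive definition."""
    if len(xs) == 1:
        return ys[0]
    return (divided_difference(xs[:-1], ys[:-1])
            - divided_difference(xs[1:], ys[1:])) / (xs[0] - xs[-1])

n, h = 3, 0.25
xs = [1.0 + k * h for k in range(n + 1)]  # equally spaced nodes
ys = [x**3 - 2 * x for x in xs]

# Delta^n y_0 by n passes of first differences.
col = list(ys)
for _ in range(n):
    col = [col[k + 1] - col[k] for k in range(len(col) - 1)]

# [x_0; ...; x_n] == Delta^n y_0 / (n! h^n); both equal the leading
# coefficient (= 1) of the cubic here.
assert abs(divided_difference(xs, ys) - col[0] / (factorial(n) * h**n)) < 1e-9
```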
  
If $f$ has an $n$-th derivative $f^{(n)}$ throughout the interval $(x_k, x_{k+n})$, then
  
$$
\Delta^n y_k = h^n f^{(n)}(\xi), \quad x_k < \xi < x_{k+n} .
$$
  
The calculus of finite differences is closely related to the general theory of approximation of functions, and is used in approximate differentiation and integration and in the approximate solution of differential equations, as well as in other questions. Suppose that the problem is posed (an interpolation problem) of recovering a function $f$ from the known values of $f$ at points $x_0, \dots, x_n$. One constructs the interpolation polynomial $P_n$ of degree $n$ having the same values as $f$ at the indicated points. It can be expressed in various forms: in the form of Lagrange, Newton, etc. In Newton's form it is:
  
$$
P_n(x) = f(x_0) + [x_0; x_1](x - x_0) + \dots + [x_0; \dots; x_n](x - x_0) \dots (x - x_{n-1}),
$$
  
 
and in the case when the values of the independent variable are equally spaced:
 
  
$$
P_n(x) = f(x_0) + \sum_{k=1}^{n} \frac{\Delta^k f(x_0)}{k! h^k} (x - x_0) \dots (x - x_{k-1}).
$$
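As an added illustration (not from the original article), Newton's equally-spaced form can be implemented directly; the helper name `newton_forward` is ours. With $n + 1$ nodes the construction reproduces any polynomial of degree $\leq n$ exactly:

```python
from math import factorial

# Newton's forward form on equally spaced nodes x_k = x0 + k*h:
# P_n(x) = f(x0) + sum_k Delta^k f(x0) / (k! h^k) * (x - x0)...(x - x_{k-1}).

def newton_forward(x0, h, ys, x):
    # Build the columns Delta^k f(x0) of the difference table.
    cols = [list(ys)]
    while len(cols[-1]) > 1:
        prev = cols[-1]
        cols.append([prev[k + 1] - prev[k] for k in range(len(prev) - 1)])
    result, product = cols[0][0], 1.0
    for k in range(1, len(ys)):
        product *= x - (x0 + (k - 1) * h)       # (x - x_0)...(x - x_{k-1})
        result += cols[k][0] / (factorial(k) * h**k) * product
    return result

def f(x):
    return 2 * x**3 - x + 1  # test polynomial of degree 3

x0, h = 0.0, 0.5
ys = [f(x0 + k * h) for k in range(4)]  # four nodes recover a cubic exactly
assert abs(newton_forward(x0, h, ys, 1.23) - f(1.23)) < 1e-9
```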
The function $f$ is set approximately equal to $P_n$. If $f$ has a derivative of order $n + 1$, then the error in replacing $f$ by $P_n$ can be estimated by the formula

$$
f(x) = P_n(x) + \frac{f^{(n+1)}(\xi)}{(n+1)!} (x - x_0) \dots (x - x_n),
$$

where $\xi$ lies in the interval in which the points $x, x_0, \dots, x_n$ lie. If $f$ is a polynomial of degree $\leq n$, then $f = P_n$.
  
As the number of nodes increases without bound, the interpolation polynomial $P_n$ becomes in the limit a polynomial $P$ of "infinite" degree, and the following question naturally arises: When does $f = P$? In other words, when does the following equation hold:
  
<table class="eq" style="width:100%;"> <tr><td valign="top" style="width:94%;text-align:center;"><img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/f/f040/f040230/f04023088.png" /></td> </tr></table>
+
$$ \tag{1 }
 +
f (x) \  = \
 +
f (x _ {0} ) +
 +
\sum _ {k = 1 } ^  \infty 
  
where <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/f/f040/f040230/f04023089.png" /> lies in the interval in which the points <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/f/f040/f040230/f04023090.png" /> ly. If <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/f/f040/f040230/f04023091.png" /> is a polynomial of degree <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/f/f040/f040230/f04023092.png" />, then <img align="absmiddle" border="0" src="https://www.encyclopediaofmath.org/legacyimages/f/f040/f040230/f04023093.png" />.
+
As the number of nodes increases without bound, the interpolation polynomial  $  P _ {n} $  becomes in the limit a polynomial  $  P $  of "infinite" degree, and the following question naturally arises: When does  $  f = P $?  In other words, when does the following equation hold:

$$ \tag{1 }
f (x) \  = \  f (x _ {0} ) + \sum _ {k = 1 } ^  \infty  \frac{\Delta  ^ {k} f (x _ {0} ) }{k! h  ^ {k} } (x - x _ {0} ) \dots (x - x _ {k - 1 } )
$$
+
(for simplicity, the case of equally-spaced intervals is considered)? Let  $  x _ {0} = 0 $,  $  h = 1 $,  so that  $  x _ {n} = n $  ( $  n \geq  0 $).  If the series (1) converges at a point  $  \alpha $  that is not a node (the series (1) always converges at the nodes  $  x _ {0} ,\  x _ {1} ,\dots $),  then it converges in the half-plane  $  \mathop{\rm Re} \  x > \alpha $  and is an analytic function in this half-plane which satisfies the following condition in any half-plane  $  \mathop{\rm Re} \  x \geq  \beta > \alpha $  (where  $  \epsilon > 0 $):

$$
| f (re ^ {i \phi } ) | \  < \  c e ^ {r h ( \phi ) } r ^ {\alpha + \epsilon + 1/2 } ,
$$

where

$$
h ( \phi ) \  = \  \cos \  \phi \  \mathop{\rm ln} (2 \  \cos \  \phi ) + \phi \  \sin \  \phi .
$$

Conversely, if  $  f $  is analytic in some half-plane and has similar order of growth (or somewhat better than it), then it can be represented by the series (1). Thus, functions of a very narrow class (only the analytic functions of the indicated growth) can be expanded in a series (1) (a so-called Newton series). Newton series are studied when the nodes are general complex numbers. These series have found great application in transcendental number theory. Suppose next that the interpolation nodes form a triangular array
  
$$
\begin{array}{llll}
x _ {0,0}  &{}  &{}  &{}  \\
x _ {1,0}  &x _ {1,1}  &{}  &{}  \\
\dots  &\dots  &\dots  &\dots  \\
x _ {n,0}  &x _ {n,1}  &\dots  &x _ {n,n}  \\
\dots  &\dots  &\dots  &\dots ,  \\
\end{array}
$$

and that the interpolation polynomial  $  P _ {n} $  is constructed from the nodes of the  $  (n + 1) $-th row. The class of functions  $  f $  for which  $  P _ {n} $  converges to  $  f $  as  $  n \rightarrow \infty $  depends on the array of nodes. E.g., if

$$
x _ {n,k} \  = \  \cos \  \frac{2k + 1 }{2 (n + 1) } \pi ,\ \  k = 0 \dots n
$$
  
( $  x _ {n,k} $  are the roots of the [[Chebyshev polynomials|Chebyshev polynomials]]), then in order that the interpolation process converges on the interval  $  [-1 ,\  1] $  it suffices that the following condition holds:

$$
\lim\limits _ {n \rightarrow \infty } \  \omega \left ( { \frac{1}{n} } \right ) \  \mathop{\rm ln} \  n \  = \  0,
$$

where  $  \omega ( \delta ) $  is the modulus of continuity (cf. [[Continuity, modulus of|Continuity, modulus of]]) of  $  f $  on  $  [-1,\  1] $.
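A numerical illustration of interpolation at Chebyshev nodes (not part of the article; plain Python, with the nodes taken as the roots of $ T _ {n+1} $ and a straightforward Lagrange evaluation): for a smooth function such as $ e ^ {x} $ the interpolation error on $ [-1, 1] $ is already tiny with a handful of nodes.

```python
from math import cos, exp, pi

def chebyshev_nodes(n):
    # Roots of the Chebyshev polynomial T_{n+1} on [-1, 1].
    return [cos((2 * k + 1) * pi / (2 * (n + 1))) for k in range(n + 1)]

def interpolate(xs, ys, x):
    # Evaluate the interpolation polynomial P_n through (xs, ys) at x
    # (Lagrange form; numerically adequate for this small illustration).
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)
        total += yi * basis
    return total

# For a smooth f the interpolation process at Chebyshev nodes converges fast.
xs = chebyshev_nodes(10)
ys = [exp(t) for t in xs]
max_err = max(abs(interpolate(xs, ys, t / 100.0) - exp(t / 100.0))
              for t in range(-100, 101))
```

Here `max_err` is far below `1e-6`; for merely continuous functions convergence is much slower and is governed by the modulus-of-continuity condition above.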
  
Another important problem in the calculus of finite differences is the problem of the summation of functions. Let  $  f $  be a given function. It is required to find in finite form, exact or approximate, the sum

$$
S _ {n} \  = \  f (x _ {0} ) + f (x _ {0} + h) + \dots + f (x _ {0} + nh),
$$

for fixed  $  x _ {0} $,  $  h $  and large  $  n $,  when certain analytic properties of  $  f $  are known. In other words, the asymptotic behaviour of  $  S _ {n} $  as  $  n \rightarrow \infty $  is studied. Let  $  x _ {0} = 0 $,  $  h = 1 $  (for simplicity) and suppose that a function  $  F $  can be found such that

$$ \tag{2 }
\Delta F (x) \  = \  F (x + 1) - F (x) \  = \  f (x).
$$
  
 
Then

$$ \tag{3 }
S _ {n} \  = \  F (n + 1) - F (0).
$$
  
E.g., let  $  f (x) = x  ^ {2} $.  The solution of (2) is sought in the form of a polynomial of degree three,

$$
Q (x) \  = \  ax  ^ {3} + bx  ^ {2} + cx ,
$$

with undetermined coefficients. On substituting into equation (2) and equating coefficients of corresponding powers of  $  x $  on the left-hand and right-hand sides, the polynomial has the form:

$$
Q (x) \  = \  { \frac{x  ^ {3} }{3} } - { \frac{x  ^ {2} }{2} } + { \frac{x}{6} } \  = \  { \frac{x (x - 1) (2x - 1) }{6} }
$$
  
 
and

$$
{ \frac{n (n + 1) (2n + 1) }{6} } \  = \  1  ^ {2} + 2  ^ {2} + \dots + n  ^ {2} .
$$
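The worked example can be checked mechanically. The short sketch below (plain Python; the function names are ours) verifies both that $ Q $ satisfies (2) for $ f (x) = x ^{2} $ and that formula (3) then yields the sum of squares.

```python
def f(x):
    return x ** 2

def Q(x):
    # Q(x) = x(x - 1)(2x - 1)/6 solves (2): Q(x + 1) - Q(x) = x².
    # The product is always divisible by 6, so integer division is exact.
    return x * (x - 1) * (2 * x - 1) // 6

def sum_of_squares(n):
    # Formula (3): S_n = F(n + 1) - F(0), with F = Q and x0 = 0, h = 1.
    return Q(n + 1) - Q(0)
```

For instance `sum_of_squares(10)` returns `385`, the value of $ 1^{2} + 2^{2} + \dots + 10^{2} $.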
  
The solution to equation (2) cannot always be obtained in finite form. It is therefore useful to have approximate formulas for the  $  S _ {n} $.  Such a formula is the [[Euler–MacLaurin formula|Euler–MacLaurin formula]]. If  $  f $  has  $  k $  derivatives and  $  k $  is even, the Euler–MacLaurin formula can be written as

$$
\sum _ {x = 0 } ^ { n - 1 }  f (x) \  = \  \int\limits _ { 0 } ^ { n }  f (x) \  dx + \sum _ {\nu = 1 } ^ { k - 1 }  \frac{B _  \nu  }{\nu ! } \{ f ^ {\  ( \nu - 1) } (n) - f ^ {\  ( \nu - 1) } (0) \} + \frac{B _ {k} }{k! } \sum _ {x = 0 } ^ { n - 1 }  f ^ {\  (k) } (x + \theta ),
$$

where  $  0 < \theta < 1 $  (in general  $  \theta $  depends on  $  n $),  and the  $  B _  \nu  $  are the [[Bernoulli numbers|Bernoulli numbers]]. If  $  f $  is a polynomial of degree less than  $  k $,  the remainder term vanishes.
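The vanishing of the remainder for polynomials can be illustrated concretely. The sketch below (plain Python; exact rational arithmetic, and the Bernoulli values are hard-coded in the convention $ B _ {1} = -1/2 $, an assumption since sign conventions for $ B _ {1} $ differ between texts) sums $ f (x) = x ^{3} $ with $ k = 4 $, so the remainder term is zero and the formula is exact.

```python
from fractions import Fraction

# f(x) = x³ and its derivatives f, f', f''; since deg f = 3 < k = 4,
# the remainder term of the Euler–MacLaurin formula vanishes.
derivatives = [
    lambda x: x ** 3,      # f
    lambda x: 3 * x ** 2,  # f'
    lambda x: 6 * x,       # f''
]

# Bernoulli numbers B_1, B_2, B_3 (convention B_1 = -1/2; an assumption).
B = {1: Fraction(-1, 2), 2: Fraction(1, 6), 3: Fraction(0)}

def euler_maclaurin_cubes(n, k=4):
    # Σ_{x=0}^{n-1} f(x) = ∫_0^n f dx + Σ_{ν=1}^{k-1} B_ν/ν! {f^{(ν-1)}(n) - f^{(ν-1)}(0)}
    total = Fraction(n) ** 4 / 4          # ∫_0^n x³ dx
    factorial = 1
    for nu in range(1, k):
        factorial *= nu
        g = derivatives[nu - 1]           # f^{(ν-1)}
        total += B[nu] * (g(n) - g(0)) / factorial
    return total
```

For example `euler_maclaurin_cubes(10)` equals $ 0^{3} + 1^{3} + \dots + 9^{3} = 2025 $ exactly.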
  
 
There is a similarity between the problems of the calculus of finite differences and those of differential and integral calculus. The operation of finding the difference corresponds to that of finding the derivative; the solution of equation (2), which, as an operation, is the inverse of finding the finite difference, corresponds to finding a primitive, that is, an indefinite integral. Formula (3) is a direct analogue of the [[Newton–Leibniz formula|Newton–Leibniz formula]]. This similarity emerges in considering finite-difference equations. By a finite-difference equation is meant a relation

$$
F (x,\  f (x),\  \Delta f (x) \dots \Delta  ^ {n} f (x)) \  = \  0,
$$
  
where  $  F $  is a given function and  $  f $  is the required function. If all the  $  \Delta  ^ {n} f (x) $  are expressed in terms of  $  f (x),\  f (x + 1) \dots f (x + n) $,  then the finite-difference equation is written in the form

$$ \tag{4 }
\Phi (x,\  f (x),\  f (x + 1) \dots f (x + n)) \  = \  0.
$$
  
Its solution with respect to  $  f (x + n) $  is:

$$ \tag{5 }
f (x + n) \  = \  \psi (x,\  f (x) \dots f (x + n - 1)).
$$
  
For given initial values  $  f (x _ {0} ) \dots f (x _ {0} + n - 1) $  one can successively find  $  f (x _ {0} + n),\  f (x _ {0} + n + 1) $,  etc. After solving (4) with respect to  $  f (x) $:

$$
f (x) \  = \  \phi (x,\  f (x + 1) \dots f (x + n)),
$$

one can, by setting  $  x = x _ {0} - 1 $,  find  $  f (x _ {0} - 1) $,  then  $  f (x _ {0} - 2) $,  etc. Thus, from the equation in terms of the initial data one can find the values of  $  f $  at all the points  $  x _ {0} + k $,  where  $  k $  is an integer. Consider, for  $  x = 0,\  1 \dots $  the linear equation
  
$$ \tag{6 }
f (x + k) + P _ {1} (x) f (x + k - 1) + \dots + P _ {k} (x) f (x) \  = \  Q (x),
$$

where  $  P _ {1} (x) \dots P _ {k} (x) $  and  $  Q (x) $  are given functions on the set  $  x = 0,\  1 ,\dots $.  The general solution of the inhomogeneous equation (6) is the sum of a particular solution of the inhomogeneous equation and the general solution of the homogeneous equation
  
$$ \tag{7 }
f (x + k) + P _ {1} (x) f (x + k - 1) + \dots + P _ {k} (x) f (x) \  = \  0.
$$
  
If  $  f _ {1} \dots f _ {k} $  are linearly independent solutions of (7), then the general solution of (7) is given by the formula

$$
f (x) \  = \  c _ {1} f _ {1} (x) + \dots + c _ {k} f _ {k} (x),
$$
  
where  $  c _ {1} \dots c _ {k} $  are arbitrary constants. The constants  $  c _ {1} \dots c _ {k} $  can be found by prescribing the initial conditions, that is, the values of  $  f (0) \dots f (k - 1) $.  Linearly independent solutions  $  f _ {1} \dots f _ {k} $  (a fundamental system) are easily found in the case of an equation with constant coefficients:

$$ \tag{8 }
f (x + k) + a _ {1} f (x + k - 1) + \dots + a _ {k} f (x) \  = \  0.
$$
  
The solution of (8) is sought in the form  $  f (x) = \lambda  ^ {x} $.  The characteristic equation for  $  \lambda $  is:

$$
\lambda  ^ {k} + a _ {1} \lambda ^ {k - 1 } + \dots + a _ {k} \  = \  0.
$$
  
Suppose that its roots  $  \lambda _ {1} \dots \lambda _ {k} $  are all distinct. Then the system  $  \lambda _ {1}  ^ {x} \dots \lambda _ {k}  ^ {x} $  is a fundamental system of solutions of (8) and the general solution of (8) can be represented by the formula:

$$
f (x) \  = \  c _ {1} \lambda _ {1}  ^ {x} + \dots + c _ {k} \lambda _ {k}  ^ {x} .
$$
  
If  $  \lambda _ {1} $  is a root of the characteristic equation of multiplicity  $  s $,  then the particular solutions corresponding to it are  $  \lambda _ {1}  ^ {x} ,\  x \lambda _ {1}  ^ {x} \dots x ^ {s - 1 } \lambda _ {1}  ^ {x} $.
  
Suppose, for example, that one considers the sequence of numbers starting with 0 and 1 in which every successive number is equal to the sum of its two immediate predecessors: 0, 1, 1, 2, 3, 5, 8, 13, $  \dots $  (the [[Fibonacci numbers|Fibonacci numbers]]). One seeks an expression for the general term of the sequence. Let  $  f (x) $,  $  x = 0,\  1 \dots $  be the general term of the sequence; the conditions

$$
f (x + 2) \  = \  f (x + 1) + f (x),\ \  f (0) \  = \  0,\ \  f (1) \  = \  1,
$$

form a difference equation with given initial conditions. The characteristic equation has the form  $  \lambda  ^ {2} - \lambda - 1 = 0 $,  its roots being  $  \lambda _ {1} = (1 + \sqrt 5 )/2 $,  $  \lambda _ {2} = (1 - \sqrt 5 )/2 $.  Therefore
  
$$
f (x) \  = \  c _ {1} \left ( { \frac{1 + \sqrt 5 }{2} } \right )  ^ {x} + c _ {2} \left ( { \frac{1 - \sqrt 5 }{2} } \right )  ^ {x} ;
$$

$  c _ {1} = 1/ \sqrt 5 $  and  $  c _ {2} = - 1/ \sqrt 5 $  being found from the initial conditions.
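The closed-form expression can be checked against the recurrence itself. The sketch below (plain Python; the function names are ours) steps the difference equation forward from the initial conditions and compares with the formula just derived.

```python
from math import sqrt

def fib_by_recurrence(n):
    # Step f(x + 2) = f(x + 1) + f(x) forward from f(0) = 0, f(1) = 1.
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def fib_closed_form(x):
    # f(x) = (λ1^x - λ2^x)/√5 with λ1, λ2 = (1 ± √5)/2.
    lam1 = (1 + sqrt(5)) / 2
    lam2 = (1 - sqrt(5)) / 2
    return (lam1 ** x - lam2 ** x) / sqrt(5)
```

Since $ |\lambda _ {2}| < 1 $, the second term decays and the closed form rounds to the integer Fibonacci values; in floating point the agreement holds comfortably for the first few dozen terms.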
  
Equation (4) can be considered not only when  $  x $  varies discretely, taking the values  $  0,\  1 ,\dots $,  but also when  $  x $  varies continuously. Let  $  f $  be arbitrarily defined on the half-open interval  $  [0,\  n) $.  From (5) one obtains  $  f (n) $  by setting  $  x = 0 $.  If  $  f $  is continuously defined on  $  [0,\  n) $,  then  $  f $  may prove to be discontinuous on the closed interval  $  [0,\  n] $.  If one wishes to deal with continuous solutions,  $  f $  needs to be defined on  $  [0,\  n) $  in such a way that, by virtue of (5), it proves to be continuous on  $  [0,\  n] $.  Knowledge of  $  f $  on  $  [0,\  n] $  enables one to find from (5) the values  $  f (x) $  for  $  x \in (n,\  n + 1] $,  then for  $  x \in (n + 1,\  n + 2] $,  etc.
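For the simplest first-order equation  $  f (x + 1) = a f (x) $  this extension process can be sketched directly; the data  $  \phi $  on  $  [0,\  1) $  and the constant  $  a $  below are arbitrary illustrative choices (and since  $  \phi $  is not matched at the endpoint, the extension is discontinuous at the integers, exactly as described above):

```python
import math

# For f(x + 1) = a * f(x) with continuously varying x, prescribing f
# arbitrarily on [0, 1) determines it everywhere to the right:
#     f(x) = a**floor(x) * phi(frac(x)).
# (phi and a are illustrative choices, not from the article.)

a = 3.0
phi = lambda t: 1.0 + t * (1.0 - t)   # arbitrary data on the half-open interval [0, 1)

def f(x):
    k = math.floor(x)
    return a ** k * phi(x - k)

# the extension satisfies the equation at non-integer points as well
for x in [0.0, 0.25, 0.9, 1.5, 2.7]:
    assert abs(f(x + 1) - a * f(x)) < 1e-12
```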
  
 
More general than (8) is the equation

$$ \tag{9 }
f (x + h _ {k} ) +
a _ {1} f (x + h _ {k - 1 }  ) + \dots +
a _ {k} f (x) \  = \  0.
$$
  
Here  $  0 < h _ {1} < \dots < h _ {k} $  are not necessarily integers and are not necessarily commensurable relative to one another. Equation (9) has the particular solutions
  
$$
f (x) \  = \  e ^ {\lambda x } ,
$$
  
where  $  \lambda $  is a root of the equation
  
$$
L ( \lambda ) \  \equiv \
e ^ {h _ {k} \lambda } +
a _ {1} e ^ {h _ {k - 1 }  \lambda } + \dots +
a _ {k} \  = \  0.
$$
  
This equation has an infinite number of roots  $  \lambda _ {1} ,\  \lambda _ {2} ,\dots $.  Consequently, (9) has an infinite number of particular solutions  $  e ^ {\lambda _ {m} x } $,  $  m = 1,\  2 ,\dots $.  Suppose that all the roots are simple. To express the solutions of (9) in terms of these elementary particular solutions, it is convenient to write the equation in the form:
  
$$ \tag{10 }
\int\limits _ { 0 } ^  \alpha 
f (x + t) \  d \sigma (t) \  = \  0,\ \
\alpha  = h _ {k} ,
$$
  
where  $  \sigma (t) $  is a step function having jumps at the points  $  0,\  h _ {1} \dots h _ {k} $,  equal to  $  a _ {k} ,\  a _ {k - 1 }  \dots 1 $,  respectively. Let
  
$$
\psi _  \nu  (t) \  = \

\frac{- e ^ {\lambda _  \nu  t } }{L  ^  \prime  ( \lambda _  \nu  ) }

\int\limits _ { 0 } ^ { t }
e ^ {\lambda _  \nu  \xi } \
d \sigma ( \xi ),\ \
\nu \geq  1.
$$

The functions  $  \psi _  \nu  (t) $  have the property:

$$
\int\limits _ { 0 } ^  \alpha 
e ^ {\lambda _ {m} t }
\psi _  \nu  (t) \
dt \  = \  \delta _ {m \nu }
$$

( $  \delta _ {m \nu }  = 1 $  if  $  m = \nu $,  and  $  \delta _ {m \nu }  = 0 $  if  $  m \neq \nu $),  that is, they form a system that is bi-orthogonal to the system  $  \{ e ^ {\lambda _ {m} t } \} $.  On this basis, the solution  $  f $  of (10) corresponds to the series

$$ \tag{11 }
f (x) \  \sim \
\sum _ {\nu = 1 } ^  \infty 
c _  \nu  e ^ {\lambda _  \nu  x } ,
$$

$$
c _  \nu  \  = \  \int\limits _ { 0 } ^  \alpha  f (t) \psi _  \nu  (t) \  dt.
$$
  
 
If (9) has the form

$$ \tag{12 }
f (x + 2 \pi ) - f (x) \  = \  0
$$
  
(that is,  $  f $  is a periodic function with period  $  2 \pi $);  then  $  L ( \lambda ) = e ^ {2 \pi \lambda } - 1 $;  the roots of the equation  $  L ( \lambda ) = 0 $  are  $  mi $ ( $  m = 0,\  \pm 1 ,\dots $);  and (11) is the Fourier series of  $  f $  in complex form. The series (11) can be regarded as a generalization, to the case of the difference equation (9), of the ordinary Fourier series corresponding to the simplest difference equation (12). Under certain conditions the series (11) converges to the solution  $  f $.  If  $  f $  is an analytic function, then (9) is expressible in the form of an [[Equation of infinite order|equation of infinite order]]
  
$$
\sum _ {n = 0 } ^  \infty 
\alpha _ {n} f ^ {\  (n) } (x) \  = \  0.
$$
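Returning to equation (9): its particular solutions can be illustrated numerically (a sketch; the shifts  $  h _ {1} = 1 $,  $  h _ {2} = \sqrt 2 $  and the coefficients  $  a _ {1} $,  $  a _ {2} $  are arbitrary choices, not taken from the text). A real root of  $  L ( \lambda ) $  is located by bisection, and the resulting exponential is checked against the equation:

```python
import math

# Equation (9) with k = 2 and incommensurable shifts h1 = 1, h2 = sqrt(2):
#     f(x + sqrt(2)) + a1 * f(x + 1) + a2 * f(x) = 0.
# A real root of L(lam) = exp(h2*lam) + a1*exp(h1*lam) + a2 is found by
# bisection; e**(lam*x) then solves the equation.  (a1, a2 are illustrative.)

h1, h2 = 1.0, math.sqrt(2.0)
a1, a2 = -3.0, 1.0

L = lambda lam: math.exp(h2 * lam) + a1 * math.exp(h1 * lam) + a2

lo, hi = 0.0, 10.0             # L(0) = -1 < 0 and L(10) > 0, so a root lies between
for _ in range(200):
    mid = (lo + hi) / 2
    if L(mid) < 0:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

f = lambda x: math.exp(lam * x)
for x in [0.0, 0.5, 1.3]:
    assert abs(f(x + h2) + a1 * f(x + h1) + a2 * f(x)) < 1e-6
```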
  
 
Differences of functions of several variables are introduced by analogy with differences of functions of one variable. For example, suppose that it is required to solve the problem of finding numerically the solution of the Laplace equation
 
  
$$

\frac{\partial  ^ {2} u }{\partial  x  ^ {2} }
+

\frac{\partial  ^ {2} u }{\partial  y  ^ {2} }
\  = \  0
$$
  
in the rectangle  $  0 \leq  x \leq  a $,  $  0 \leq  y \leq  b $,  for given values of  $  u $  on the boundary of the rectangle. The rectangle is divided into small rectangular cells with sides  $  \Delta x = a/N $,  $  \Delta y = b/M $.  The values of the solution are sought at the vertices of these cells. At the vertices that lie on the boundary of the original rectangle, the values of  $  u $  are known. By applying the approximations (with second-order differences in the numerators)
  
$$

\frac{\partial  ^ {2} u }{\partial  x  ^ {2} }
\  \approx \

\frac{u (x + \Delta x,\  y) - 2u (x,\  y) + u (x - \Delta x,\  y) }{\Delta x  ^ {2} }
,
$$

$$

\frac{\partial  ^ {2} u }{\partial  y  ^ {2} }
\  \approx \

\frac{u (x,\  y + \Delta y) - 2u (x,\  y) + u (x,\  y - \Delta y) }{\Delta y  ^ {2} }
,
$$
  
 
one obtains instead of the Laplace equation the following system of equations:
  
$$

\frac{u (x + \Delta x,\  y) - 2u (x,\  y) + u (x - \Delta x,\  y) }{\Delta x  ^ {2} }
+

\frac{u (x,\  y + \Delta y) - 2u (x,\  y) + u (x,\  y - \Delta y) }{\Delta y  ^ {2} }
\  = \  0.
$$

The point  $  (x,\  y) $  runs through those vertices of the cells that lie in the interior of the original rectangle. Thus, a system of  $  (N - 1) (M - 1) $  equations in the same number of unknowns is constructed. By solving this algebraic system, the values of  $  u $  at the vertices of the cells are obtained. When  $  \Delta x $  and  $  \Delta y $  are small and the solution of the problem is sufficiently smooth, the values so obtained are close to the exact values.
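The scheme can be sketched as follows. Boundary data are taken from the harmonic function  $  u = xy $,  for which the five-point scheme is exact, so the discrete solution matches  $  u $  at every node (the grid sizes and the iteration count are illustrative choices):

```python
# Minimal sketch of the grid method for the Laplace equation on a rectangle.
# Boundary values come from the harmonic function u = x*y; the five-point
# scheme is exact for it, so the discrete solution equals u at the nodes.

a, b, N, M = 1.0, 2.0, 8, 10
dx, dy = a / N, b / M
exact = lambda x, y: x * y

# boundary nodes carry the given values, interior nodes start at 0
u = [[exact(i * dx, j * dy) if i in (0, N) or j in (0, M) else 0.0
      for j in range(M + 1)] for i in range(N + 1)]

# Gauss-Seidel sweeps over the interior nodes of the five-point system
cx, cy = 1.0 / dx ** 2, 1.0 / dy ** 2
for _ in range(2000):
    for i in range(1, N):
        for j in range(1, M):
            u[i][j] = (cx * (u[i + 1][j] + u[i - 1][j]) +
                       cy * (u[i][j + 1] + u[i][j - 1])) / (2 * (cx + cy))

err = max(abs(u[i][j] - exact(i * dx, j * dy))
          for i in range(N + 1) for j in range(M + 1))
assert err < 1e-8
```

Gauss–Seidel is just one convenient way to solve the  $  (N - 1)(M - 1) $  linear equations; any linear solver would do.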
  
 
The calculus of finite differences was developed in parallel with the main branches of mathematical analysis. It first began to appear in works of P. Fermat, I. Barrow and G. Leibniz, and in the 18th century it acquired the status of an independent mathematical discipline. The first systematic account of the calculus of finite differences was given by B. Taylor in 1715. The researches of mathematicians of the 19th century prepared the ground for the modern branches of the subject. The ideas and methods of the calculus of finite differences have been considerably developed in their application to analytic functions of a complex variable and to problems of numerical mathematics.
 
====References====
 
 
<table><TR><TD valign="top">[1]</TD> <TD valign="top">  A.A. Markov,  "The calculus of finite differences" , Odessa  (1910)  (In Russian)</TD></TR><TR><TD valign="top">[2]</TD> <TD valign="top">  I.S. Berezin,  N.P. Zhidkov,  "Computing methods" , Pergamon  (1973)  (Translated from Russian)</TD></TR><TR><TD valign="top">[3]</TD> <TD valign="top">  A.O. Gel'fond,  "Differenzenrechnung" , Deutsch. Verlag Wissenschaft.  (1958)  (Translated from Russian)</TD></TR><TR><TD valign="top">[4]</TD> <TD valign="top">  N.S. Bakhvalov,  "Numerical methods: analysis, algebra, ordinary differential equations" , MIR  (1977)  (Translated from Russian)</TD></TR></table>
 
 
 
  
 
====Comments====
 

Revision as of 11:44, 10 February 2020


A branch of mathematics in which functions are studied under a discrete change of the argument, as opposed to differential and integral calculus, where the argument changes continuously. Let $ f $ be a function defined at the points $ x _ {k} = x _ {0} + kh $( where $ h $ is a constant and $ k $ is an integer). Then

$$ \Delta y _ {k} \ = \ \Delta f _ {k} \ = \ f (x _ {k + 1 } ) - f (x _ {k} ) $$

are the (finite) first-order differences,

$$ \Delta ^ {2} y _ {k} \ = \ \Delta ^ {2} f _ {k} \ = \ \Delta f _ {k + 1 } - \Delta f _ {k} \ = \ f (x _ {k + 2 } ) - 2f (x _ {k + 1 } ) + f (x _ {k} ) $$

are the second-order differences $ \dots $

$$ \Delta ^ {n} y _ {k} \ = \ \Delta ^ {n} f _ {k} \ = \ \Delta ^ {n - 1 } f _ {k + 1 } - \Delta ^ {n - 1 } f _ {k} $$

are the $ n $- th order differences. The differences are conveniently arranged in a table:

<table>
<tr><td>$ x $</td><td>$ y $</td><td>$ \Delta y $</td><td>$ \Delta ^ {2} y $</td><td>$ \Delta ^ {3} y $</td><td>$ \dots $</td></tr>
<tr><td>$ x _ {0} $</td><td>$ y _ {0} $</td><td></td><td></td><td></td><td></td></tr>
<tr><td></td><td></td><td>$ \Delta y _ {0} $</td><td></td><td></td><td></td></tr>
<tr><td>$ x _ {1} $</td><td>$ y _ {1} $</td><td></td><td>$ \Delta ^ {2} y _ {0} $</td><td></td><td></td></tr>
<tr><td></td><td></td><td>$ \Delta y _ {1} $</td><td></td><td>$ \Delta ^ {3} y _ {0} $</td><td></td></tr>
<tr><td>$ x _ {2} $</td><td>$ y _ {2} $</td><td></td><td>$ \Delta ^ {2} y _ {1} $</td><td></td><td>$ \dots $</td></tr>
<tr><td></td><td></td><td>$ \Delta y _ {2} $</td><td></td><td>$ \Delta ^ {3} y _ {1} $</td><td></td></tr>
<tr><td>$ x _ {3} $</td><td>$ y _ {3} $</td><td></td><td>$ \Delta ^ {2} y _ {2} $</td><td></td><td></td></tr>
<tr><td>$ x _ {4} $</td><td>$ y _ {4} $</td><td>$ \dots $</td><td>$ \dots $</td><td>$ \dots $</td><td></td></tr>
</table>

An $ n $- th order difference can be expressed in terms of the quantities $ y _ {0} ,\ y _ {1} \dots $ by the formula

$$ \Delta ^ {n} y _ {k} \ = \ y _ {k + n } - \left ( \begin{array}{c} n \\ 1 \end{array} \right ) y _ {k + n - 1 } + \left ( \begin{array}{c} n \\ 2 \end{array} \right ) y _ {k + n - 2 } - \dots + (-1) ^ {n} y _ {k} . $$
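Both the iterated definition and this binomial expansion are easy to check numerically (a small sketch; the sample sequence $ y _ {k} = k ^ {3} $ is an arbitrary choice):

```python
from math import comb

# Forward differences of a sample sequence, and the binomial expansion of the
# n-th difference (the sample y_k = k**3 is an arbitrary illustrative choice).

y = [k ** 3 for k in range(10)]

def delta(seq, n):
    """n-th forward difference sequence of seq."""
    for _ in range(n):
        seq = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
    return seq

assert delta(y, 3) == [6] * 7     # third differences of k**3 are constant: 3! = 6
assert delta(y, 4) == [0] * 6     # fourth differences vanish

# Delta^n y_k = sum_j (-1)**j * C(n, j) * y_{k+n-j}
n, k = 3, 2
binom_form = sum((-1) ** j * comb(n, j) * y[k + n - j] for j in range(n + 1))
assert binom_form == delta(y, n)[k]
```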

As well as the forward differences $ \Delta y _ {k} $, one uses the backward differences:

$$ \nabla y _ {k} \ = \ f (x _ {k} ) - f (x _ {k - 1 } ). $$

In a number of problems (in particular in the construction of interpolation formulas) central differences are used:

$$ \delta ^ {2m - 1 } y _ {i + 1/2 } ,\ \ \delta ^ {2m} y _ {i} , $$

which are defined as follows:

$$ \delta y _ {i + 1/2 } \ = \ y _ {i + 1 } - y _ {i} , $$

$$ \delta ^ {2} y _ {i} \ = \ \delta y _ {i + 1/2 } - \delta y _ {i - 1/2 } , $$

$$ {\dots \dots \dots \dots \dots } $$

$$ \delta ^ {2m - 1 } y _ {i +1/2 } \ = \ \delta ^ {2m - 2 } y _ {i + 1 } - \delta ^ {2m - 2 } y _ {i} , $$

$$ \delta ^ {2m} y _ {i} \ = \ \delta ^ {2m - 1 } y _ {i +1/2 } - \delta ^ {2m - 1 } y _ {i - 1/2 } . $$

The central differences $ \delta ^ {n} y _ {l} $ and the ordinary differences $ \Delta ^ {n} y _ {k} $ are related by the formula

$$ \delta ^ {2m} y _ {i} \ = \ \Delta ^ {2m} y _ {i - m } ,\ \ \delta ^ {2m + 1 } y _ {i + 1/2 } \ = \ \Delta ^ {2m + 1 } y _ {i - m } . $$
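This relation between central and ordinary differences can be verified on an arbitrary sample sequence (a sketch):

```python
from math import comb

# Check delta^{2m} y_i = Delta^{2m} y_{i-m} on an arbitrary sample sequence.

y = [1, 4, 2, 8, 5, 7, 11, 3, 9]

def forward(seq, n):
    """n-th forward difference sequence."""
    for _ in range(n):
        seq = [seq[i + 1] - seq[i] for i in range(len(seq) - 1)]
    return seq

# delta^{2m} y_i expands symmetrically around the node i:
#     delta^{2m} y_i = sum_j (-1)**j * C(2m, j) * y_{i+m-j}
m, i = 2, 4
central = sum((-1) ** j * comb(2 * m, j) * y[i + m - j] for j in range(2 * m + 1))

assert central == forward(y, 2 * m)[i - m]
```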

In the case when the intervals $ ( x _ {k + 1 } ,\ x _ {k} ) $ are not of constant size, one also considers divided differences:

$$ [ x _ {0} ] \ = \ y _ {0} ,\ \ [ x _ {1} ] \ = \ y _ {1} , $$

$$ [x _ {0} ; \ x _ {1} ] \ = \ \frac{y _ {0} - y _ {1} }{x _ {0} - x _ {1} } , $$

$$ [x _ {0} ; \ x _ {1} ; \ x _ {2} ] \ = \ \frac{[x _ {0} ; \ x _ {1} ] - [x _ {1} ; \ x _ {2} ] }{x _ {0} - x _ {2} } , $$

$$ {\dots \dots \dots \dots \dots \dots } $$

$$ [x _ {0} ; \dots ; \ x _ {n} ] \ = \ \frac{[x _ {0} ; \dots ; \ x _ {n - 1 } ] - [x _ {1} ; \dots ; \ x _ {n} ] }{x _ {0} - x _ {n} } . $$

The following formula holds:

$$ [x _ {0} ; \dots ; \ x _ {n} ] \ = \ \sum _ {j = 0 } ^ { n } \frac{y _ {j} }{\prod _ {i \neq j } (x _ {j} - x _ {i} ) } . $$

Sometimes, instead of $ [x _ {0} ; \dots ; \ x _ {n} ] $ the notation $ f (x _ {0} ; \dots ; \ x _ {n} ) $ is used. If $ x _ {n} = x _ {0} + nh $, $ n = 0,\ 1 \dots $ then

$$ [x _ {0} ; \dots ; \ x _ {n} ] \ = \ \frac{\Delta ^ {n} y _ {0} }{n! h ^ {n} } . $$
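Both the divided-difference recurrence and this equal-spacing identity can be checked directly (a sketch; the test polynomial is an arbitrary cubic):

```python
from math import factorial

# Divided differences on arbitrary nodes, and the equal-spacing identity
#     [x_0; ...; x_n] = Delta^n y_0 / (n! * h**n).

def divided(xs, ys):
    """Top entry [x_0; ...; x_n] of the divided-difference table."""
    d = list(ys)
    for level in range(1, len(xs)):
        for i in range(len(xs) - level):
            d[i] = (d[i] - d[i + 1]) / (xs[i] - xs[i + level])
    return d[0]

f = lambda x: x ** 3 - 2 * x                  # arbitrary cubic test function
h, n = 0.5, 3
xs = [1.0 + k * h for k in range(n + 1)]
ys = [f(x) for x in xs]

# n-th forward difference of ys
d = ys[:]
for _ in range(n):
    d = [d[i + 1] - d[i] for i in range(len(d) - 1)]

assert abs(divided(xs, ys) - d[0] / (factorial(n) * h ** n)) < 1e-12
# for a cubic, [x_0; x_1; x_2; x_3] equals the leading coefficient, here 1
assert abs(divided(xs, ys) - 1.0) < 1e-12
```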

If $ f $ has an $ n $- th derivative $ f ^ {\ (n) } $ throughout the interval $ ( x _ {k} ,\ x _ {k + n } ) $, then

$$ \Delta ^ {n} y _ {k} \ = \ h ^ {n} f ^ {\ (n) } ( \xi ),\ \ x _ {k} < \xi < x _ {k + n } . $$
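This mean-value property can be observed numerically (a sketch with $ f = \exp $, for which $ f ^ {\ (n) } = \exp $; the grid parameters are illustrative):

```python
import math

# Mean-value form of the n-th difference: Delta^n y_k = h**n * f^{(n)}(xi)
# for some xi in (x_k, x_{k+n}).  With f = exp, f^{(n)} = exp, so the scaled
# difference Delta^n y_0 / h**n must lie between exp(x_0) and exp(x_0 + n*h).

x0, h, n = 0.0, 0.1, 4
ys = [math.exp(x0 + k * h) for k in range(n + 1)]
for _ in range(n):
    ys = [ys[i + 1] - ys[i] for i in range(len(ys) - 1)]
scaled = ys[0] / h ** n

assert math.exp(x0) < scaled < math.exp(x0 + n * h)
```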

The calculus of finite differences is closely related to the general theory of approximation of functions, and is used in approximate differentiation and integration and in the approximate solution of differential equations, as well as in other questions. Suppose that the problem is posed (an interpolation problem) on recovering a function $ f $ from the known values of $ f $ at points $ x _ {0} \dots x _ {n} $. One constructs the interpolation polynomial $ P $ of degree $ n $ having the same values as $ f $ at the indicated points. It can be expressed in various forms: in the form of Lagrange, Newton, etc. In Newton's form it is:

$$ P _ {n} (x) \ = \ f (x _ {0} ) + [x _ {0} ; \ x _ {1} ] (x - x _ {0} ) + $$

$$ + \dots + [x _ {0} ; \dots ; \ x _ {n} ] (x - x _ {0} ) \dots (x - x _ {n - 1 } ), $$

and in the case when the values of the independent variable are equally spaced:

$$ P _ {n} (x) \ = \ f (x _ {0} ) + \sum _ {k = 1 } ^ { n } \frac{\Delta ^ {k} f (x _ {0} ) }{k! h ^ {k} } (x - x _ {0} ) \dots (x - x _ {k - 1 } ). $$

The function $ f $ is set approximately equal to $ P _ {n} $. If $ f $ has a derivative of order $ n + 1 $, then the error in replacing $ f $ by $ P _ {n} $ can be estimated by the formula

$$ f (x) \ = \ P _ {n} (x) + \frac{f ^ {\ (n + 1) } ( \xi ) }{(n + 1)! } (x - x _ {0} ) \dots (x - x _ {n} ), $$

where $ \xi $ lies in the interval in which the points $ x,\ x _ {0} \dots x _ {n} $ lie. If $ f $ is a polynomial of degree $ \leq n $, then $ f = P _ {n} $.
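A sketch of Newton's equally-spaced form (the test polynomial and step are illustrative choices); since the test function is a cubic and $ n = 3 $, the interpolation polynomial reproduces it exactly:

```python
from math import factorial

# Newton's interpolation polynomial over equally spaced nodes:
#     P_n(x) = f(x_0) + sum_k Delta^k f(x_0) / (k! h**k) * (x-x_0)...(x-x_{k-1})

f = lambda x: 2 * x ** 3 - x + 5              # test polynomial of degree n
x0, h, n = 0.0, 0.5, 3
xs = [x0 + k * h for k in range(n + 1)]

# forward-difference table: diffs[k] = Delta^k f(x_0)
row = [f(x) for x in xs]
diffs = [row[0]]
for _ in range(n):
    row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    diffs.append(row[0])

def newton(x):
    total, prod = 0.0, 1.0
    for k in range(n + 1):
        total += diffs[k] / (factorial(k) * h ** k) * prod
        prod *= x - xs[k]                      # extend (x-x_0)...(x-x_{k-1})
    return total

# exact reproduction of the cubic, including points outside the nodes
for x in [0.1, 0.77, 2.5, -1.0]:
    assert abs(newton(x) - f(x)) < 1e-9
```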

As the number of nodes increases without bound, the interpolation polynomial $ P _ {n} $ becomes in the limit a polynomial $ P $ of "infinite" degree, and the following question naturally arises: When does $ f = P $? In other words, when does the following equation hold:

$$ \tag{1 } f (x) \ = \ f (x _ {0} ) + \sum _ {k = 1 } ^ \infty \frac{\Delta ^ {k} f (x _ {0} ) }{k! h ^ {k} } (x - x _ {0} ) \dots (x - x _ {k - 1 } ) $$

(for simplicity, the case of equally-spaced intervals is considered)? Let $ x _ {0} = 0 $, $ h = 1 $, so that $ x _ {n} = n $( $ n \geq 0 $). If the series (1) converges at a point $ \alpha $ that is not a node (the series (1) always converges at the nodes $ x _ {0} ,\ x _ {1} ,\dots $), then it converges in the half-plane $ \mathop{\rm Re} \ x > \alpha $ and is an analytic function in this half-plane which, for every $ \epsilon > 0 $, satisfies the following condition in any half-plane $ \mathop{\rm Re} \ x \geq \beta > \alpha $:

$$ | f (re ^ {i \phi } ) | \ < \ c e ^ {r h ( \phi ) } r ^ {\alpha + \epsilon + 1/2 } , $$

$$ h ( \phi ) \ = \ \cos \ \phi \ \mathop{\rm ln} (2 \ \cos \ \phi ) + \phi \ \sin \ \phi . $$

Conversely, if $ f $ is analytic in some half-plane and has similar order of growth (or somewhat better than it), then it can be represented by the series (1). Thus, functions of a very narrow class (only the analytic functions of the indicated growth) can be expanded in a series (1) (a so-called Newton series). Newton series are studied when the nodes are general complex numbers. These series have found great application in transcendental number theory. Suppose next that the interpolation nodes form a triangular array

$$ \begin{array}{llll} x _ {0,0} &{} &{} &{} \\ x _ {1,0} &x _ {1,1} &{} &{} \\ \dots &\dots &\dots &\dots \\ x _ {n,0} &x _ {n,1} &\dots &x _ {n,n} \\ \dots &\dots &\dots &\dots , \\ \end{array} $$

and that the interpolation polynomial $ P _ {n} $ is constructed from the nodes of the $ (n + 1) $- th row. The class of functions $ f $ for which $ P _ {n} $ converges to $ f $ as $ n \rightarrow \infty $ depends on the array of nodes. E.g., if

$$ x _ {n,k} \ = \ \cos \ \frac{2k + 1 }{2n } \pi ,\ \ k = 0 \dots n $$

( $ x _ {n,k} $ are the roots of the Chebyshev polynomials), then in order that the interpolation process converges on the interval $ [-1 ,\ 1] $ it suffices that the following condition holds:

$$ \lim\limits _ {n \rightarrow \infty } \ \omega \left ( { \frac{1}{n} } \right ) \ \mathop{\rm ln} \ n \ = \ 0, $$

where $ \omega ( \delta ) $ is the modulus of continuity (cf. Continuity, modulus of) of $ f $ on $ [-1,\ 1] $.
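Interpolation at the Chebyshev nodes can be sketched as follows (the node count and the test function are illustrative choices; for the smooth function $ e ^ {x} $ the interpolant already matches to high accuracy):

```python
import math

# Lagrange interpolation at the n Chebyshev nodes
#     x_k = cos((2k+1) * pi / (2n)),  k = 0, ..., n-1,
# evaluated at a point x in [-1, 1].

def interp_cheb(f, n, x):
    """Value at x of the Lagrange interpolant of f at n Chebyshev nodes."""
    nodes = [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]
    total = 0.0
    for j, xj in enumerate(nodes):
        lj = 1.0
        for i, xi in enumerate(nodes):
            if i != j:
                lj *= (x - xi) / (xj - xi)     # Lagrange basis polynomial l_j(x)
        total += f(xj) * lj
    return total

# for exp, 16 nodes already give near machine-precision accuracy on [-1, 1]
err = max(abs(interp_cheb(math.exp, 16, t / 50.0) - math.exp(t / 50.0))
          for t in range(-50, 51))
assert err < 1e-10
```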

Another important problem in the calculus of finite differences is the problem of the summation of functions. Let $ f $ be a given function. It is required to find in finite form, exact or approximate, the sum

$$ S _ {n} \ = \ f (x _ {0} ) + f (x _ {0} + h) + \dots + f (x _ {0} + nh), $$

for fixed $ x _ {0} $, $ h $ and large $ n $, when certain analytic properties of $ f $ are known. In other words, the asymptotic behaviour of $ S _ {n} $ as $ n \rightarrow \infty $ is studied. Let $ x _ {0} = 0 $, $ h = 1 $( for simplicity) and suppose that a function $ F $ can be found such that

$$ \tag{2 } \Delta F (x) \ = \ F (x + 1) - F (x) \ = \ f (x). $$

Then

$$ \tag{3 } S _ {n} \ = \ F (n + 1) - F (0). $$

E.g., let $ f (x) = x ^ {2} $. The solution of (2) is sought in the form of a polynomial of degree three,

$$ Q (x) \ = \ ax ^ {3} + bx ^ {2} + cx , $$

with undetermined coefficients. On substituting into equation (2) and equating coefficients of corresponding powers of $ x $ on the left-hand and right-hand sides, one finds that the polynomial has the form:

$$ Q (x) \ = \ { \frac{x ^ {3} }{3} } - { \frac{x ^ {2} }{2} } + { \frac{x}{6} } \ = \ { \frac{x (x - 1) (2x - 1) }{6} } $$

and

$$ { \frac{n (n + 1) (2n + 1) }{6} } \ = \ 1 ^ {2} + 2 ^ {2} + \dots + n ^ {2} . $$
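The telescoping behind formula (3) is easy to verify with this $ Q $ (a minimal check):

```python
# Q(x) = x(x-1)(2x-1)/6 satisfies the difference equation Q(x+1) - Q(x) = x**2,
# so by (3) the sum 1**2 + ... + n**2 equals Q(n+1) - Q(0).

Q = lambda x: x * (x - 1) * (2 * x - 1) / 6

# Q solves equation (2) with f(x) = x**2 ...
for x in range(12):
    assert Q(x + 1) - Q(x) == x ** 2

# ... and therefore telescopes the sum of squares
n = 100
assert Q(n + 1) - Q(0) == sum(k ** 2 for k in range(1, n + 1))
assert Q(n + 1) - Q(0) == n * (n + 1) * (2 * n + 1) / 6
```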

The solution to equation (2) cannot always be obtained in finite form. It is therefore useful to have approximate formulas for the $ S _ {n} $. Such a formula is the Euler–MacLaurin formula. If $ f $ has $ k $ derivatives and $ k $ is even, the Euler–MacLaurin formula can be written as

$$ \sum _ {x = 0 } ^ { {n } - 1 } f (x) \ = \ \int\limits _ { 0 } ^ { n } f (x) \ dx + $$

$$ + \sum _ {\nu = 1 } ^ { {k } - 1 } \frac{B _ \nu }{\nu ! } \{ f ^ {\ ( \nu - 1) } (n) - f ^ {\ ( \nu - 1) } (0) \} + \frac{B _ {k} }{k! } \sum _ {x = 0 } ^ { {n } - 1 } f ^ {\ (k) } (x + \theta ), $$

where $ 0 < \theta < 1 $( in general $ \theta $ depends on $ n $), and the $ B _ \nu $ are the Bernoulli numbers. If $ f $ is a polynomial of degree less than $ k $, the remainder term vanishes.
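A small check of the formula for $ f (x) = x ^ {2} $ and $ k = 4 $, using exact rational arithmetic; the values $ B _ {1} = -1/2 $, $ B _ {2} = 1/6 $, $ B _ {3} = 0 $ are used, in the sign convention that matches the formula above:

```python
from fractions import Fraction

# Euler-MacLaurin for f(x) = x**2 with k = 4: since deg f < k, the remainder
# term vanishes and the formula reproduces the sum exactly.

n = 10
integral = Fraction(n ** 3, 3)                  # int_0^n x**2 dx = n**3 / 3
B = {1: Fraction(-1, 2), 2: Fraction(1, 6), 3: Fraction(0)}
fact = {1: 1, 2: 2, 3: 6}
deriv = {0: lambda x: x ** 2, 1: lambda x: 2 * x, 2: lambda x: 2}

total = integral
for nu in range(1, 4):                          # nu = 1, ..., k-1 with k = 4
    total += B[nu] / fact[nu] * (deriv[nu - 1](n) - deriv[nu - 1](0))

assert total == sum(x ** 2 for x in range(n))   # 0**2 + 1**2 + ... + 9**2 = 285
```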

There is a similarity between the problems of the calculus of finite differences and those of differential and integral calculus. The operation of finding the difference corresponds to that of finding the derivative; the solution of equation (2), which, as an operation, is the inverse of finding the finite difference, corresponds to finding a primitive, that is, an indefinite integral. Formula (3) is a direct analogue of the Newton–Leibniz formula. This similarity emerges in considering finite-difference equations. By a finite-difference equation is meant a relation

$$ F (x,\ f (x),\ \Delta f (x) \dots \Delta ^ {n} f (x)) \ = \ 0, $$

where $ F $ is a given function and $ f $ is the required function. If all the $ \Delta ^ {n} f (x) $ are expressed in terms of $ f (x),\ f (x + 1) \dots f (x + n) $, then the finite-difference equation is written in the form

$$ \tag{4 } \Phi (x,\ f (x),\ f (x + 1) \dots f (x + n)) \ = \ 0. $$

Its solution with respect to $ f (x + n) $ is:

$$ \tag{5 } f (x + n) \ = \ \psi (x,\ f (x) \dots f (x + n - 1)). $$

For given initial values $ f (x _ {0} ) \dots f (x _ {0} + n - 1) $ one can successively find $ f (x _ {0} + n),\ f (x _ {0} + n + 1) $, etc. After solving (4) with respect to $ f (x) $:

$$ f (x) \ = \ \phi (x,\ f (x + 1) \dots f (x + n)), $$

one can, by setting $ x = x _ {0} - 1 $, find $ f (x _ {0} - 1) $, then $ f (x _ {0} - 2) $, etc. Thus, from the equation in terms of the initial data one can find the values of $ f $ at all the points $ x _ {0} + k $, where $ k $ is an integer. Consider, for $ x = 0,\ 1 \dots $ the linear equation

$$ \tag{6 } f (x + k) + P _ {1} (x) f (x + k - 1) + \dots + P _ {k} (x) f (x) \ = \ Q (x), $$

where $ P _ {1} (x) \dots P _ {k} (x) $ and $ Q (x) $ are given functions on the set $ x = 0,\ 1 ,\dots $. The general solution of the inhomogeneous equation (6) is the sum of a particular solution of the inhomogeneous equation and the general solution of the homogeneous equation

$$ \tag{7 } f (x + k) + P _ {1} (x) f (x + k - 1) + \dots + P _ {k} (x) f (x) \ = \ 0. $$

If $ f _ {1} \dots f _ {k} $ are linearly independent solutions of (7), then the general solution of (7) is given by the formula

$$ f (x) \ = \ c _ {1} f _ {1} (x) + \dots + c _ {k} f _ {k} (x), $$

where $ c _ {1} \dots c _ {k} $ are arbitrary constants. The constants $ c _ {1} \dots c _ {k} $ can be found by prescribing the initial conditions, that is, the values of $ f (0) \dots f (k - 1) $. Linearly independent solutions $ f _ {1} \dots f _ {k} $( a fundamental system) are easily found in the case of an equation with constant coefficients:

$$ \tag{8 } f (x + k) + a _ {1} f (x + k - 1) + \dots + a _ {k} f (x) \ = \ 0. $$

The solution of (8) is sought in the form $ f (x) = \lambda ^ {x} $. The characteristic equation for $ \lambda $ is:

$$ \lambda ^ {k} + a _ {1} \lambda ^ {k - 1 } + \dots + a _ {k} \ = \ 0. $$

Suppose that its roots $ \lambda _ {1} \dots \lambda _ {k} $ are all distinct. Then the system $ \lambda _ {1} ^ {x} \dots \lambda _ {k} ^ {x} $ is a fundamental system of solutions of (8) and the general solution of (8) can be represented by the formula:

$$ f (x) \ = \ c _ {1} \lambda _ {1} ^ {x} + \dots + c _ {k} \lambda _ {k} ^ {x} . $$

If $ \lambda _ {1} $ is a root of the characteristic equation of multiplicity $ s $, then the particular solutions corresponding to it are $ \lambda _ {1} ^ {x} ,\ x \lambda _ {1} ^ {x} \dots x ^ {s - 1 } \lambda _ {1} ^ {x} $.
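That the extra solutions attached to a multiple root really satisfy the equation can be verified directly. A small check (the equation $ f (x + 2) - 4 f (x + 1) + 4 f (x) = 0 $, whose characteristic equation $ ( \lambda - 2) ^ {2} = 0 $ has the double root $ \lambda = 2 $, is an illustrative choice):

```python
# Characteristic equation lambda^2 - 4*lambda + 4 = 0 has the double
# root lambda = 2, so both 2^x and x * 2^x should solve
#   f(x + 2) - 4 f(x + 1) + 4 f(x) = 0.
def residual(f, x):
    return f(x + 2) - 4 * f(x + 1) + 4 * f(x)

sol1 = lambda x: 2**x
sol2 = lambda x: x * 2**x

residuals = [residual(s, x) for s in (sol1, sol2) for x in range(10)]
```

All residuals are zero, in integer arithmetic, for both solutions.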

Suppose, for example, that one considers the sequence of numbers starting with 0 and 1 in which every successive number is equal to the sum of its two immediate predecessors: 0, 1, 1, 2, 3, 5, 8, 13 $ ,\dots $( the Fibonacci numbers). One seeks an expression for the general term of the sequence. Let $ f (x) $, $ x = 0,\ 1 \dots $ be the general term of the sequence; the conditions

$$ f (x + 2) \ = \ f (x + 1) + f (x),\ \ f (0) \ = \ 0,\ \ f (1) \ = \ 1, $$

form a difference equation with given initial conditions. The characteristic equation has the form $ \lambda ^ {2} - \lambda - 1 = 0 $, its roots being $ \lambda _ {1} = (1 + \sqrt 5 )/2 $, $ \lambda _ {2} = (1 - \sqrt 5 )/2 $. Therefore

$$ f (x) \ = \ c _ {1} \left ( { \frac{1 + \sqrt 5 }{2} } \right ) ^ {x} + c _ {2} \left ( { \frac{1 - \sqrt 5 }{2} } \right ) ^ {x} ; $$

$ c _ {1} = 1/ \sqrt 5 $ and $ c _ {2} = - 1/ \sqrt 5 $ being found from the initial conditions.
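The closed form obtained above can be compared directly against the recurrence; a short sketch:

```python
from math import sqrt

def fib_closed(x):
    """Closed form for the Fibonacci numbers derived above:
    f(x) = (r1^x - r2^x) / sqrt(5), with r1, r2 the characteristic roots."""
    r1 = (1 + sqrt(5)) / 2
    r2 = (1 - sqrt(5)) / 2
    return (r1**x - r2**x) / sqrt(5)

# The recurrence itself: f(x + 2) = f(x + 1) + f(x), f(0) = 0, f(1) = 1.
fib = [0, 1]
for _ in range(18):
    fib.append(fib[-1] + fib[-2])

# Rounding absorbs floating-point error in the closed form.
closed = [round(fib_closed(x)) for x in range(20)]
```

The two sequences agree term by term.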

Equation (4) can be considered not only when $ x $ varies discretely, taking the values $ 0,\ 1 \dots $ but also when $ x $ varies continuously. Let $ f $ be arbitrarily defined in the half-open interval $ [0,\ n) $. From (5) one obtains $ f (n) $ by setting $ x = 0 $. If $ f $ is continuously defined in $ [0,\ n) $, then $ f $ may prove to be discontinuous in the closed interval $ [0,\ n] $. If one wishes to deal with continuous solutions, $ f $ needs to be defined on $ [0,\ n) $ in such a way that by virtue of (5) it proves to be continuous in $ [0,\ n] $. Knowledge of $ f $ in $ [0,\ n] $ enables one to find from (5) the values $ f (x) $ for $ x \in (n,\ n + 1] $, then for $ x \in (n + 1,\ n + 2] $, etc.
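The stepwise extension just described can be sketched as follows (the first-order equation $ f (x + 1) = 2 f (x) $ and the initial data $ f (x) = 2 ^ {x} $ on $ [0,\ 1) $, chosen so that the extension is continuous, are illustrative assumptions):

```python
from math import floor

def solve_on_reals(phi0, psi, x):
    """Evaluate the solution of f(x + 1) = psi(x, f(x)) at real x >= 0,
    given f = phi0 on the initial interval [0, 1)."""
    k = floor(x)                 # number of unit steps back to [0, 1)
    t = x - k                    # fractional part, lies in [0, 1)
    f = phi0(t)
    for j in range(k):           # march forward one unit at a time
        f = psi(t + j, f)
    return f

# Illustrative equation f(x + 1) = 2 f(x) with initial data f(x) = 2^x
# on [0, 1); the data are matched at x = 1, so the extension is
# continuous and the solution is f(x) = 2^x for all x >= 0.
val = solve_on_reals(lambda t: 2.0**t, lambda x, f: 2 * f, 3.5)
```

The computed value agrees with $ 2 ^ {3.5} $, as the matched initial data predict.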

More general than (8) is the equation

$$ \tag{9 } f (x + h _ {k} ) + a _ {1} f (x + h _ {k - 1 } ) + \dots + a _ {k} f (x) \ = \ 0. $$

Here $ 0 < h _ {1} < \dots < h _ {k} $ are not necessarily integers and are not necessarily commensurable relative to one another. Equation (9) has the particular solutions

$$ f (x) \ = \ e ^ {\lambda x } , $$

where $ \lambda $ is a root of the equation

$$ L ( \lambda ) \ \equiv \ e ^ {h _ {k} \lambda } + a _ {1} e ^ {h _ {k - 1 } \lambda } + \dots + a _ {k} \ = \ 0. $$

This equation has an infinite number of roots $ \lambda _ {1} ,\ \lambda _ {2} ,\dots $. Consequently, (9) has an infinite number of particular solutions $ e ^ {\lambda _ {m} x } $, $ m = 1,\ 2 ,\dots $. Suppose that all the roots are simple. To express the solutions of (9) in terms of these elementary particular solutions, it is convenient to write the equation in the form:

$$ \tag{10 } \int\limits _ { 0 } ^ \alpha f (x + t) \ d \sigma (t) \ = \ 0,\ \ \alpha = h _ {k} , $$

where $ \sigma (t) $ is a step function having jumps at the points $ 0,\ h _ {1} \dots h _ {k} $, equal to $ a _ {k} ,\ a _ {k-1} \dots 1 $, respectively. Let

$$ \psi _ \nu (t) \ = \ \frac{- e ^ {- \lambda _ \nu t } }{L ^ \prime ( \lambda _ \nu ) } \int\limits _ { 0 } ^ { t } e ^ {\lambda _ \nu \xi } \ d \sigma ( \xi ),\ \ \nu \geq 1. $$

The functions $ \psi _ \nu (t) $ have the property:

$$ \int\limits _ { 0 } ^ \alpha e ^ {\lambda _ {m} t } \psi _ \nu (t) \ dt \ = \ \delta _ {m \nu } $$

( $ \delta _ {m \nu } = 1 $ if $ m = \nu $, and $ \delta _ {m \nu } = 0 $ if $ m \neq \nu $), that is, they form a system that is bi-orthogonal to the system $ \{ e ^ {\lambda _ {m} t } \} $. On this basis, the solution $ f $ of (10) corresponds to the series

$$ \tag{11 } f (x) \ \sim \ \sum _ {\nu = 1 } ^ \infty c _ \nu e ^ {\lambda _ \nu x } , $$

$$ c _ \nu \ = \ \int\limits _ { 0 } ^ \alpha f (t) \psi _ \nu (t) \ dt. $$
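The bi-orthogonality relation can be verified numerically for a concrete equation of the form (9). A sketch (the equation $ f (x + 1) - 3 f (x + 1/2) + 2 f (x) = 0 $ and the two real roots of its characteristic function are illustrative choices; the integrals are evaluated exactly, using the fact that $ \sigma $ is a step function):

```python
from math import exp, log

# Illustrative equation (9):  f(x + 1) - 3 f(x + 1/2) + 2 f(x) = 0,
# i.e. h_1 = 1/2, h_2 = 1, a_1 = -3, a_2 = 2, alpha = 1.  Here
# L(lam) = e^lam - 3 e^(lam/2) + 2; substituting mu = e^(lam/2) gives
# mu^2 - 3 mu + 2 = 0, so two real roots are lam = 0 and lam = 2 ln 2.
jumps = [(0.0, 2.0), (0.5, -3.0), (1.0, 1.0)]  # (point, jump of sigma)
roots = [0.0, 2 * log(2)]

def Lprime(lam):
    return exp(lam) - 1.5 * exp(lam / 2)

def integral(lam_m, lam_nu):
    """Exact value of int_0^1 e^(lam_m t) psi_nu(t) dt, using the fact
    that int_0^t e^(lam_nu xi) d sigma(xi) is piecewise constant."""
    pts = [p for p, _ in jumps]
    total, d, g = 0.0, lam_m - lam_nu, 0.0
    for i, (p, a) in enumerate(jumps[:-1]):
        g += a * exp(lam_nu * p)             # running Stieltjes sum
        lo, hi = p, pts[i + 1]
        if abs(d) < 1e-12:                   # segment of e^(d t)
            seg = hi - lo
        else:
            seg = (exp(d * hi) - exp(d * lo)) / d
        total += g * seg
    return -total / Lprime(lam_nu)

gram = [[integral(lm, ln) for ln in roots] for lm in roots]
```

The matrix `gram` comes out as the identity, confirming the relation $ \int _ {0} ^ \alpha e ^ {\lambda _ {m} t } \psi _ \nu (t) \ dt = \delta _ {m \nu } $ for these two roots.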

If (9) has the form

$$ \tag{12 } f (x + 2 \pi ) - f (x) \ = \ 0 $$

(that is, $ f $ is a periodic function with period $ 2 \pi $), then $ L ( \lambda ) = e ^ {2 \pi \lambda } - 1 $, the roots of the equation $ L ( \lambda ) = 0 $ are $ mi $( $ m = 0,\ \pm 1 ,\dots $), and (11) is the Fourier series of $ f $ in complex form. The series (11) can be regarded as a generalization to the case of the difference equation (9) of the ordinary Fourier series corresponding to the simplest difference equation (12). Under certain conditions the series (11) converges to the solution $ f $. If $ f $ is an analytic function, then (9) is expressible in the form of an equation of infinite order

$$ \sum _ {n = 0 } ^ \infty \alpha _ {n} f ^ {\ (n) } (x) \ = \ 0. $$

Differences of functions of several variables are introduced by analogy with differences of functions of one variable. For example, suppose that it is required to solve the problem of finding numerically the solution of the Laplace equation

$$ \frac{\partial ^ {2} u }{\partial x ^ {2} } + \frac{\partial ^ {2} u }{\partial y ^ {2} } \ = \ 0 $$

in the rectangle $ 0 \leq x \leq a $, $ 0 \leq y \leq b $, for given values of $ u $ on the boundary of the rectangle. The rectangle is divided into small rectangular cells with sides $ \Delta x = a/N $, $ \Delta y = b/M $. The values of the solution are sought at the vertices of these cells. At the vertices that lie on the boundary of the original rectangle, the values of $ u $ are known. By applying the approximations (whose numerators are second-order differences)

$$ \frac{\partial ^ {2} u }{\partial x ^ {2} } \ \approx \ \frac{u (x + \Delta x,\ y) - 2u (x,\ y) + u (x - \Delta x,\ y) }{\Delta x ^ {2} } , $$

$$ \frac{\partial ^ {2} u }{\partial y ^ {2} } \ \approx \ \frac{u (x,\ y + \Delta y) - 2u (x,\ y) + u (x,\ y - \Delta y) }{\Delta y ^ {2} } , $$

one obtains instead of the Laplace equation the following system of equations:

$$ \frac{u (x + \Delta x,\ y) - 2u (x,\ y) + u (x - \Delta x,\ y) }{\Delta x ^ {2} } + $$

$$ + \frac{u (x,\ y + \Delta y) - 2u (x,\ y) + u (x,\ y - \Delta y) }{\Delta y ^ {2} } \ = \ 0. $$

The point $ (x,\ y) $ runs through those vertices of cells that are situated in the interior of the original rectangle. Thus, a system of $ (N - 1) (M - 1) $ equations is constructed containing the same number of unknowns. By solving this algebraic system of equations, the values of $ u $ at the vertices of the rectangles are obtained. When $ \Delta x $ and $ \Delta y $ are small and the solution of the problem has a specified smoothness, the values so obtained are close to the exact values.
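A minimal iterative sketch of this scheme on the unit square with $ \Delta x = \Delta y $ (the Gauss–Seidel iteration and the boundary data $ u = x ^ {2} - y ^ {2} $, a harmonic polynomial for which the five-point scheme happens to be exact, are illustrative choices):

```python
# Gauss-Seidel solution of the five-point scheme on the unit square
# with boundary data u = x^2 - y^2, a harmonic polynomial; since the
# second differences of quadratics are exact, the discrete solution
# coincides with x^2 - y^2 at the grid points.
N = 8                                   # cells per side; Dx = Dy = 1/N
h = 1.0 / N

def boundary(i, j):
    x, y = i * h, j * h
    return x * x - y * y

# Boundary values fixed; interior values start at 0.
u = [[boundary(i, j) if i in (0, N) or j in (0, N) else 0.0
      for j in range(N + 1)] for i in range(N + 1)]

# With Dx = Dy the difference equations say that each interior value
# equals the average of its four neighbours; sweep until convergence.
for _ in range(2000):
    for i in range(1, N):
        for j in range(1, N):
            u[i][j] = 0.25 * (u[i + 1][j] + u[i - 1][j]
                              + u[i][j + 1] + u[i][j - 1])

err = max(abs(u[i][j] - boundary(i, j))
          for i in range(N + 1) for j in range(N + 1))
```

For general smooth boundary data the computed values would only approximate the solution, with an error that shrinks as $ \Delta x $ and $ \Delta y $ decrease.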

The calculus of finite differences developed in parallel with the main branches of mathematical analysis. It first appeared in works of P. Fermat, I. Barrow and G. Leibniz, and in the 18th century it acquired the status of an independent mathematical discipline. The first systematic account of the calculus of finite differences was given by B. Taylor in 1715. The researches of mathematicians of the 19th century prepared the ground for the modern branches of the calculus of finite differences. The ideas and methods of the calculus of finite differences have been considerably developed in their application to analytic functions of a complex variable and to problems of numerical mathematics.

References

[1] A.A. Markov, "The calculus of finite differences" , Odessa (1910) (In Russian)
[2] I.S. Berezin, N.P. Zhidkov, "Computing methods" , Pergamon (1973) (Translated from Russian)
[3] A.O. Gel'fond, "Differenzenrechnung" , Deutsch. Verlag Wissenschaft. (1958) (Translated from Russian)
[4] N.S. Bakhvalov, "Numerical methods: analysis, algebra, ordinary differential equations" , MIR (1977) (Translated from Russian)

Comments

See also Interpolation; Interpolation formula; Lagrange interpolation formula; Newton interpolation formula; Approximation of functions, linear methods.

References

[a1] L.M. Milne-Thomson, "The calculus of finite differences" , Macmillan (1933) Zbl 0008.01801; reprinted Dover (1981) Zbl 0477.39001
[a2] A.A. Samarskii, "Theorie der Differenzverfahren" , Akad. Verlagsgesell. Geest & Portig K.-D. (1984) (Translated from Russian) Zbl 0543.65067
[b1] Maurice V. Wilkes, "A short introduction to numerical analysis", Cambridge University Press (1966) ISBN 0-521-09412-7 Zbl 0149.10902
This article was adapted from an original article by A.F. Leont'ev (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article