# Trigonometric pseudo-spectral methods

Trigonometric pseudo-spectral methods, and spectral methods in general, are methods for solving differential and integral equations using trigonometric functions as the basis.

Suppose the boundary value problem $L u = f$ is to be solved for $u ( x )$ on the interval $x \in [ a , b ]$, where $L$ is a differential operator in $x$ and $f ( x )$ is a given smooth function (cf. also Boundary value problem, ordinary differential equations). In addition, $u$ must satisfy given boundary conditions $u ( a ) = u _ { a }$ and $u ( b ) = u _ { b }$.

As in most numerical methods, an approximate solution $u _ { N }$ is sought as a sum of $N + 1$ basis functions $\phi _ { n } ( x )$, $n = 0 , \ldots , N$, in the form $u _ { N } = \sum _ { n = 0 } ^ { N } a _ { n } \phi _ { n } ( x )$, where the coefficients $a _ { n }$ are the finite set of unknowns for the approximate solution. A "residual equation", formed by substituting the approximate solution into the differential equation and subtracting the right-hand side, $R ( x ; a _ { 0 } , \dots , a _ { N } ) \equiv L [ u _ { N } ( x ) ] - f$, is then minimized over the interval to find the coefficients. The differences between methods come down to the choice of basis and of how $R$ is minimized. The basis functions should be easy to compute, be complete (or at least represent the desired class of functions to high accuracy), and be orthogonal (cf. also Complete system of functions; Orthogonal system). In spectral methods, trigonometric functions and their relatives, as well as other orthogonal polynomials, are used.

If the basis functions are trigonometric functions such as sines or cosines, the method is said to be a Fourier spectral method. If, instead, Chebyshev polynomials are used, the method is a Chebyshev spectral method.

The method of mean weighted residuals is used to minimize $R$ and find the unknown coefficients $a _ { n }$. An inner product $( u , v ) = \int _ { a } ^ { b } u ( x ) v ( x ) \rho ( x ) d x$ with weight function $\rho ( x )$ is defined, together with $N + 1$ test functions $w _ { i }$, and the coefficients are determined by requiring $( w _ { i } , R ) = 0$ for $i = 0 , \ldots , N$. This yields $N + 1$ equations for the $N + 1$ unknowns. Pseudo-spectral methods, including Fourier pseudo-spectral and Chebyshev pseudo-spectral methods, have Dirac delta-functions (cf. also Dirac distribution) as their test functions: $w _ { i } ( x ) = \delta ( x - x _ { i } )$, where the $x _ { i }$ are interpolation or collocation points. The residual equation becomes $R ( x _ { i } ; a _ { 0 } , \dots , a _ { N } ) = 0$ for $i = 0 , \ldots , N$.
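As an illustration, the collocation conditions $R(x_i) = 0$ can be assembled as a small linear system. The following sketch assumes, for the example only, the model operator $L u = u'' - u$ on $[0, 2\pi)$, a five-function Fourier basis, and the right-hand side $f = -2 \sin x$ (chosen so that the exact solution $\sin x$ lies in the span of the basis):

```python
import numpy as np

# Assumed example basis: 1, cos x, sin x, cos 2x, sin 2x  (N + 1 = 5 unknowns a_n)
def phi(n, x):
    if n == 0:
        return np.ones_like(x)
    k = (n + 1) // 2
    return np.cos(k * x) if n % 2 == 1 else np.sin(k * x)

def L_phi(n, x):
    # L[phi_n] = phi_n'' - phi_n; for this basis phi_n'' = -k^2 phi_n
    if n == 0:
        return -np.ones_like(x)
    k = (n + 1) // 2
    return -(k**2 + 1) * phi(n, x)

x = 2 * np.pi * np.arange(5) / 5        # collocation points x_i
f = -2 * np.sin(x)                      # chosen so the exact solution is sin x

# Enforce R(x_i) = 0:  sum_n a_n L[phi_n](x_i) = f(x_i)
A = np.column_stack([L_phi(n, x) for n in range(5)])
a = np.linalg.solve(A, f)
```

Because the exact solution is in the span of the basis, the residual can be made identically zero and the computed coefficients are $a_2 = 1$ (the $\sin x$ mode) and zero otherwise, up to rounding.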

The Galerkin method uses the basis functions themselves as the test functions. If $L$ is linear, the matrix equation $L _ { m , n } a _ { n } = f _ { m }$ can be formed, where $L _ { m , n } = ( \phi _ { m } , L \phi_{ n } )$ and $f _ { m } = ( f , \phi _ { m } )$. An alternative Galerkin formulation is found by transforming the residual equation into spectral space, $R ( x ; a _ { 0 } , \dots , a _ { N } ) = \sum _ { n } r _ { n } ( a _ { 0 } , \dots , a _ { N } ) \phi _ { n } ( x )$, and setting $r _ { n } = 0$ for $n = 0 , \ldots , N$. When Gauss–Jacobi integration is used to evaluate the inner products of the Galerkin method, the integrands are interpolated at the zeros of the $( N + 1 )$-st basis function. By using this same set of points as collocation points for a pseudo-spectral method, the two methods are made equivalent. Problems can thus be cast in either grid-point or spectral-coefficient representation. For trigonometric bases, this result allows the computational complexity of many problems to be reduced through the use of fast transforms.
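For a Fourier basis and a linear constant-coefficient operator, the system is diagonal in coefficient space, and the passage between grid-point and coefficient representations is a fast transform. A minimal sketch, again assuming the model problem $u'' - u = f$ with $f = -2\sin x$ on a periodic grid:

```python
import numpy as np

N = 16
x = 2 * np.pi * np.arange(N) / N
f = -2 * np.sin(x)                       # chosen so the exact solution is sin x

k = np.fft.fftfreq(N, d=1.0 / N)         # integer wavenumbers 0..N/2-1, -N/2..-1
u_hat = np.fft.fft(f) / (-k**2 - 1)      # u'' - u is diagonal: (-k^2 - 1) u_hat_k
u = np.fft.ifft(u_hat).real

err = np.max(np.abs(u - np.sin(x)))
```

The whole solve costs two FFTs and a pointwise division, $O(N \log N)$, instead of a dense matrix solve; `err` is at the level of rounding error.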

A main difference between spectral and other methods, such as finite difference or finite element methods, is that in the latter the domain is divided into smaller subdomains in which local basis functions of low order are used. With the basis functions frozen, more accuracy is gained by decreasing the size of the subdomains. In spectral methods, the domain is not subdivided, but global basis functions of high order are used. Accuracy is gained by increasing the number and order of the basis functions.

The lower-order methods produce algebraic systems that can be represented as sparse matrices; spectral methods usually produce full matrices. Solving the system then amounts to inverting a full matrix. Through the use of orthogonality and fast transforms, however, this can usually be accomplished with a complexity comparable to that of the sparse systems.

Boundary conditions are handled in a reasonably straightforward manner. Sometimes they are satisfied automatically, as with periodic conditions and a Fourier method. For other types of conditions, an extra equation may be added to the system to enforce them, or the basis functions may be modified so that each basis function satisfies the conditions automatically.
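The second strategy can be sketched as follows: for homogeneous Dirichlet conditions $u(0) = u(\pi) = 0$, every function $\sin n x$ already vanishes at both endpoints, so a sine basis builds the conditions in. The model problem $u'' = f$ with $f = -4\sin 2x$ (exact solution $\sin 2x$) is an assumption for the example:

```python
import numpy as np

N = 8
x = np.pi * np.arange(1, N + 1) / (N + 1)    # interior collocation points
f = -4 * np.sin(2 * x)                       # chosen so the exact solution is sin 2x

# Each basis function sin(n x) vanishes at x = 0 and x = pi, so the
# Dirichlet conditions are satisfied automatically by any expansion.
# For u'' = f:  L[sin(n x)] = -n^2 sin(n x).
A = np.array([[-n**2 * np.sin(n * xi) for n in range(1, N + 1)] for xi in x])
a = np.linalg.solve(A, f)
```

No extra boundary equations are needed; the computed coefficients are $a_1 = 1$ for the $\sin 2x$ mode (index 1 here, since indexing starts at $n = 1$) and zero otherwise, up to rounding.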

The attraction of spectral methods is their faster-than-algebraic convergence for smooth solutions. A simple finite-difference approximation converges as $| u - u _ { N } | = O ( h ^ { \alpha } )$, where $h = ( b - a ) / N$ and $\alpha$ is a fixed integer: doubling the number of points in the interval reduces the error by a factor $2 ^ { \alpha }$. For spectral methods the error is $O ( h ^ { k } )$ for every fixed $k$, a rate sometimes called exponential, infinite-order or spectral, stemming from the fact that the coefficients of the trigonometric series of an analytic function decay geometrically. If the solution is not smooth, however, spectral methods have only an algebraic convergence rate, linked to the degree of continuity of the solution.

Rapid convergence allows fewer unknowns to be used, at the cost of more computational work per unknown. Hence spectral methods are particularly attractive when memory, rather than computing power, is the critical factor.

Multi-dimensional problems are handled by tensor-product basis functions, i.e. basis functions that are products of one-dimensional basis functions. Other orthogonal functions can also be used in pseudo-spectral methods, such as Legendre and Hermite polynomials, and spherical harmonics for spherical geometries.
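With a tensor-product Fourier basis, a constant-coefficient operator remains diagonal in the two-dimensional coefficient space. A sketch under the assumption of the model problem $u_{xx} + u_{yy} - u = f$ on a periodic square, with $f$ chosen so the exact solution is $\sin x \cos y$:

```python
import numpy as np

N = 32
x = 2 * np.pi * np.arange(N) / N
X, Y = np.meshgrid(x, x, indexing="ij")
f = -3 * np.sin(X) * np.cos(Y)          # chosen so the exact solution is sin(x)cos(y)

k = np.fft.fftfreq(N, d=1.0 / N)
KX, KY = np.meshgrid(k, k, indexing="ij")

# The tensor-product basis exp(i kx x) exp(i ky y) diagonalizes the operator:
u_hat = np.fft.fft2(f) / (-KX**2 - KY**2 - 1)
u = np.fft.ifft2(u_hat).real

err = np.max(np.abs(u - np.sin(X) * np.cos(Y)))
```

The two-dimensional transform is just the one-dimensional transform applied along each axis, which is how tensor-product bases keep the fast-transform advantage in several dimensions.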

A disadvantage of spectral methods is that only relatively simple domains and boundaries can be handled. Spectral element methods, a combination of spectral and finite element methods, have in many cases overcome this difficulty. Another difficulty is that spectral methods are, in general, more complicated to code and require more analysis to be done prior to coding than simpler methods.

Aliasing is a phenomenon in which modes of degree higher than those in the expansion are misinterpreted as modes within the range of the expansion. It occurs in, say, a problem with a quadratic non-linearity, where products create modes of up to twice the original range. If the coefficients near the upper limit are sufficiently large in magnitude, there may be a significant error associated with the aliased modes. For a Fourier pseudo-spectral method, the coefficient $a _ { N / 2 + k }$ is interpreted as the coefficient $a _ { N / 2 - k }$. By zeroing the upper $1 / 3$ of the coefficients, the quadratic non-linearity will only fill modes from $( 2 / 3 ) ( N / 2 )$ to $( 4 / 3 ) ( N / 2 )$; aliasing errors are then produced only in modes $( 2 / 3 ) ( N / 2 )$ to $N / 2$, but these are to be zeroed anyway. This "$2/3$ rule" removes aliasing errors for one-dimensional problems with quadratic non-linearity. It is debatable, however, whether a "well-resolved" simulation needs any treatment of aliasing errors at all.
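The $2/3$ rule for a quadratic non-linearity can be sketched as a small helper (the function name and test data are assumptions for the example):

```python
import numpy as np

def dealiased_product(u, v):
    """Pointwise product of two periodic grid functions with the 2/3 rule."""
    N = len(u)
    k = np.fft.fftfreq(N, d=1.0 / N)
    keep = np.abs(k) < N / 3.0              # zero the upper third of the modes
    u_hat = np.fft.fft(u) * keep
    v_hat = np.fft.fft(v) * keep
    w = np.fft.ifft(u_hat).real * np.fft.ifft(v_hat).real
    # Aliased content from the product lands in the zeroed band and is removed.
    return np.fft.ifft(np.fft.fft(w) * keep).real
```

For example, on $N = 16$ points the product $\cos 5x \cdot \cos 5x = 1/2 + (1/2)\cos 10 x$ generates mode $10 > N/2$, which would alias onto mode $6$; the rule removes it, leaving the correctly truncated result, the constant $1/2$.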

How to Cite This Entry:
Trigonometric pseudo-spectral methods. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Trigonometric_pseudo-spectral_methods&oldid=50082
This article was adapted from an original article by Richard B. Pelz (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.