
Monte-Carlo methods for partial differential equations



Deterministic partial differential equations can be solved numerically by probabilistic algorithms such as Monte-Carlo methods, stochastic particle methods, ergodic algorithms, etc. (cf. also Differential equation, partial).

The simplest example is the heat equation in $ C ^ {1,2 } ( ( 0,T ] \times \mathbf R ^ {d} ) $:

$$ \left \{ \begin{array}{l} { \frac{\partial u }{\partial t } } = { \frac{1}{2} } \Delta u, \\ u ( t,x ) \rightarrow u _ {0} ( x ) \textrm{ as } t \rightarrow 0 \\ \textrm{ for all } x \textrm{ at which } u _ {0} \textrm{ is continuous } . \end{array} \right . $$

For a large class of functions $ u _ {0} $, the unique solution is $ u ( t,x ) = g _ {t} * u _ {0} ( x ) $, where $ g _ {t} $ is the heat kernel. Thus, $ u ( t,x ) = {\mathsf E} u _ {0} ( W _ {t} ( x ) ) $, where $ ( W _ {t} ( x ) ) $ is any Brownian motion such that $ W _ {0} ( x ) = x $ almost surely. The strong law of large numbers applies: if $ \{ ( W _ {t} ^ {i} ( x ) ) \} $ are independent copies of $ ( W _ {t} ( x ) ) $, one has, almost surely,

$$ u ( t,x ) = {\lim\limits } _ {N \rightarrow \infty } { \frac{1}{N} } \sum _ {i = 1 } ^ { N } u _ {0} ( W _ {t} ^ {i} ( x ) ) . $$

A Monte-Carlo procedure to compute $ u ( t, \cdot ) $ at a single point $ x $ consists in the simulation of $ N $ independent copies of $ W _ {t} ( x ) $ and the computation of

$$ { \frac{1}{N} } \sum _ {i = 1 } ^ { N } u _ {0} ( W _ {t} ^ {i} ( x ) ) . $$

This procedure requires the simulation of $ N \times d $ independent real-valued Gaussian random variables of mean $ 0 $ and variance $ t $ (cf. Random variable; Gaussian process). The convergence rate of the procedure is described by fluctuation results, large-deviation estimates, etc. (cf. also Probability of large deviations). The most elementary estimate concerns the standard deviation of the random error:

$$ { \frac{1}{\sqrt N } } \sqrt { {\mathsf E} \left | {u _ {0} ( W _ {t} ( x ) ) } \right | ^ {2} - \left | {u ( t,x ) } \right | ^ {2} } . $$
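A minimal sketch of this procedure, assuming (purely for illustration) NumPy for the Gaussian sampling; the dimension $ d = 2 $, the initial condition $ u _ {0} ( x ) = e ^ {- | x | ^ {2} } $ and the function names below are choices made here, not in the original text. The sample mean estimates $ u ( t,x ) $ and the empirical standard error estimates the quantity displayed above.

```python
import numpy as np

def heat_mc(u0, x, t, N, rng=None):
    """Monte-Carlo estimate of u(t,x) = E u0(W_t(x)) for the heat equation.

    W_t(x) = x + sqrt(t) Z with Z a standard d-dimensional Gaussian, so only
    N x d independent N(0,1) variables are simulated.  Returns the sample mean
    and an empirical estimate of the standard deviation of the random error.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = len(x)
    # N independent copies of W_t(x): each row is one sample of x + sqrt(t) Z
    samples = np.asarray(x) + np.sqrt(t) * rng.standard_normal((N, d))
    values = u0(samples)
    estimate = values.mean()
    std_error = values.std(ddof=1) / np.sqrt(N)   # ~ (1/sqrt(N)) sqrt(E|u0(W_t)|^2 - |u(t,x)|^2)
    return estimate, std_error

# Illustrative choices: d = 2, u0(x) = exp(-|x|^2)
u0 = lambda y: np.exp(-np.sum(y ** 2, axis=-1))
est, err = heat_mc(u0, x=np.array([0.5, -0.3]), t=1.0, N=100_000)
```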

More generally, let $ L $ be an integro-differential operator (cf. also Integro-differential equation), and consider the evolution problem

$$ { \frac{d}{dt } } u ( t,x ) = Lu ( t,x ) , \qquad u ( 0,x ) = u _ {0} ( x ) . $$

From now on, it is assumed that there is an $ \mathbf R ^ {d} $-valued Markov process $ ( X _ {t} ) $ such that $ u ( t,x ) = {\mathsf E} u ( 0,X _ {t} ( x ) ) $ (for a discussion of the assumptions that $ L $ must satisfy, see [a3], Chap. XIII, Sect. 3). Then, if it is possible to simulate independent copies of the process $ ( X _ {t} ( x ) ) $, one can approximate $ u ( t,x ) $ by

$$ { \frac{1}{N} } \sum _ {i = 1 } ^ { N } u _ {0} ( X ^ {i} _ {t} ( x ) ) . $$

For example, in neutronics, the process $ ( X _ {t} ) $ is the pair (position, velocity), and the velocity is a pure jump process (in this case, $ L $ is a transport operator) whose simulation is simple: one simulates, independently, the times at which the velocity changes (the waiting times between changes have an exponential law) and the new velocity drawn after each change.
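As an illustration only (the original text does not specify a model), the following sketch simulates one trajectory of such a (position, velocity) pair; the constant jump rate `lam` and the choice of new velocities uniform on the unit sphere are assumptions made here.

```python
import numpy as np

def simulate_transport(x0, v0, t, lam, rng=None):
    """Simulate one path of the (position, velocity) pair up to time t.

    The velocity is a pure jump process: the waiting times between jumps are
    exponential with rate lam, and at each jump a new velocity is drawn
    independently (here, uniformly on the unit sphere -- an illustrative choice).
    Between jumps the position moves in a straight line, dx/dt = v.
    """
    rng = np.random.default_rng() if rng is None else rng
    x, v, s = np.array(x0, float), np.array(v0, float), 0.0
    while True:
        tau = rng.exponential(1.0 / lam)        # waiting time until the next velocity change
        if s + tau >= t:
            return x + (t - s) * v, v           # no further jump before time t
        x, s = x + tau * v, s + tau
        g = rng.standard_normal(len(v))
        v = g / np.linalg.norm(g)               # new velocity, uniform on the unit sphere

x_t, v_t = simulate_transport(x0=[0.0, 0.0, 0.0], v0=[1.0, 0.0, 0.0], t=2.0, lam=3.0)
```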

Most often, the law of $ X _ {t} ( x ) $ cannot be simulated exactly, but it may be possible to approximate it suitably. For example, when $ ( X _ {t} ( x ) ) $ is the solution of a stochastic differential equation driven by a Brownian motion (in this case, $ L $ is a second-order differential operator), one can discretize the stochastic differential equation in time. The simplest discretization scheme (the Euler scheme) defines an approximate process $ ( X ^ {h} _ {t} ( x ) ) $ depending on the discretization step $ h $, whose simulation requires only independent Gaussian random variables. The approximation formula is

$$ u ( t,x ) \simeq { \frac{1}{N} } \sum _ {i = 1 } ^ { N } u _ {0} ( X _ {t} ^ {h,i } ( x ) ) . $$
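A sketch of the Euler scheme and the resulting estimator, for an illustrative scalar equation $ dX = b ( X ) dt + \sigma ( X ) dW $; the coefficients `b` and `sigma` and the parameter values below are hypothetical choices, not taken from the original text.

```python
import numpy as np

def euler_estimator(u0, b, sigma, x, t, h, N, rng=None):
    """Estimate u(t,x) = E u0(X_t(x)), where dX = b(X) dt + sigma(X) dW, X_0 = x.

    Each path is advanced with the Euler scheme
        X_{k+1} = X_k + b(X_k) h + sigma(X_k) sqrt(h) Z_k,   Z_k ~ N(0,1),
    so only independent Gaussian random variables are simulated.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.full(N, float(x))                   # N independent copies, advanced in parallel
    for _ in range(int(round(t / h))):
        Z = rng.standard_normal(N)
        X = X + b(X) * h + sigma(X) * np.sqrt(h) * Z
    return u0(X).mean()

# Illustrative coefficients: a linear drift and unit diffusion coefficient
approx = euler_estimator(u0=np.cos, b=lambda x: -x, sigma=lambda x: np.ones_like(x),
                         x=0.2, t=1.0, h=0.01, N=50_000)
```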

The global error of this procedure can be decomposed as the sum of

$$ u ( t,x ) - {\mathsf E} u _ {0} ( X ^ {h} _ {t} ( x ) ) $$

(the discretization error) and of

$$ {\mathsf E} u _ {0} ( X ^ {h} _ {t} ( x ) ) - { \frac{1}{N} } \sum _ {i = 1 } ^ { N } u _ {0} ( X _ {t} ^ {h,i } ( x ) ) $$

(the statistical error). Under appropriate assumptions, the discretization error can be expanded in terms of $ h $.
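For instance, for the Euler scheme with smooth coefficients and a smooth function $ u _ {0} $, the expansion is typically of first order in $ h $:

$$ u ( t,x ) - {\mathsf E} u _ {0} ( X ^ {h} _ {t} ( x ) ) = C h + O ( h ^ {2} ) , $$

where the constant $ C $ depends on the coefficients, on $ u _ {0} $ and on $ ( t,x ) $, but not on $ h $.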

The numerical procedure can be adapted to the solution of problems in bounded domains. Dirichlet boundary conditions lead to the simulation of processes stopped at the boundary. The simulation of reflected diffusion processes (cf. Diffusion process) solves parabolic or elliptic partial differential equations with suitable Neumann boundary conditions.
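As a hedged illustration of the Dirichlet case: for the model problem $ { \frac{1}{2} } \Delta u = 0 $ in a bounded domain $ D $ with $ u = g $ on $ \partial D $, one has $ u ( x ) = {\mathsf E} g ( W _ {\tau} ( x ) ) $, where $ \tau $ is the first exit time of the Brownian motion from $ D $. The crude stopping rule, the unit disc and the boundary data in the sketch below are choices made here, not in the original text.

```python
import numpy as np

def dirichlet_mc(g, inside, x, h, N, rng=None, max_steps=100_000):
    """Crude estimate of u(x) = E g(W_tau(x)) for (1/2) Laplace(u) = 0 in D, u = g on the boundary.

    Each Brownian path is advanced with time step h and stopped at the first
    discrete time it leaves D; this crude stopping rule adds its own boundary
    error on top of the discretization and statistical errors.
    inside(y) returns True while y lies in D; g is the boundary data.
    """
    rng = np.random.default_rng() if rng is None else rng
    total = 0.0
    for _ in range(N):
        y = np.array(x, float)
        for _ in range(max_steps):
            y = y + np.sqrt(h) * rng.standard_normal(len(y))
            if not inside(y):
                break
        total += g(y)
    return total / N

# Illustrative choices: the unit disc in R^2 with boundary data g(y) = y_1
est = dirichlet_mc(g=lambda y: y[0], inside=lambda y: float(np.dot(y, y)) < 1.0,
                   x=[0.3, 0.1], h=1e-3, N=2_000)
```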

Let $ L ^ {*} $ be the formal adjoint of $ L $. Note that the probability distribution $ \mu _ {t} ^ {x} $ of $ X _ {t} ( x ) $ is a solution to

$$ { \frac{d}{dt } } \mu _ {t} ^ {x} = L ^ {*} \mu _ {t} ^ {x} $$

in the sense of distributions (cf. Generalized function). The solution $ \mu _ {t} ^ {x} $ can be approximated by the empirical distribution

$$ { \frac{1}{N} } \sum _ {i = 1 } ^ { N } \delta _ {X _ {t} ^ {i} ( x ) } . $$

This procedure is used in neutronics.
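A brief sketch of this empirical approximation, reusing the heat-equation case above, so that $ X _ {t} ( x ) = W _ {t} ( x ) $ (an illustrative choice, here with $ d = 1 $): the distribution function or a histogram density estimate of $ \mu _ {t} ^ {x} $ can be read off the simulated samples.

```python
import numpy as np

# N samples of X_t(x); here X_t(x) = W_t(x) = x + sqrt(t) Z (illustrative choice, d = 1)
rng = np.random.default_rng()
x, t, N = 0.0, 1.0, 100_000
samples = x + np.sqrt(t) * rng.standard_normal(N)

# Empirical distribution function of mu_t^x, evaluated at a point y
F_emp = lambda y: np.mean(samples <= y)

# Histogram estimate of the density of mu_t^x on [-4, 4]
density, edges = np.histogram(samples, bins=80, range=(-4.0, 4.0), density=True)
```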

To certain non-linear partial differential equations one can associate processes $ ( X _ {t} ) $ which are not Markovian, but are such that the pairs $ ( X _ {t} , \mu _ {t} ) $ are Markovian, where $ \mu _ {t} $ is the probability law of the random variable $ X _ {t} $. Moreover, in some cases, the law $ P ^ {X} $ of the process $ ( X _ {t} ) $ can be described by means of the theory of propagation of chaos (see [a4] and the references therein). This means that the law $ P ^ {X} $ is the limit in probability, when $ N $ goes to infinity, of the empirical distributions of the paths of $ N $ particles in mean-field interaction. The interaction between the particles is related to the non-linear differential operator of the partial differential equation under consideration. If one can simulate the stochastic particle system, then an algorithm can be built from the formula

$$ \mu _ {t} \simeq { \frac{1}{N} } \sum _ {i = 1 } ^ { N } \delta _ {X _ {t} ^ {i} } , $$

where $ \{ X ^ {i} _ {t} \} $ are the locations at time $ t $ of the interacting particles. For example, consider an interaction kernel $ K $, a family of $ N $ independent Brownian motions, and the system

$$ X _ {t} ^ {i} = X _ {0} ^ {i} + \int\limits _ { 0 } ^ { t } { { \frac{1}{N} } \sum _ {j = 1 } ^ { N } K ( X _ {s} ^ {i} - X _ {s} ^ {j} ) } {ds } + \nu W _ {t} ^ {i} . $$

The time discretization of such systems permits one to approximate equations such as Vlasov–Poisson–Fokker–Planck equations (cf. Vlasov–Poisson–Fokker–Planck system), the viscous Burgers equation (cf. Diffusion equation; Turbulence, mathematical problems in), the incompressible Navier–Stokes equation for the vorticity in $ \mathbf R ^ {2} $ (cf. Navier–Stokes equations), etc. In the latter case, $ K $ is the Biot–Savart kernel, and the algorithm is Chorin's random vortex method (see [a2] for numerical considerations). Similarly, the simulation of interacting branching processes (cf. Branching process) permits one to approximate the solutions of convection-reaction-diffusion equations. One difficulty in the mathematical study of these methods is to show that the statistical errors (e.g., on the distribution function of $ \mu _ {t} $) have standard deviations of order $ {1 / {\sqrt N } } $ even though the particles are dependent. For theoretical aspects of probabilistic algorithms for partial differential equations, including the Boltzmann equation, see, for instance, [a1] and the references therein.
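A minimal sketch of an Euler-type time discretization of the interacting system above, with a hypothetical smooth kernel chosen here for illustration (the Biot–Savart kernel of the vortex method is singular and would require a regularization not shown):

```python
import numpy as np

def interacting_particles(K, X0, nu, t, h, rng=None):
    """Euler discretization of the mean-field system
        X^i_{k+1} = X^i_k + (1/N) sum_j K(X^i_k - X^j_k) h + nu sqrt(h) Z^i_k.

    X0 is an (N, d) array of initial positions; the empirical distribution of
    the returned positions approximates mu_t.
    """
    rng = np.random.default_rng() if rng is None else rng
    X = np.array(X0, float)
    N, d = X.shape
    for _ in range(int(round(t / h))):
        diffs = X[:, None, :] - X[None, :, :]     # pairwise differences X^i - X^j
        drift = K(diffs).mean(axis=1)             # (1/N) sum_j K(X^i - X^j)
        X = X + drift * h + nu * np.sqrt(h) * rng.standard_normal((N, d))
    return X

# Hypothetical smooth interaction kernel (not the Biot-Savart kernel): K(z) = z exp(-|z|^2)
K = lambda z: z * np.exp(-np.sum(z ** 2, axis=-1, keepdims=True))
X0 = np.random.default_rng(0).standard_normal((500, 2))
X_t = interacting_particles(K, X0, nu=0.5, t=1.0, h=0.05)
```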

References

[a1] C. Graham, T. Kurtz, S. Méléard, P. Protter, M. Pulvirenti, D. Talay, "Probabilistic models for nonlinear partial differential equations", Lecture Notes in Mathematics, 1627, Springer (1996)
[a2] "Vortex methods and vortex motions", K.E. Gustavson (ed.), J.A. Sethian (ed.), SIAM (1991)
[a3] J. Jacod, "Calcul stochastique et problèmes de martingales", Lecture Notes in Mathematics, 714, Springer (1979)
[a4] A.S. Sznitman, "Topics in propagation of chaos", P.L. Hennequin (ed.), Ecole d'Eté de Probabilités de Saint-Flour XIX (1989), Lecture Notes in Mathematics, 1464, Springer (1991) pp. 165–251
This article was adapted from an original article by D. Talay (originator), which appeared in Encyclopedia of Mathematics, ISBN 1402006098.