# Lie symmetry analysis

*with symbolic software*

The Norwegian mathematician S. Lie pioneered the study of continuous transformation groups (cf. Lie transformation group) that leave systems of differential equations invariant. As a result of Lie's work [a2], [a3], diverse and ad hoc integration methods for solving special classes of differential equations came under a common conceptual umbrella. For ordinary differential equations (ODEs), Lie's infinitesimal transformation method provides a widely applicable technique to find closed-form similarity solutions. Nearly all standard solution methods for first-order or linear ODEs can be characterized in terms of symmetries. Through the group classification of ODEs, Lie also succeeded in identifying all ODEs that can either be reduced to lower-order ones or be completely integrated via group-theoretic techniques.

Applied to partial differential equations (PDEs), Lie's method leads to group-invariant solutions and conservation laws. Exploiting the symmetries of PDEs, new solutions can be derived from known ones, and PDEs can be classified into equivalence classes. Furthermore, group-invariant solutions obtained via Lie's approach may provide insight into the physical models themselves, and explicit solutions can serve as benchmarks in the design, accuracy testing, and comparison of numerical algorithms.

Lie's original ideas had great potential to profoundly influence the study of physically important systems of differential equations. However, the application of Lie group methods to concrete physical systems involves tedious and unwieldy computations. Even the calculation of the continuous symmetry group of a modest system of differential equations is prone to errors, if done with pencil and paper. The availability of computer algebra systems (such as Mathematica or Maple) has changed all that. There now exist many symbolic packages that can aid in the computation of Lie symmetries and similarity solutions of differential equations. Sophisticated packages not only automatically compute the system of determining equations of the Lie symmetry group, but also reduce these into an equivalent yet more suitable system, subsequently solve it in closed form, and go on to calculate the infinitesimal generators that span the Lie algebra of symmetries. In [a1], detailed information is given about numerous Lie symmetry computer packages, together with a review of their strengths and weaknesses.

The classical Lie symmetry group of a system of differential equations is a local group of point transformations, meaning diffeomorphisms on the space of independent and dependent variables, that map solutions of the system into other solutions.


## Elementary examples of Lie point symmetries.

### Example 1.

This example illustrates the concept of Lie's method. It is well known that homogeneous first-order ODEs, like

$$ \tag{a1 } y ^ \prime = { \frac{y ^ {2} + 2 x y }{x ^ {2} } } , $$

can be simplified upon substitution of $ y = x v ( x ) $. Indeed, (a1) then reduces to the separable equation $ x v ^ \prime = v + v ^ {2} $, which can readily be integrated; this yields the solution $ y ( x ) = { {c x ^ {2} } / {( 1 - cx ) } } $ of (a1), where $ c $ is the integration constant.
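The claimed solution is easy to verify symbolically; the sketch below uses the Python library sympy (an illustrative choice, not part of the original exposition):

```python
import sympy as sp

x, c = sp.symbols('x c')

# claimed closed-form solution of (a1): y(x) = c x^2 / (1 - c x)
y = c*x**2 / (1 - c*x)

# residual of y' = (y^2 + 2 x y) / x^2 should vanish identically
residual = sp.diff(y, x) - (y**2 + 2*x*y) / x**2
print(sp.simplify(residual))  # 0
```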

Lie realized that the substitution $ y = x v $ leads to a separable equation because (a1) is invariant under the one-parameter group of scaling transformations, with parameter $ \epsilon $:

$$ {\widetilde{x} } ( \epsilon ) = x { \mathop{\rm exp} } ( \epsilon ) , {\widetilde{y} } ( \epsilon ) = y { \mathop{\rm exp} } ( \epsilon ) , $$

which obviously leaves invariant the quantity

$$ v = { \frac{y}{x} } = { \frac{ {\widetilde{y} } }{ {\widetilde{x} } } } . $$

### Example 2.

Consider the Riccati equation

$$ \tag{a2 } y ^ \prime + y ^ {2} - { \frac{2}{x ^ {2} } } = 0, $$

which is invariant under the one-parameter group of transformations, $ {\widetilde{x} } ( \epsilon ) = x { \mathop{\rm exp} } ( \epsilon ) , $ $ {\widetilde{y} } ( \epsilon ) = y { \mathop{\rm exp} } ( {- \epsilon } ) . $ Hence, if $ y = f ( x ) $ solves (a2), then $ {\widetilde{y} } ( {\widetilde{x} } ) = { \mathop{\rm exp} } ( - \epsilon ) f ( {\widetilde{x} } { \mathop{\rm exp} } ( - \epsilon ) ) $ solves (a2) with tilde on all the variables. Hence, starting with a known solution, Lie's method yields a family of new solutions. Quite often, interesting solutions can be obtained from trivial ones.
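This generation of new solutions from known ones can be checked with sympy. The seed used below is the general solution of (a2), obtainable e.g. via the substitution $ y = u ^ \prime / u $ (an assumption of this sketch, not stated in the text):

```python
import sympy as sp

x, eps, c = sp.symbols('x epsilon c')

# general solution of the Riccati equation (a2), taken as given here
f = (2*x**3 - c) / (x*(x**3 + c))
assert sp.simplify(sp.diff(f, x) + f**2 - 2/x**2) == 0

# group-transformed solution: exp(-eps) * f(x * exp(-eps))
g = sp.exp(-eps) * f.subs(x, x*sp.exp(-eps))
assert sp.simplify(sp.diff(g, x) + g**2 - 2/x**2) == 0  # still solves (a2)
```

Note that the transformation merely rescales the integration constant ($ c \mapsto c e ^ {3 \epsilon } $), which is exactly how Lie's method produces a one-parameter family of solutions from a single seed.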

### Example 3.

This example shows that Lie's method is applicable to PDEs, such as the linear heat equation,

$$ \tag{a3 } { \frac{\partial u }{\partial t } } - { \frac{\partial ^ {2} u }{\partial x ^ {2} } } = u _ {t} - u _ {xx } = 0, $$

which admits, amongst several others, the one-parameter group of combined scalings:

$$ {\widetilde{x} } ( \epsilon ) = x { \mathop{\rm exp} } ( \epsilon ) , {\widetilde{t} } ( \epsilon ) = t { \mathop{\rm exp} } ( {2 \epsilon } ) , {\widetilde{u} } ( \epsilon ) = u. $$

Therefore, if $ u = f ( x,t ) $ solves (a3), so will

$$ u = f ( x { \mathop{\rm exp} } ( {- \epsilon } ) ,t { \mathop{\rm exp} } ( {- 2 \epsilon } ) ) . $$

A less obvious symmetry group of (a3) is determined by $ {\widetilde{x} } ( \epsilon ) = x + 2 \epsilon t $, $ {\widetilde{t} } ( \epsilon ) = t $, $ {\widetilde{u} } ( \epsilon ) = u { \mathop{\rm exp} } ( - \epsilon x - \epsilon ^ {2} t ) $, which expresses that $ u = { \mathop{\rm exp} } ( - \epsilon x + \epsilon ^ {2} t ) f ( x - 2 \epsilon t, t ) $ is a solution to (a3) when $ u = f ( x,t ) $ is.
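Both symmetry claims can be verified with sympy; the seed $ f ( x,t ) = { \mathop{\rm exp} } ( x + t ) $ below is a simple solution of (a3), chosen here for illustration:

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon')

f = lambda X, T: sp.exp(X + T)   # seed: f_t - f_xx = 0

# scaling symmetry of the heat equation
u1 = f(x*sp.exp(-eps), t*sp.exp(-2*eps))
assert sp.simplify(sp.diff(u1, t) - sp.diff(u1, x, 2)) == 0

# the less obvious symmetry
u2 = sp.exp(-eps*x + eps**2*t) * f(x - 2*eps*t, t)
assert sp.simplify(sp.diff(u2, t) - sp.diff(u2, x, 2)) == 0
```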

## Computation of Lie point symmetries.

There are two major methods to compute Lie symmetries. The first method, which is implemented in most Lie symmetry packages, uses prolonged vector fields; the second utilizes Cartan's exterior calculus.

The steps of the prolongation method can be summarized as follows.

For a system of $ m $ differential equations,

$$ \tag{a4 } \Delta ^ {i} ( \mathbf x, \mathbf u ^ {( k ) } ) = 0, i = 1 \dots m, $$

of arbitrary order $ k $, with $ p $ independent variables $ \mathbf x = ( x _ {1} \dots x _ {p} ) \in \mathbf R ^ {p} $ and $ q $ dependent variables $ \mathbf u = ( u ^ {1} \dots u ^ {q} ) \in \mathbf R ^ {q} $, the partial derivatives of $ u ^ {l} $ are represented using a multi-index notation,

$$ \tag{a5 } u _ {\mathbf J} ^ {l} \equiv { \frac{\partial ^ {\left | \mathbf J \right | } u ^ {l} }{\partial x _ {1} ^ {j _ {1} } \dots \partial x _ {p} ^ {j _ {p} } } } , $$

where for $ \mathbf J = ( j _ {1} \dots j _ {p} ) \in \mathbf N ^ {p} $, $ | \mathbf J | = j _ {1} + \dots + j _ {p} $, and $ \mathbf u ^ {( k ) } $ stands for the vector whose components are the partial derivatives up to order $ k $ of all $ u ^ {l} $.

The group transformations, parametrized by $ \epsilon, $ have the form $ {\widetilde{\mathbf x} } ( \epsilon ) = \Lambda _ {G} ( \mathbf x, \mathbf u , \epsilon ) $, $ {\widetilde{\mathbf u} } ( \epsilon ) = \Omega _ {G} ( \mathbf x, \mathbf u , \epsilon ) $, where the functions $ \Lambda _ {G} $ and $ \Omega _ {G} $ are to be determined. Lie realized that the one-parameter Lie group $ G $ can be completely recovered from the knowledge of the linear terms in the Taylor series of $ \Lambda _ {G} $ and $ \Omega _ {G} $:

$$ {\widetilde{x} } _ {i} ( \epsilon ) = x _ {i} + \epsilon \left . { \frac{\partial \Lambda _ {G} ( \mathbf x, \mathbf u , \epsilon ) }{\partial \epsilon } } \right | _ {\epsilon = 0 } + {\mathcal O} ( \epsilon ^ {2} ) = x _ {i} + \epsilon \eta ^ {i} ( \mathbf x, \mathbf u ) + {\mathcal O} ( \epsilon ^ {2} ) , \quad i = 1 \dots p, $$

$$ {\widetilde{u} } {} ^ {l} ( \epsilon ) = u ^ {l} + \epsilon \left . { \frac{\partial \Omega _ {G} ( \mathbf x, \mathbf u , \epsilon ) }{\partial \epsilon } } \right | _ {\epsilon = 0 } + {\mathcal O} ( \epsilon ^ {2} ) = u ^ {l} + \epsilon \varphi _ {l} ( \mathbf x, \mathbf u ) + {\mathcal O} ( \epsilon ^ {2} ) , \quad l = 1 \dots q, $$

where $ {\widetilde{\mathbf x} } ( 0 ) = \mathbf x $ and $ {\widetilde{\mathbf u} } ( 0 ) = \mathbf u $.
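For instance, for the scaling group of Example 1 (with $ y $ playing the role of the dependent variable $ u $), these formulas give $ \eta ( x, y ) = x $ and $ \varphi ( x, y ) = y $. A quick symbolic check with sympy:

```python
import sympy as sp

x, y, eps = sp.symbols('x y epsilon')

Lam = x*sp.exp(eps)   # Lambda_G for the scaling group of Example 1
Om = y*sp.exp(eps)    # Omega_G

# the linear terms of the Taylor series in eps are the infinitesimals
eta = sp.diff(Lam, eps).subs(eps, 0)
phi = sp.diff(Om, eps).subs(eps, 0)
print(eta, phi)  # x y
```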

Therefore, in the method of prolonged vector fields, [a2], [a3], instead of considering the Lie group $ G $, one concentrates on its Lie algebra $ {\mathcal L} $, realized by vector fields of the form

$$ \tag{a6 } \alpha = \sum _ {i = 1 } ^ { p } \eta ^ {i} ( \mathbf x, \mathbf u ) { \frac \partial {\partial x _ {i} } } + \sum _ {l = 1 } ^ { q } \varphi _ {l} ( \mathbf x, \mathbf u ) { \frac \partial {\partial u ^ {l} } } . $$

To determine the coefficients $ \eta ^ {i} ( \mathbf x, \mathbf u ) $ and $ \varphi _ {l} ( \mathbf x, \mathbf u ) $ one has to construct the $ k $th prolongation $ { \mathop{\rm pr} } ^ {( k ) } \alpha $ of the vector field $ \alpha $ (cf. also Prolongation of solutions of differential equations), apply it to the system (a4), and make the resulting expression vanish on the solution set of (a4).

The result is a system of linear homogeneous PDEs for $ \eta ^ {i} $ and $ \varphi _ {l} , $ in which $ \mathbf x $ and $ \mathbf u $ are treated as independent variables. That system is called the determining or defining system for the symmetries. Solution of the system by hand, interactively or automatically with a symbolic package, yields the explicit forms of $ \eta ^ {i} ( \mathbf x, \mathbf u ) $ and $ \varphi _ {l} ( \mathbf x, \mathbf u ) $.

This sounds straightforward, but the method involves tedious calculations. In particular, the complexity of the expressions for the prolongations increases rapidly as the order $ k $ increases.

## Algorithm for Lie point symmetries.

The technical steps of the algorithm for the computation of Lie point symmetries are:

### Step 1.

Construct the $ k $ th prolongation of the vector field $ \alpha $ in (a6) by means of the formula

$$ \tag{a7 } { \mathop{\rm pr} } ^ {( k ) } \alpha = \alpha + \sum _ {l = 1 } ^ { q } \sum _ { \mathbf J } \psi _ {l} ^ {\mathbf J} ( \mathbf x, \mathbf u ^ {( k ) } ) { \frac \partial {\partial u _ {\mathbf J} ^ {l} } } , 1 \leq \left | \mathbf J \right | \leq k, $$

where the coefficients $ \psi _ {l} ^ {\mathbf J} $ are defined as follows. The coefficients of the first prolongation are:

$$ \tag{a8 } \psi _ {l} ^ {\mathbf J _ {i} } = D _ {i} \varphi _ {l} ( \mathbf x, \mathbf u ) - \sum _ {j = 1 } ^ { p } u _ {\mathbf J _ {j} } ^ {l} D _ {i} \eta ^ {j} ( \mathbf x, \mathbf u ) , $$

where $ \mathbf J _ {i} $ is a $ p $-tuple with $ 1 $ in the $ i $th position and zeros elsewhere, and $ D _ {i} $ is the total derivative operator

$$ \tag{a9 } D _ {i} = { \frac \partial {\partial x _ {i} } } + \sum _ {l = 1 } ^ { q } \sum _ { \mathbf J } u _ {\mathbf J + \mathbf J _ {i} } ^ {l} { \frac \partial {\partial u _ {\mathbf J} ^ {l} } } , 0 \leq \left | \mathbf J \right | \leq k. $$

The higher-order prolongations are defined recursively as

$$ \tag{a10 } \psi _ {l} ^ {\mathbf J + \mathbf J _ {i} } = D _ {i} \psi _ {l} ^ {\mathbf J} - \sum _ {j = 1 } ^ { p } u _ {\mathbf J + \mathbf J _ {j} } ^ {l} D _ {i} \eta ^ {j} ( \mathbf x, \mathbf u ) , \left | \mathbf J \right | \geq 1. $$

### Step 2.

Apply the prolonged operator $ { \mathop{\rm pr} } ^ {( k ) } \alpha $ to each equation $ \Delta ^ {i} ( \mathbf x, \mathbf u ^ {( k ) } ) $ and require that

$$ \tag{a11 } { \mathop{\rm pr} } ^ {( k ) } \alpha \Delta ^ {i} \mid _ {\Delta ^ {j} = 0 } = 0, \quad i,j = 1 \dots m. $$

Condition (a11) expresses that $ { \mathop{\rm pr} } ^ {( k ) } \alpha $ vanishes on the solution set of the system (a4). More precisely, this condition assures that $ \alpha $ is an infinitesimal symmetry generator of the group transformation $ {\widetilde{\mathbf x} } = \Lambda _ {G} ( \mathbf x, \mathbf u , \epsilon ) $, $ {\widetilde{\mathbf u} } = \Omega _ {G} ( \mathbf x, \mathbf u , \epsilon ) $. Hence, $ {\widetilde{\mathbf u} } ( {\widetilde{\mathbf x} } ) $ is a solution of (a4) whenever $ \mathbf u ( \mathbf x ) $ is one.

### Step 3.

Choose $ m $ components of the vector $ \mathbf u ^ {( k ) } $, say $ v ^ {1} \dots v ^ {m} $, such that:

i) each $ v ^ {i} $ is a derivative of some $ u ^ {l} $ ($ l = 1 \dots q $) with respect to at least one variable $ x _ {j} $ ($ j = 1 \dots p $);

ii) none of the $ v ^ {i} $ is the derivative of another one in the set;

iii) the system (a4) can be solved algebraically for the $ v ^ {i} $ in terms of the remaining components of $ \mathbf u ^ {( k ) } $, which are denoted by $ \mathbf w $; thus, $ v ^ {i} = S ^ {i} ( \mathbf x, \mathbf w ) $, $ i = 1 \dots m $;

iv) the derivatives of $ v ^ {i} $, $ v ^ {i} _ {\mathbf J} = D _ {\mathbf J} S ^ {i} ( \mathbf x, \mathbf w ) $, where $ D _ {\mathbf J} \equiv D _ {1} ^ {j _ {1} } \dots D _ {p} ^ {j _ {p} } $, can be expressed in terms of the components of $ \mathbf w $ and their derivatives, without ever re-introducing the $ v ^ {i} $ or their derivatives.

The requirements in Step 3 put some restrictions on the system (a4), but for many systems the choice of the appropriate $ v ^ {i} $ is quite obvious. For example, for a system of evolution equations (cf. Evolution equation)

$$ \tag{a12 } { \frac{\partial {u ^ {i} } }{\partial t } } ( x _ {1} \dots x _ {p - 1 } , t ) = F ^ {i} ( x _ {1} \dots x _ {p - 1 } , t, \mathbf u ^ {( k ) } ) , $$

$$ i = 1 \dots m, $$

where $ \mathbf u ^ {( k ) } $ involves derivatives with respect to the variables $ x _ {i} $ but not $ t $, an appropriate choice is

$$ v ^ {i} = { \frac{\partial u ^ {i} }{\partial t } } . $$

### Step 4.

Use $ v ^ {i} = S ^ {i} ( \mathbf x, \mathbf w ) $ to eliminate all $ v ^ {i} $ and their derivatives from the expression (a11), so that all the remaining variables are now independent of each other. It is tacitly assumed that the resulting expression is now a polynomial in the $ u _ {\mathbf J} ^ {l} $.

### Step 5.

Obtain the determining equations for $ \eta ^ {i} ( \mathbf x, \mathbf u ) $ and $ \varphi _ {l} ( \mathbf x, \mathbf u ) $ by equating to zero the coefficients of all functionally independent expressions (monomials) in the remaining derivatives $ u _ {\mathbf J} ^ {l} $.

In the above algorithm the variables $ x _ {i} $, $ u ^ {l} $ and $ u _ {\mathbf J} ^ {l} $ are treated as independent; the dependent ones are $ \eta ^ {i} $ and $ \varphi _ {l} . $

In summary: First, one generates the so-called determining or defining equations for the symmetries of the system. Secondly, one solves these by hand, interactively or automatically with a symbolic package, to determine the explicit forms of the $ \eta ^ {i} ( \mathbf x, \mathbf u ) $ and $ \varphi _ {l} ( \mathbf x, \mathbf u ) $.
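As a concrete instance of Steps 1–5, the sketch below generates the determining equations of the heat equation (a3) with sympy, writing $ \alpha = \xi \partial _ {x} + \tau \partial _ {t} + \varphi \partial _ {u} $. The truncation order of the total derivatives and all variable names are choices made here, not part of any standard package API:

```python
import sympy as sp

x, t = sp.symbols('x t')
u, ux, ut, uxx, uxt, utt, uxxx, uxxt = sp.symbols(
    'u u_x u_t u_xx u_xt u_tt u_xxx u_xxt')

# unknown infinitesimals of a point symmetry, depending on (x, t, u) only
xi = sp.Function('xi')(x, t, u)
tau = sp.Function('tau')(x, t, u)
phi = sp.Function('phi')(x, t, u)

# total derivative operators (a9), truncated at the order needed here
def Dx(F):
    return (sp.diff(F, x) + ux*sp.diff(F, u) + uxx*sp.diff(F, ux)
            + uxt*sp.diff(F, ut) + uxxx*sp.diff(F, uxx) + uxxt*sp.diff(F, uxt))

def Dt(F):
    return (sp.diff(F, t) + ut*sp.diff(F, u) + uxt*sp.diff(F, ux)
            + utt*sp.diff(F, ut) + uxxt*sp.diff(F, uxx))

# prolongation coefficients (a8) and (a10), specialized to p = 2, q = 1
phi_x = Dx(phi) - ux*Dx(xi) - ut*Dx(tau)
phi_t = Dt(phi) - ux*Dt(xi) - ut*Dt(tau)
phi_xx = Dx(phi_x) - uxx*Dx(xi) - uxt*Dx(tau)

# Step 2: apply pr alpha to u_t - u_xx; Steps 3-4: eliminate u_t = u_xx
cond = sp.expand((phi_t - phi_xx).subs({ut: uxx, uxt: uxxx, utt: uxxt}))

# Step 5: coefficients of independent monomials are the determining equations
for eq in sp.Poly(cond, ux, uxx, uxxx).coeffs():
    print(sp.simplify(eq), '= 0')
```

Among the printed equations one recognizes, for example, $ 2 \tau _ {x} = 0 $ and $ 2 \tau _ {u} = 0 $ (the coefficients of $ u _ {xxx} $ and $ u _ {x} u _ {xxx} $), the start of the classical determining system for the heat equation.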

From the Lie algebra of symmetry generators, one can obtain the Lie group of point transformations upon integration of a system of first-order characteristic equations. A detailed review of innovative ways of classifying, subsequently reducing, and finally solving overdetermined systems of linear homogeneous PDEs is given in [a1].

## Lie symmetry software.

To design a reliable and powerful integration algorithm for a system of determining equations the system needs to be brought into a standard form. Standard form procedures can be viewed as generalizations to systems of linear PDEs of the Gaussian reduction method (cf. Gauss method) for matrices or linear systems, except that integrability conditions are also added to the system. In essence, the standard (or involutive) form of a system of PDEs is an equivalent simplified ordered triangular system with all integrability conditions included and all redundancies (differential and algebraic) eliminated.

Customized yet sophisticated symbolic code in MACSYMA, Maple, and REDUCE exists for that purpose. The algorithms of the major Lie symmetry packages have roots in the Riquier–Janet theory of differential equations (to transform a linear system of PDEs into involutive form). Modern implementations of "triangulation" algorithms use a differential version of the Gröbner basis algorithm for algebraic equations. Parenthetically, Lie's group theory for differential equations also mirrors Galois' theory for solving algebraic equations. The group of point transformations for an ODE in Lie theory plays the role of the permutation group of solutions in Galois theory. Both group structures provide insight into the existence and types of solutions.

Triangulation algorithms may be used to bypass the explicit integration of the determining equations and compute the dimension of the Lie symmetry group and the commutators immediately. Once systems are reduced to standard involutive form, subsequent integration is more tractable and reliable. One could use separation of variables, standard techniques for linear differential equations, and specific heuristic rules as given in [a1]. The only determining equations left for manual handling should be the "constraint" equations or any other equations whose general solutions cannot be written explicitly in closed form.

## Worked example.

To illustrate the computation of Lie point symmetries, consider a PDE due to H. Dym and M.D. Kruskal:

$$ \tag{a13 } u _ {t} - u ^ {3} u _ {xxx } = 0. $$

Clearly, this is a single equation with two independent variables, $ x _ {1} = x $, $ x _ {2} = t $, and one dependent variable, $ u ^ {1} = u $.

Symmetry software will automatically generate the determining equations for the coefficients $ \eta ^ {1} $, $ \eta ^ {2} $ and $ \varphi _ {1} $ of the vector field

$$ \alpha = \eta ^ {1} { \frac \partial {\partial x _ {1} } } + \eta ^ {2} { \frac \partial {\partial x _ {2} } } + \varphi _ {1} { \frac \partial {\partial u ^ {1} } } . $$

There are only eight determining equations:

$$ { \frac{\partial \eta ^ {2} }{\partial u ^ {1} } } = 0, { \frac{\partial \eta ^ {2} }{\partial x _ {1} } } = 0, { \frac{\partial \eta ^ {1} }{\partial u ^ {1} } } = 0, { \frac{\partial ^ {2} \varphi _ {1} }{\partial ( u ^ {1} ) ^ {2} } } = 0, $$

$$ { \frac{\partial ^ {2} \varphi _ {1} }{\partial u ^ {1} \partial x _ {1} } } - { \frac{\partial ^ {2} \eta ^ {1} }{\partial ( x _ {1} ) ^ {2} } } = 0, $$

$$ { \frac{\partial \varphi _ {1} }{\partial x _ {2} } } - ( u ^ {1} ) ^ {3} { \frac{\partial ^ {3} \varphi _ {1} }{\partial ( x _ {1} ) ^ {3} } } = 0, $$

$$ 3 ( u ^ {1} ) ^ {3} { \frac{\partial ^ {3} \varphi _ {1} }{\partial u ^ {1} \partial ( x _ {1} ) ^ {2} } } + { \frac{\partial \eta ^ {1} }{\partial x _ {2} } } - ( u ^ {1} ) ^ {3} { \frac{\partial ^ {3} \eta ^ {1} }{\partial ( x _ {1} ) ^ {3} } } = 0, $$

$$ u ^ {1} { \frac{\partial \eta ^ {2} }{\partial x _ {2} } } - 3 u ^ {1} { \frac{\partial \eta ^ {1} }{\partial x _ {1} } } + 3 \varphi _ {1} = 0. $$

Without intervention of the user, these determining equations are then solved explicitly. The general solution, rewritten in the original variables, is

$$ \eta ^ {1} = k _ {1} + k _ {3} x + k _ {5} x ^ {2} , $$

$$ \eta ^ {2} = k _ {2} - 3 k _ {4} t, $$

$$ \varphi _ {1} = ( k _ {3} + k _ {4} + 2 k _ {5} x ) u, $$

where $ k _ {1} \dots k _ {5} $ are arbitrary constants. The five infinitesimal generators are:

$$ G _ {1} = \partial _ {x} , $$

$$ G _ {2} = \partial _ {t} , $$

$$ G _ {3} = x \partial _ {x} + u \partial _ {u} , $$

$$ G _ {4} = - 3 t \partial _ {t} + u \partial _ {u} , $$

$$ G _ {5} = x ^ {2} \partial _ {x} + 2 x u \partial _ {u} . $$
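The reported general solution can be checked against the eight determining equations directly; the sympy sketch below mirrors the text's notation ($ x _ {1} = x $, $ x _ {2} = t $, $ u ^ {1} = u $):

```python
import sympy as sp

x1, x2, u1 = sp.symbols('x1 x2 u1')          # x, t, u
k1, k2, k3, k4, k5 = sp.symbols('k1:6')

eta1 = k1 + k3*x1 + k5*x1**2
eta2 = k2 - 3*k4*x2
phi1 = (k3 + k4 + 2*k5*x1)*u1

# the eight determining equations of (a13), in the order listed above
determining = [
    sp.diff(eta2, u1),
    sp.diff(eta2, x1),
    sp.diff(eta1, u1),
    sp.diff(phi1, u1, 2),
    sp.diff(phi1, u1, x1) - sp.diff(eta1, x1, 2),
    sp.diff(phi1, x2) - u1**3*sp.diff(phi1, x1, 3),
    3*u1**3*sp.diff(phi1, u1, x1, x1) + sp.diff(eta1, x2)
        - u1**3*sp.diff(eta1, x1, 3),
    u1*sp.diff(eta2, x2) - 3*u1*sp.diff(eta1, x1) + 3*phi1,
]
assert all(sp.expand(eq) == 0 for eq in determining)
```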

Clearly, (a13) is invariant under translations ( $ G _ {1} $ and $ G _ {2} $) and scaling ( $ G _ {3} $ and $ G _ {4} $). The flow corresponding to each of the infinitesimal generators can be obtained via simple integration. As an example, the flow corresponding to $ G _ {5} $ is computed. This requires integration of the first-order system

$$ { \frac{d {\widetilde{x} } }{d \epsilon } } = { {\widetilde{x} } } ^ {2} , { {\widetilde{x} } } ( 0 ) = x, $$

$$ { \frac{d {\widetilde{t} } }{d \epsilon } } = 0, { {\widetilde{t} } } ( 0 ) = t, $$

$$ { \frac{d {\widetilde{u} } }{d \epsilon } } = 2 {\widetilde{x} } {\widetilde{u} } , { {\widetilde{u} } } ( 0 ) = u, $$

where $ \epsilon $ is the parameter of the transformation group. One readily obtains

$$ {\widetilde{x} } ( \epsilon ) = { \frac{x}{( 1 - \epsilon x ) } } , {\widetilde{t} } ( \epsilon ) = t, {\widetilde{u} } ( \epsilon ) = { \frac{u}{( 1 - \epsilon x ) ^ {2} } } . $$
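The closed-form flow can be confirmed by substituting it back into the characteristic system (a sympy sketch):

```python
import sympy as sp

eps, x, u = sp.symbols('epsilon x u')

# flow of G5 as obtained above; ttilde(eps) = t is trivial
X = x/(1 - eps*x)         # xtilde(eps)
U = u/(1 - eps*x)**2      # utilde(eps)

assert sp.simplify(sp.diff(X, eps) - X**2) == 0     # d xtilde/d eps = xtilde^2
assert sp.simplify(sp.diff(U, eps) - 2*X*U) == 0    # d utilde/d eps = 2 xtilde utilde
assert X.subs(eps, 0) == x and U.subs(eps, 0) == u  # initial conditions
```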

Therefore, one concludes that for any solution $ u = f ( x,t ) $ of (a13) the transformed solution

$$ {\widetilde{u} } ( {\widetilde{x} } , {\widetilde{t} } ) = ( 1 + \epsilon {\widetilde{x} } ) ^ {2} f ( { \frac{ {\widetilde{x} } }{1 + \epsilon {\widetilde{x} } } } , {\widetilde{t} } ) $$

will solve

$$ {\widetilde{u} } _ { {\widetilde{t} } } - { {\widetilde{u} } } ^ {3} { {\widetilde{u} } } _ { {\widetilde{x} } {\widetilde{x} } {\widetilde{x} } } = 0. $$
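As a concrete check, one can seed this transformation with $ f ( x,t ) = ( x + 4t/9 ) ^ {2/3 } $, a simple exact solution of (a13) chosen here for illustration (it is not taken from the text); its $ G _ {5} $-transform again solves the equation:

```python
import sympy as sp

x, t, eps = sp.symbols('x t epsilon', positive=True)

# seed solution of u_t = u^3 u_xxx (an illustrative choice; verified below)
f = lambda X, T: (X + sp.Rational(4, 9)*T)**sp.Rational(2, 3)
u = f(x, t)
assert sp.simplify(sp.diff(u, t) - u**3*sp.diff(u, x, 3)) == 0

# solution transformed under the flow of G5
v = (1 + eps*x)**2 * f(x/(1 + eps*x), t)
residual = sp.diff(v, t) - v**3*sp.diff(v, x, 3)

# numerical spot check of the residual at a sample point
point = {x: sp.Rational(1, 2), t: sp.Rational(1, 3), eps: sp.Rational(1, 5)}
assert abs(float(residual.subs(point))) < 1e-10
```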

## Beyond Lie point symmetries.

For the computation of generalized symmetries or Lie–Bäcklund symmetries [a2], [a3], the use of symbolic programs is even more appropriate, since the calculations are lengthier and more time consuming. In a generalized vector field, which still takes the form of (a6), the functions $ \eta ^ {i} $ and $ \varphi _ {l} $ may now depend on a finite number of derivatives of $ \mathbf u $.

Lie symmetry packages have proven to be an effective tool in solving overdetermined systems of linear and non-linear PDEs in the study of various Lie symmetries. Yet, no general algorithm is available to integrate an arbitrary (overdetermined) system of determining equations that consists of linear homogeneous PDEs for the components of $ \pmb\eta $ and $ \pmb\varphi $. Most computer programs still use some heuristic rules for the integration of the determining system.

The availability of sophisticated symbolic programs for Lie symmetry computations certainly will accelerate the study of symmetries of physically important systems of differential equations in classical mechanics, fluid dynamics, elasticity, and other applied areas.

#### References

[a1] W. Hereman, "Symbolic software for Lie symmetry analysis", in N.H. Ibragimov (ed.), *CRC Handbook of Lie Group Analysis of Differential Equations, Vol. 3: New Trends in Theoretical Developments and Computational Methods*, CRC Press (1996), Chapt. 13, pp. 367–413

[a2] P.J. Olver, "Applications of Lie groups to differential equations", 2nd ed., Graduate Texts in Mathematics 107, Springer (1993)

[a3] H. Stephani, "Differential equations: their solution using symmetries", Cambridge Univ. Press (1989)

**How to Cite This Entry:**

Lie symmetry analysis. *Encyclopedia of Mathematics.* URL: http://encyclopediaofmath.org/index.php?title=Lie_symmetry_analysis&oldid=47632