# Differential games

2010 Mathematics Subject Classification: Primary: 49N70 Secondary: 91A23

A branch of the mathematical theory of control (cf. Automatic control theory), the subject of which is control in conflict situations. The theory of differential games is also related to the general theory of games (cf. Games, theory of). The first studies in this theory appeared in the mid-1950s.

## Formulations of problems in the theory of differential games.

One distinguishes between games played by two players and those played by several players. Basic results have been obtained for the former case. Such problems may be described by the following scheme. There is a dynamical system in which one part of the controls is at the disposal of player I, while the other part is at the disposal of player II. In formulating the task facing player I (or player II) it is assumed that this player's selection of controls, which should enable him to attain a certain objective in the face of any previously unknown control by his opponent, may be based only on a certain amount of information about the current state of the system. The theory of differential games also studies control problems under uncertainty, when disturbances acting on the system are treated as moves of the opponent. For example, the task of player I may be formulated as follows. It is usually assumed that the motion of the controlled system is defined by a differential equation

$$\tag{1 } \dot{x} = f ( t , x , u , v ) ,$$

where $x$ is the phase vector (state) of the system, and $u$ and $v$ are the control vectors of players I and II, respectively. One defines the class of strategies ${\mathcal U}$ of player I, and, for each strategy $U \in {\mathcal U}$, a bundle of trajectories $X ( U)$ generated by this strategy in the face of all possible control moves of the opponent and starting at the initial state of the system (1). These concepts are chosen so as to correspond to the specified restrictions on the moves of the players and on the nature of the information about the current state of the system available to player I. Trajectories $x ( t)$, $t \geq t _ {0}$, of (1) determine the value of a functional $\gamma ( x ( \cdot ) )$ (the cost, or pay-off, of the game), which player I attempts to minimize ($\gamma$ may also depend on the realizations $u ( t)$, $v ( t)$, $t \geq t _ {0}$, of the players' moves). Considering the least favourable realization of the move $x ( \cdot ) \in X ( U)$, the choice of which is left to the opponent, the quality of a strategy $U \in {\mathcal U}$ is evaluated by the quantity

$$\kappa _ {1} ( U ) = \sup \{ \gamma ( x ( \cdot ) ) : x ( \cdot ) \in X ( U ) \} .$$

The problem of player I is to find a strategy $U _ {0} \in {\mathcal U}$ at which the minimum of the functional $\kappa _ {1}$ is attained (this is called the problem of degree). Occasionally the so-called problem of quality is considered; here the objective is to determine conditions for the existence of a strategy $U _ {c} \in {\mathcal U}$ satisfying the inequality $\kappa _ {1} ( U _ {c} ) \leq c$, where $c$ is some given number.
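The guaranteed-result evaluation of a strategy can be illustrated numerically. The following sketch is a hypothetical discrete-time toy model, not a construction from the theory above: it computes $\kappa_1(U)$ for a finite-horizon scalar analogue of (1) by enumerating all opponent responses.

```python
import itertools

# Toy illustration (hypothetical, not from the article): a discrete-time
# analogue of the guaranteed-result evaluation kappa_1(U).
# Dynamics x' = x + u + v with v drawn from a small finite set; the cost
# gamma is the terminal distance |x(T)|, which player I wants to minimize.

def rollout(strategy, v_seq, x0=1.0):
    """Run the toy system under the positional strategy u = strategy(t, x)."""
    x = x0
    for t, v in enumerate(v_seq):
        x = x + strategy(t, x) + v
    return abs(x)  # terminal cost gamma

def kappa1(strategy, steps=3, v_set=(-1.0, 0.0, 1.0)):
    """Guaranteed (worst-case) cost: sup of gamma over all opponent moves."""
    return max(rollout(strategy, v_seq)
               for v_seq in itertools.product(v_set, repeat=steps))

# A feedback strategy for player I: push the state toward 0, saturated at 1.
u0 = lambda t, x: max(-1.0, min(1.0, -x))
print(kappa1(u0))  # worst-case cost 1.0: the last disturbance cannot be undone
```

Any refinement of `u0` is pointless in this toy model: whatever player I does, the opponent's final move `v` sets the terminal state, so the guaranteed cost cannot drop below the largest admissible `|v|`.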

The problems facing player II, who maximizes the cost of the game, are formulated in a similar manner. The strategies $V \in {\mathcal V}$ of player II are evaluated by the quantity

$$\kappa _ {2} ( V) = \inf \{ \gamma ( x ( \cdot ) ) : x ( \cdot ) \in X ( V ) \} .$$

Here, the problem of degree is the selection of the strategy $V _ {0} \in {\mathcal V}$ which maximizes the value of the functional $\kappa _ {2}$, while the problem of quality is to find the conditions under which $\kappa _ {2} ( V _ {c} ) \geq c$ for some strategy $V _ {c} \in {\mathcal V}$.

If the classes of strategies ${\mathcal U}$ and ${\mathcal V}$ in the problems of the players I and II are such that for any pair $( U , V ) \in {\mathcal U} \times {\mathcal V}$ it is possible to determine at least one trajectory

$$x ( \cdot ) \in X ( U ) \cap X ( V )$$

generated by this pair, one says that these two problems constitute a differential game defined on the class of strategies ${\mathcal U} \times {\mathcal V}$. If the equality

$$\inf _ {U \in {\mathcal U} } \sup _ {x ( \cdot ) \in X ( U ) } \gamma ( x ( \cdot ) ) = \sup _ {V \in {\mathcal V} } \inf _ {x ( \cdot ) \in X ( V ) } \gamma ( x ( \cdot ) ) = c _ {0}$$

is satisfied in the differential game, $c _ {0}$ is said to be the value of the differential game.

A typical example of a differential game is the pursuit and evasion game. In this game $x = ( x _ {1} , \dots, x _ {k+l} ) = ( y _ {1} , \dots, y _ {k} , z _ {1} , \dots, z _ {l} )$, where $y$ and $z$ are the phase vectors of the pursuer and the pursued, respectively, whose motions are described by the equations

$$\dot{y} = g ( t , y , u ) ,\ \dot{z} = h ( t , z , v ) .$$

In the case most often considered the selection of the controls is restricted by limitations of the type

$$\tag{2 } u \in P ,\ v \in Q ,$$

where $P$ and $Q$ are certain compact sets. In this game the cost is the time prior to contact, i.e.

$$\gamma ( x ( \cdot ) ) = T ( x ( \cdot ) ) = \inf \{ t - t _ {0} : \| \{ y ( t ) \} _ {m} - \{ z ( t ) \} _ {m} \| \leq \epsilon \} ,$$

where $\{ y \} _ {m}$ and $\{ z \} _ {m}$ are the vectors composed of the first $m$ components of $y$ and $z$. Thus, contact is considered to have been established when the distance between $\{ y ( t) \} _ {m}$ and $\{ z ( t) \} _ {m}$ has reached a given value $\epsilon$. If the players have available information on the current position of the game $( t , x ( t))$, i.e. in the positional game of pursuit and evasion, the value of the game exists.
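As a rough numerical illustration of the capture-time pay-off $T(x(\cdot))$, the following sketch simulates a simple planar pursuit in which the pursuer is faster than the evader. The line-of-sight chase rule and all numerical values are assumptions made for the example, not constructions from the theory above.

```python
import math

# Hypothetical illustration of the pursuit-evasion cost T(x(.)): planar
# motion y' = a*u, z' = b*v with speed bound a > b, simulated by Euler
# steps; capture occurs when ||y - z|| <= eps.

def capture_time(y, z, a=2.0, b=1.0, eps=0.1, dt=0.01, t_max=50.0):
    """Pursuer chases along the line of sight; evader runs straight away."""
    t = 0.0
    while t < t_max:
        dx, dy = z[0] - y[0], z[1] - y[1]
        dist = math.hypot(dx, dy)
        if dist <= eps:
            return t  # contact established: gamma = t - t0 with t0 = 0
        # pursuer's control u: unit vector toward the evader
        y = (y[0] + a * dt * dx / dist, y[1] + a * dt * dy / dist)
        # evader's control v: unit vector directly away from the pursuer
        z = (z[0] + b * dt * dx / dist, z[1] + b * dt * dy / dist)
        t += dt
    return math.inf  # evasion succeeded on [0, t_max]

print(capture_time((0.0, 0.0), (1.0, 0.0)))
```

With the speed advantage $a - b = 1$ the gap closes at unit rate, so starting from distance $1$ with $\epsilon = 0.1$ the simulated capture time is close to $0.9$.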

## Formalization of differential games.

For a mathematical formalization of differential games, the concepts discussed above must be rigorously defined. The main subject studied in the theory of differential games are problems in which the position of the game is known to the players, while the moves are restricted as in (2). It is then natural to define the strategies of the players as functions $u = u ( t , x )$, $v = v ( t , x )$ with values in the compact sets $P$ and $Q$, respectively. It was found, however, that under this approach discontinuous strategies often have to be considered, while the moves generated by these strategies cannot be defined in terms of known concepts in the theory of differential equations. Below, formalizations in which positional strategies are not used are described first. Subsequently a formalization of positional differential games is given which admits discontinuous positional strategies and which is based on a special definition of moves.

Of the many trends in the theory of differential games, the first to be mentioned is a group of studies (see, for instance, , ), initiated by a paper of W. Fleming  and culminating in . They deal with the approximation of a differential game by multi-step games, in which the players select their moves sequentially (in steps) on given intervals of time $[ t _ {i} , t _ {i+1} )$, $i = 0 , \dots, N$. Attention is focused on the player who is the first to select his move at each step and announces his choice to his opponent. Depending on whether this player minimizes or maximizes the cost of the game, one distinguishes between majorant and minorant multi-step games. This approach reduces to proving the existence of the value of the differential game, which is here defined as the common value to which the values of the majorant and minorant games converge as the subdivision $[ t _ {i} , t _ {i+1} )$, $i = 0 , \dots, N$, is refined (as the number of steps is increased). However, the construction of positional strategies independent of the discrete time is usually ignored in this approach.
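The ordering of the majorant and minorant values can already be seen in a single-step game. The sketch below is a toy finite game, not Fleming's construction itself: it compares the two values obtained when one or the other player announces his move first.

```python
# Hypothetical one-step illustration of majorant vs. minorant values.
# In the majorant game the minimizing player announces his move first
# (min over u of max over v); in the minorant game the maximizer does
# (max over v of min over u). In the differential-game setting, refining
# the time partition drives the two values together; here we only show
# the ordering majorant >= minorant.

def game_values(payoff, U, V):
    majorant = min(max(payoff(u, v) for v in V) for u in U)
    minorant = max(min(payoff(u, v) for u in U) for v in V)
    return majorant, minorant

# payoff gamma(u, v) = (u - v)^2 on small finite control sets
U = V = [-1.0, 0.0, 1.0]
hi, lo = game_values(lambda u, v: (u - v) ** 2, U, V)
print(hi, lo)
assert hi >= lo  # announcing first never helps the announcer
```

For this pay-off the gap is strict: the maximizer who moves first reveals $v$ and the minimizer matches it ($u = v$, value $0$), whereas the minimizer who moves first concedes the worst response (value $1$ at $u = 0$).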

L.S. Pontryagin proposed a formulation of game control problems (see, for instance, , , , ) which allows for informational discrimination of the opponent, i.e. it is assumed in formulating the problem of player I (or player II) that this player knows not only the game position $( t , x ( t))$ being realized, but also the move of his opponent $v ( \tau )$ (or $u ( \tau )$) that will be selected during the interval $[ t , t + \delta ]$, where $\delta$ is a small positive number. In this way the course of the game can be conveniently described, which in turn makes it possible to construct a rigorous mathematical theory for a large number of problems involving control under conditions of conflict. However, the introduction of informational discrimination gives player I an informational advantage over his opponent as well as imposing restrictions on the behaviour of the latter; in particular, the opponent is unable to form his moves according to the feedback principle

$$v [ t] = v ( t , x ( t) )$$

(or $u [ t] = u ( t , x ( t) )$). In a meaningful interpretation of the results obtained under conditions of discrimination of the opponent, one may use information on the move realized by the opponent during the interval $[ t - \delta , t )$ rather than on his move during the interval $[ t , t + \delta )$. Such an exchange can be readily justified for systems in which

$$f ( t , x , u , v ) = f _ {1} ( t , x , u ) + f _ {2} ( t , x , v ) .$$

The practical applications of the fundamental theoretical results concerning conditions of discrimination of the opponent are obtained as a result of an appropriate treatment of positional differential games (see the description of a control procedure with a guide, which is given below).

The formalization of positional differential games was developed by N.N. Krasovskii et al. . Here the subject of study are the positional strategies $U$ and $V$, i.e. the functions $u = u ( t , x )$ and $v = v ( t , x )$ whose values are contained in the compact sets $P$ and $Q$, respectively. No continuity conditions are imposed on these functions. The moves $x ( t , t _ {0} , x _ {0} , U )$ ($t \geq t _ {0}$, $x ( t _ {0} ) = x _ {0}$) generated by the strategy $U \div u ( t , x )$ of player I are defined as the limits, uniform on any interval $[ t _ {0} , T ]$, of sequences of approximating moves $x _ {k} ( t)$, $k = 1, 2, \dots,$ which are absolutely-continuous functions satisfying the equation

$$\tag{3 } \dot{x} _ {k} ( t) = f ( t , x _ {k} ( t) , u _ {k} [ t] , v _ {k} [ t] ) ,$$

where $v _ {k} [ t] \in Q$ ($t \geq t _ {0}$) are arbitrary measurable functions,

$$u _ {k} [ t] = u ( t _ {i} ^ {( k) } , x _ {k} ( t _ {i} ^ {( k) } ) ) , \ t _ {i} ^ {( k) } \leq t < t _ {i+1} ^ {( k) } ,\ i = 0 , 1 , \dots,$$

and

$$\sup _ { i } ( t _ {i+1} ^ {( k) } - t _ {i} ^ {( k) } ) \rightarrow 0 \ \textrm{ as } k \rightarrow \infty .$$

The moves $x ( t , t _ {0} , x _ {0} , V )$ ($t \geq t _ {0}$, $x ( t _ {0} ) = x _ {0}$) are introduced in a similar manner. If the strategies and the moves are thus defined, the following situation holds.
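The construction of the approximating moves can be sketched numerically. The following toy example uses scalar dynamics and strategies chosen purely for illustration: it builds the Euler polygon $x_k(t)$ with player I's strategy frozen on each cell of a partition whose mesh shrinks as $k$ grows.

```python
# A sketch (assumptions: scalar dynamics f(t,x,u,v) = u + v, toy strategy
# and disturbance) of the approximating moves x_k(t): player I's positional
# strategy is evaluated only at the left end t_i of each partition cell
# [t_i, t_{i+1}), while v_k[t] stands for the opponent's measurable control.

def approximating_motion(u_of_tx, v_of_t, x0, t0, T, n_steps):
    """Euler polygon for x' = u(t_i, x(t_i)) + v(t) with piecewise-constant u."""
    dt = (T - t0) / n_steps
    t, x = t0, x0
    path = [x]
    for _ in range(n_steps):
        u = u_of_tx(t, x)          # strategy frozen at the cell's left end
        x = x + dt * (u + v_of_t(t))
        t += dt
        path.append(x)
    return path

# u drives the state toward 0; v is a constant disturbance
u = lambda t, x: -1.0 if x > 0 else 1.0
v = lambda t: 0.5
coarse = approximating_motion(u, v, x0=2.0, t0=0.0, T=1.0, n_steps=10)
fine = approximating_motion(u, v, x0=2.0, t0=0.0, T=1.0, n_steps=1000)
print(coarse[-1], fine[-1])
```

Here the state stays positive on $[0, 1]$, so the frozen control equals the exact feedback and both polygons end near $x(1) = 1.5$; the interesting (discontinuous) cases arise when the strategy switches inside a cell.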

Alternative 1. Let the equality

$$\tag{4 } \min _ {u \in P } \max _ {v \in Q } s ^ \prime f ( t , x , u , v ) = \max _ {v \in Q } \min _ {u \in P } s ^ \prime f ( t , x , u , v ) ,$$

where $s ^ \prime f$ is the scalar product of the vectors $s$ and $f$, be valid for arbitrary $( t , x) \in \mathbf R ^ {n+1}$ and $s \in \mathbf R ^ {n}$. Then, for any closed sets $M _ {c} \subset \mathbf R ^ {n+1}$ and $N _ {c} \subset \mathbf R ^ {n+1}$, any initial position $( t _ {0} , x _ {0} )$ and any moment $\theta \geq t _ {0}$, one of the following two statements is always true. Either there exists a strategy $U _ {*}$ such that, for any move $x ( t) = x ( t , t _ {0} , x _ {0} , U _ {*} )$, the point $( t , x ( t) )$ falls in $M _ {c}$ at the moment $t = \theta$ and remains in $N _ {c}$ up to the moment of contact with $M _ {c}$; or else there exists a strategy $V _ {*}$ which ensures, for any move $x ( t) = x ( t , t _ {0} , x _ {0} , V _ {*} )$, either that the point $( t , x ( t) )$ fails to fall in $M _ {c}$ for $t _ {0} \leq t \leq \theta$, or that the phase restriction $( t , x ( t)) \in N _ {c}$ is violated before the point $( t , x ( t) )$ falls in $M _ {c}$.
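Condition (4) can be checked numerically on a grid. In the sketch below (a hypothetical scalar example) the dynamics are separated in $u$ and $v$, as in the decomposition $f = f_1 + f_2$ mentioned earlier; for such systems the two optimizations decouple, so the saddle-point equality holds in every direction $s$.

```python
import random

# Numerical check of the saddle-point condition (4) for a hypothetical
# scalar dynamics with separated controls, f(u, v) = f1(u) + f2(v):
# min_u max_v s'f = max_v min_u s'f, since for fixed s the inner and
# outer optimizations act on independent terms.

def minmax_gap(f, U, V, s):
    sf = lambda u, v: s * f(u, v)          # scalar case: s'f = s * f
    lhs = min(max(sf(u, v) for v in V) for u in U)
    rhs = max(min(sf(u, v) for u in U) for v in V)
    return lhs - rhs

f_sep = lambda u, v: (u ** 3 - u) + (2 * v)   # separated in u and v
U = [i / 10 for i in range(-10, 11)]
V = [i / 10 for i in range(-10, 11)]
random.seed(0)
gaps = [minmax_gap(f_sep, U, V, random.uniform(-1, 1)) for _ in range(20)]
print(max(abs(g) for g in gaps))   # zero up to floating-point error
```

Replacing `f_sep` with a genuinely coupled term such as `u * v` makes the gap strictly positive for some directions `s`, which is exactly the situation where counter-strategies or mixed strategies (discussed below in the article) become necessary.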

The study of many types of differential games, for which the existence of the value of the game follows from alternative 1, reduces to solving such an approach-evasion game. In formalizing positional differential games, other definitions of a strategy and of a move are also possible. Thus, in dealing with discontinuous strategies, it may be convenient to replace the discontinuous right-hand side of the differential equation by a multi-valued one and use the apparatus of equations in contingencies (differential inclusions), or else to approximate discontinuous strategies . However, such attempts may be unsuccessful. Examples are known in which the optimal solution of a differential game, obtained by the above formalization, cannot be obtained or even approximated in the framework of other formalizations based on equations in contingencies or on continuous strategies.

If condition (4) is violated, one distinguishes between a formulation of a differential game in the class of strategies of one player and counter-strategies of the other player, to which corresponds a deterministic solution of the differential game, and a formulation in the class of mixed strategies of both players, whose content is revealed by approximating the mixed strategies by corresponding stochastic control procedures. Counter-strategies $U _ {v} , V _ {u}$ are identified with functions $u = u ( t , x , v ) \in P$, $v = v ( t , x , u ) \in Q$ which are Borel functions in the variables $v$ and $u$, respectively. The moves $x ( t , t _ {0} , x _ {0} , U _ {v} )$ ($t \geq t _ {0}$, $x ( t _ {0} ) = x _ {0}$) are defined as the limits of the approximating moves $x _ {k} ( t)$ of (3), where

$$u _ {k} [ t] = u ( t _ {i} ^ {( k) } , x _ {k} ( t _ {i} ^ {( k) } ) , v _ {k} [ t] ) , \ t _ {i} ^ {( k) } \leq t < t _ {i+1} ^ {( k) } ,\ i = 0 , 1 , \dots;$$

the moves $x ( t , t _ {0} , x _ {0} , V _ {u} )$ are defined in a similar manner. Mixed strategies $\widetilde{U}$, $\widetilde{V}$ are identified with functions $\mu = \mu ( t , x )$, $\nu = \nu ( t , x )$ whose values are probability measures on the compact sets $P$ and $Q$, respectively. The moves $x ( t , t _ {0} , x _ {0} , \widetilde{U} )$ ($t \geq t _ {0}$, $x ( t _ {0} ) = x _ {0}$) are defined as the limits of the approximating moves $x _ {k} ( t)$ ($t \geq t _ {0}$) satisfying the equation

$$\dot{x} _ {k} ( t) = \int\limits _ { P } \int\limits _ { Q } f ( t , x _ {k} ( t) , u , v ) \, d \mu _ {k} [ t] \, d \nu _ {k} [ t] ,$$

where

$$\mu _ {k} [ t] = \mu ( t _ {i} ^ {( k) } , x _ {k} ( t _ {i} ^ {( k) } )) ,\ t _ {i} ^ {( k) } \leq t < t _ {i+1} ^ {( k) } ,\ i = 0 , 1 , \dots,$$

and where $\nu _ {k} [ t]$ is some weakly measurable function. The moves $x ( t , t _ {0} , x _ {0} , \widetilde{V} )$ are introduced in a similar manner.

Alternative 2 (alternative 3) is always valid for differential games of the approach-evasion type. It is obtained from alternative 1 by replacing the strategy of one of the players by the corresponding counter-strategy (respectively, by replacing the strategies $U _ {*}$ and $V _ {*}$ by mixed strategies $\widetilde{U} _ {*}$ and $\widetilde{V} _ {*}$). For differential games reducible to an approach-evasion game and considered in the class of positional strategies of one of the players and counter-strategies of his opponent (in the class of mixed strategies $\{ \widetilde{U} \}$ and $\{ \widetilde{V} \}$), the existence of the value of the differential game is proved without assuming condition (4). Solutions of differential games in these strategy classes may prove to be unstable with respect to informational disturbances. The procedure of control with a guide  is introduced to stabilize such solutions. Here, a reference (etalon) system, the guide, is considered along with the real system; the motion of the former is modelled in a computer and is known to any required degree of accuracy. The control of the player in the real system and the moves of the guide are formed so that the motions of the real and the modelled systems mutually track each other, the guide being kept on some bridge connecting the initial position with the target set. Such a control procedure proves to be stable with respect to measurement errors and to disturbances acting on the system. In modelling the motion of the guide, constructions allowing for discrimination of the opponent can be utilized.

## Methods of the theory of differential games.

One method for solving differential games was proposed by R. Isaacs . This approach is related to the method of dynamic programming and is based on the integration of a special partial differential equation, whose solution determines the value $c _ {0} ( t , x )$ of the game as a function of the initial position. The optimal strategies of player I (or II) are chosen so as to ensure a non-increasing (respectively, non-decreasing) value of the game along the motions of the system they generate. However, the value of the game is often a discontinuous function of the position of the game, so that the adoption of this approach requires a special investigation of solutions near the surfaces of discontinuity of the function $c _ {0} ( t , x )$ or of its partial derivatives. A complete study of the singularities of this method, with its justification, proves to be very difficult, and has been possible only in a few isolated cases.
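In modern notation Isaacs' equation for the value function is usually written as a Hamilton–Jacobi-type equation. The schematic form below, for a terminal pay-off $\gamma(x(\theta))$, is a standard textbook statement supplied here for orientation, not a formula from the original article:

```latex
% Schematic Hamilton--Jacobi--Isaacs equation for the value c_0(t,x)
% of a game with terminal pay-off \gamma(x(\theta)); under condition (4)
% the min and max below may be interchanged.
\frac{\partial c_0}{\partial t}
  + \min_{u \in P} \max_{v \in Q}
    \left\langle \frac{\partial c_0}{\partial x},\, f(t,x,u,v) \right\rangle = 0,
\qquad c_0(\theta, x) = \gamma(x).
```

The difficulties mentioned above arise precisely because $c_0$ need not be differentiable, so this equation can only be understood in a suitably generalized sense.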

Linear differential games, i.e. the case

$$f ( t , x , u , v ) = A ( t) x + B ( t) u + C ( t) v ,$$

where $A$, $B$ and $C$ are continuous matrices of the corresponding dimensions, have been most thoroughly studied. For the linear problems of pursuit and evasion, Krasovskii formulated the rule of extremal aiming , which served to solve such problems (without resort to discrimination of the opponent) subject to a condition of regularity and, in particular, in the case of single-type objects. The elements of this solution may be described as follows. Let $W ( t _ {*} , \theta , G )$ be the set of programmed absorption, i.e. the totality of points $x _ {*} \in \mathbf R ^ {n}$ for which to every control $v ( t) \in Q$ there corresponds a control $u ( t) \in P$, $t _ {*} \leq t \leq \theta$, such that this pair of controls transfers the system from the state $x ( t _ {*} ) = x _ {*}$ to $G \subset \mathbf R ^ {n}$ at the moment $t = \theta$. One introduces the function

$$\epsilon _ {0} ( t , x , \theta ) = \inf \{ \epsilon : x \in W ( t , \theta , M _ \epsilon ) \} ,$$

where $M _ \epsilon$ is the Euclidean $\epsilon$-neighbourhood of the target set $M$. In the domain $\Gamma _ \theta = \{ ( t , x ) : \epsilon _ {0} ( t , x , \theta ) > 0 \}$ the quantity $\epsilon _ {0} ( t , x , \theta )$ is given by the relation:

$$\tag{5 } \epsilon _ {0} ( t , x , \theta ) = \max _ { l } \kappa ( l , t , x , \theta ) ,\ l \in \mathbf R ^ {n} ,\ \| l \| \leq 1 ,$$

where the function $\kappa$ can be expressed simply in terms of the support functions of the sets $P$, $Q$ and the convex set $M$. If $\kappa$ is convex with respect to the variable $l$ (a strong regularity condition) in the domain $\Gamma _ \theta$, the maximum in (5) is attained at a unique vector $l _ {0} ( t , x , \theta )$, which is selected as the boundary value condition in the maximum principle determining the choice of the extremal control $u _ {0}$. The strategy $U _ {0} \div u _ {0} ( t , x )$ set up in this way ensures that $M$ is hit at the moment $t = \theta$ from any position $( t _ {0} , x _ {0} )$ with $\epsilon _ {0} ( t _ {0} , x _ {0} , \theta ) = 0$. These constructions are based on information about the position of the game alone, and can be carried out effectively on a computer. Approach problems may also be solved with the aid of programmed constructions under weakened regularity conditions.
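For the simplest dynamics the function $\epsilon_0$ can be written in closed form. The sketch below assumes simple motion $\dot{x} = u + v$ with $\|u\| \leq a$, $\|v\| \leq b$, $a > b$, and target $M = \{0\}$, a drastic simplification of the linear systems treated above; under these assumptions the programmed absorption set is a ball and $\epsilon_0$ reduces to a distance formula.

```python
import math

# Illustration under strong simplifying assumptions (simple motion, not
# the general linear system of the article): x' = u + v, ||u|| <= a,
# ||v|| <= b, a > b, target M = {0}. The programmed-absorption set
# W(t, theta, M_eps) is then the ball of radius eps + (a - b)(theta - t),
# so eps_0(t, x, theta) = max(0, ||x|| - (a - b)(theta - t)).

def eps0(t, x, theta, a=2.0, b=1.0):
    """Smallest eps with x in W(t, theta, M_eps), for simple motion."""
    slack = (a - b) * (theta - t)
    return max(0.0, math.hypot(*x) - slack)

# From ||x0|| = 3 with one unit of speed advantage per unit time:
print(eps0(0.0, (3.0, 0.0), 3.0))   # 0.0: M itself is reachable at theta = 3
print(eps0(0.0, (3.0, 0.0), 2.0))   # 1.0: guaranteed miss distance at theta = 2
```

The extremal control in this toy case simply points along $-x$; the vector $l_0$ of (5) is the corresponding unit direction.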

Linear pursuit problems under different regularity assumptions were studied by E.F. Mishchenko, Pontryagin and B.N. Pshenichnyi , ; see, in particular,  for the solution of a linear pursuit problem under conditions of so-called local convexity, for which the above strong regularity condition applies.

One of the most convenient methods for solving game control problems is the direct method. In problems where this method is applicable, the control of the player is chosen so that, for any counter-move of the opponent, a decrease of some auxiliary quantity takes place, leading to a successful termination of the game. If the objects are of a single type ($C = - B$, $Q = \lambda P$, $0 \leq \lambda \leq 1$) and under conditions of discrimination of the opponent, the direct method for solving the pursuit problem determines the choice of the control of player I as a sum of two terms, one of which imitates the control of the opponent, while the other coincides with the solution of the programmed problem of time-optimal transfer of the system

$$\dot{x} = A ( t) x + B ( t) w ,\ w \in ( 1 - \lambda ) P ,$$

into the target set $M$. Pontryagin  was the first to describe the direct method for the linear pursuit problem. Subsequently, conditions under which the direct method gives optimal results were found . The direct method was then developed for solving non-linear problems, and for problems with integral restrictions , . In all these studies the direct method was employed under conditions of discrimination of the opponent; it was subsequently also developed for solving positional games .

The evasion problem. The purpose of this problem is to determine conditions under which the pursued object can evade contact with the pursuer for all $t \geq t _ {0}$. The study of this problem was initiated by Pontryagin and Mishchenko , , who found solvability conditions for the linear evasion problem and estimated the least distance between the pursuing and the pursued points. This approach was subsequently extended to other types of evasion problems.

The concept of an alternating integral, proposed by Pontryagin , made it possible to describe the structure of the set of initial positions from which the linear pursuit game can be terminated at a given moment of time $t = \theta$. An alternating integral is defined as the limit of a recurrent procedure in which the initial set $A _ {0}$ coincides with the target set $M \subset \mathbf R ^ {n}$, while each subsequent set $A _ {i+1}$ is determined from the preceding one by the operation of programmed absorption, i.e. $A _ {i+1} = W ( \tau _ {i+1} , \tau _ {i} , A _ {i} )$, where $\tau _ {i+1} = \tau _ {i} - \Delta$, $\tau _ {0} = \theta$, and $\Delta > 0$ is the step of the recurrent procedure. Pshenichnyi  also used programmed absorption as an elementary operation in the investigation of the structure of differential pursuit games; in contrast to the preceding case, here the only requirement was that the duration of the transition from points $x \in A _ {i+1}$ to $A _ {i}$ does not exceed the number $\Delta$, but may, in general, differ from it. Such a recurrent procedure permits one, in the general case of a non-linear system, to describe the structure of the set of positions from which the pursuit game can be terminated by a given moment of time.
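The recurrent structure of the alternating integral can be illustrated in one dimension. The following sketch assumes scalar simple motion with symmetric interval sets, so each set $A_i$ is described by a single radius; this is a toy model of the recursion, not Pontryagin's general construction.

```python
# A toy version of the alternating-integral recursion, assuming scalar
# simple motion x' = u + v, |u| <= a, |v| <= b, a > b, and target
# M = [-m, m]. Sets are symmetric intervals, represented by their radius.
# One backward step of programmed absorption expands the radius by
# a*Delta (the pursuer's reachable set) and then shrinks it by b*Delta
# (geometric difference with the evader's reachable set).

def alternating_integral_radius(m, theta, n_steps, a=2.0, b=1.0):
    """Radius of A_N after N backward steps of length Delta = theta/N."""
    delta = theta / n_steps
    r = m                               # A_0 = M = [-m, m]
    for _ in range(n_steps):
        r = r + a * delta - b * delta   # A_{i+1} = W(tau_{i+1}, tau_i, A_i)
        if r < 0:                       # empty set: pursuit cannot succeed
            return None
    return r

# With speed advantage a - b = 1 the radius grows by theta regardless of
# the step count, matching the exact solvability set [-m - theta, m + theta]:
print(alternating_integral_radius(0.5, 3.0, 10))
print(alternating_integral_radius(0.5, 3.0, 1000))
```

In this interval model the expansion and contraction commute, so refining the partition changes nothing; in higher dimensions with non-spherical sets the alternation of the two operations genuinely matters, which is why the limit is taken.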

An extremal construction for the solution of positional differential games (see, for example, ) has also been proposed. This approach is employed both for solving concrete examples and for proving general statements, in particular the alternatives 1–3 above. For instance, in solving approach problems with $M _ {c} \subset \mathbf R ^ {n+1}$ in the class of strategies $U \div u ( t , x )$ subject to condition (4), according to the extremal construction a set $W _ {u} \subset \mathbf R ^ {n+1}$ is selected in the space of positions; this set forms a bridge connecting the initial position with the target set $M _ {c}$, and is entirely contained in the set $N _ {c}$. The bridge has the so-called property of $u$-stability, i.e. for any $( t _ {*} , x _ {*} ) \in W _ {u}$, $v _ {*} \in Q$ and $t ^ {*} \geq t _ {*}$ there exists a solution $x ( t)$ of the equation in contingencies

$$\dot{x} ( t) \in \omega \{ f ( t , x ( t) , u , v _ {*} ) : u \in P \} ,\ \ x ( t _ {*} ) = x _ {*} ,$$

for which either $( t ^ {*} , x ( t ^ {*} ) ) \in W _ {u}$ or $( \tau , x ( \tau ) ) \in M _ {c}$ for a certain $\tau \in [ t _ {*} , t ^ {*} ]$. One introduces the strategy $U ^ {( e)} \div u ^ {( e)} ( t , x)$ extremal with respect to the bridge $W _ {u}$; it is defined by the relation

$$\min _ {u \in P } \max _ {v \in Q } ( x - w ) ^ \prime f ( t , x , u , v ) = \max _ {v \in Q } ( x - w ) ^ \prime f ( t , x , u ^ {( e)} ( t , x ) , v ) ,$$

where $w$ is a vector for which $( t , w)$ is the point of $W _ {u}$ nearest to the position $( t , x)$. The extremal strategy retains the moves on the bridge $W _ {u}$ and thus supplies the solution of the approach problem.

The fundamental step in this construction is the determination of suitable stable bridges, after which the construction of extremal strategies presents no special difficulties. The use of the known recurrent procedures for the determination of such bridges is hampered by major computational difficulties, so that the search for effective methods for the construction of stable bridges is important. The direct method is one of them. Another is the construction of stable bridges in the form of regular programmed absorption sets. In the non-linear case programmed absorption is defined with the aid of special control measures (see, e.g., ). If the programmed absorption is regular, the solution of conflict control problems can be reduced to computer-realizable algorithms.

## Main research trends in differential games.

The results presented above mainly concern differential games in which the motion is described by the ordinary differential equation (1) with control vectors subject to the restriction (2). Similar results have been obtained for game problems of control in which the moves are described by ordinary differential equations with deviating arguments (cf. Differential equations, ordinary, with distributed arguments), and also for problems involving integral restrictions on the controls of the players .

In the study of the structure of differential games, problems of conflict control in which the moves are determined by a generalized dynamical system (see, for example, , ) are of interest. Differential games have been studied in the class of quasi-strategies, which yield the control of the player in response to the control of his opponent; iteration procedures have also been proposed to determine the value function and stable bridges .

Not only differential games with two players, but also $N$-person differential games are studied. In the formulation of $N$-person differential games it is usually assumed that, in choosing his control, each player attempts to minimize a certain functional defined on the trajectories of the controlled system. The players may only employ information about the current positions of the game. For instance, a typical problem is to determine conditions under which such games have a Nash equilibrium point in a given class of player strategies.

Game problems of control with incomplete information form an important part of the theory of differential games. In such problems it is assumed that the incompleteness of the information consists in ignorance of some of the components of the phase vector $x ( t)$, in measurement of the current position of the game with a certain delay, or in an inaccurate determination of the location of the phase point $x ( t)$; both the case where only the admissible range of the measurement error is known and the case where some statistical description of this error is given are possible.

How to Cite This Entry:
Differential games. Encyclopedia of Mathematics. URL: http://encyclopediaofmath.org/index.php?title=Differential_games&oldid=51022
This article was adapted from an original article by M.S. Nikol'skii, A.I. Subbotin (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098. See original article