Dynamic game
A variant of a positional game distinguished by the fact that the players control the "motion of a point" in the state space $X$. Let $I = \{i\}$ be the set of players. To each point $x \in X$ corresponds a set $S_i^{(x)}$ of elementary strategies of player $i \in I$ at this point, and hence also the set $S^{(x)} = \prod_i S_i^{(x)}$ of elementary situations at $x$. The transition distribution functions
$$ F ( x_k \mid x_1, s^{(x_1)}, \dots, x_{k-1}, s^{(x_{k-1})} ), \qquad x_i \in X, \ s^{(x_i)} \in S^{(x_i)}, $$
representing the law of motion of the controlled point, which is known to all players, are defined on $X$. If $x_k$ is fixed, the function $F$ is measurable with respect to all the remaining arguments. A sequence $P$ of successive states and elementary situations $x_1, s^{(x_1)}, \dots, x_k, s^{(x_k)}, \dots$ is a play of a general dynamic game. It is inductively defined as follows: Let there be given a segment of the play (an opening) $x_1, s^{(x_1)}, \dots, x_{k-1}$ ($k \geq 2$), and let each player $i$ choose his elementary strategy $s_i^{(x_{k-1})} \in S_i^{(x_{k-1})}$, so that the elementary situation $s^{(x_{k-1})}$ arises; the game then passes, at random, in accordance with the distribution $F(\cdot \mid x_1, s^{(x_1)}, \dots, x_{k-1}, s^{(x_{k-1})})$, into the state $x_k$. For each play $P$ the pay-off $h_i(P)$ of player $i$ is defined. If the set of all plays is denoted by $\mathfrak{P}$, the dynamic game is specified by the system
$$ \Gamma = \langle I, X, \{ S_i^{(x)} \}_{i \in I, x \in X}, F, \{ h_i(P) \}_{i \in I, P \in \mathfrak{P}} \rangle. $$
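For orientation, consider the simplest special case (the transition probabilities $p(y \mid x, s^{(x)})$ below are introduced here only for illustration): if $X$ and all the sets $S_i^{(x)}$ are finite and the law of motion depends only on the last state and the elementary situation chosen there, then $F$ is determined by a family of transition probabilities

$$ p(y \mid x, s^{(x)}) = {\mathsf P} \{ x_k = y \mid x_{k-1} = x, \ s^{(x_{k-1})} = s^{(x)} \}, \qquad \sum_{y \in X} p(y \mid x, s^{(x)}) = 1, $$

and, with a suitable choice of pay-offs, this is essentially the setting of a stochastic game (cf. the special classes mentioned below).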
In a dynamic game it is usually assumed that, at the successive moments of selection of an elementary strategy, the players know the preceding opening. In such a case a pure strategy $s_i$ of player $i$ is a family of functions $s_i^{(x)}(x_1, s^{(x_1)}, \dots, s^{(x_{k-1})}, x)$ which assign to each opening ending in $x$ an elementary strategy $s_i^{(x)} \in S_i^{(x)}$. Dynamic games in which the preceding opening is only partly known to the players (e.g. games with "information lag") have also been studied.
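For example, the simplest pure strategies are those in which the chosen elementary strategy depends only on the current state and not on the rest of the opening (the notation $\sigma_i$ is used here only for illustration):

$$ s_i^{(x)}(x_1, s^{(x_1)}, \dots, s^{(x_{k-1})}, x) = \sigma_i(x), \qquad \sigma_i(x) \in S_i^{(x)}, $$

for every opening ending in $x$; such strategies are usually called stationary.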
For the game to be specified, each situation $s = \{s_i\}$ must induce a probability measure $\mu_s$ on the set of all plays, and the mathematical expectation ${\mathsf E} h_i(P)$ with respect to the measure $\mu_s$ must exist; this expectation is then the pay-off of player $i$ in the situation $s$.
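For example, in the simplest case (given here only for illustration) where, in a situation $s$, the play necessarily ends after the first transition and $h_i(P)$ depends only on the state $x_2$ reached, the measure $\mu_s$ is determined by the distribution of $x_2$, and

$$ {\mathsf E} h_i(P) = \int\limits_X h_i(x_2) \, dF(x_2 \mid x_1, s^{(x_1)}). $$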
In general, the functions $h_i(P)$ are arbitrary, but the most frequently studied dynamic games are those with terminal pay-off (the game terminates as soon as $x_k$ falls in a terminal set $X^T \subset X$, and $h_i(P) = h_i(x_k)$, where $x_k$ is the last state of the game) and those with integral pay-off ($h_i(P) = \sum_{k=1}^\infty h_i(x_k, s^{(x_k)})$).
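For instance, for a play $P = x_1, s^{(x_1)}, x_2, s^{(x_2)}, x_3$ which terminates because $x_3 \in X^T$, these two definitions give

$$ h_i(P) = h_i(x_3) \ \ \textrm{(terminal pay-off)}, \qquad h_i(P) = h_i(x_1, s^{(x_1)}) + h_i(x_2, s^{(x_2)}) \ \ \textrm{(integral pay-off)}. $$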
Dynamic games may be regarded as the game-theoretic variant of the problem of optimal control with discrete time; indeed, a dynamic game reduces to such a problem when there is only one player. If, in a dynamic game, $X \subset \mathbf{R}^n$, continuous time is substituted for discrete time and the random factors are eliminated, a differential game is obtained; a differential game may thus be regarded as a variant of a dynamic game (see also Differential games).
Stochastic games, recursive games and survival games are special classes of dynamic games (cf. also Stochastic game; Recursive game; Game of survival).
References
[1] N.N. Vorob'ev, "The present state of the theory of games", Russian Math. Surveys 25:2 (1970), pp. 77–136; Uspekhi Mat. Nauk 25:2 (1970), pp. 81–140.