
Stochastic game



A dynamic game in which the transition distribution function depends not on the whole history of the game but only on the current state and the elementary strategies chosen in it, i.e.

$$ F( x _ {k} \mid x _ {1} , s ^ {( x _ {1} ) } , \dots, x _ {k-1} , s ^ {( x _ {k-1} ) } ) = F( x _ {k} \mid x _ {k-1} , s ^ {( x _ {k-1} ) } ). $$

Stochastic games were first defined by L.S. Shapley [1], who studied two-person zero-sum stochastic games with real-valued pay-off (Shapley games). In Shapley games, both the set $ X $ of states of the game and the sets of pure strategies of the players are finite, and at any step and for any choice of alternatives made by the players there is a non-zero probability of terminating the game. As a result of this condition, the game terminates with probability 1 in a finite number of steps, and the mathematical expectation of each player's pay-off is finite. Any such game has a value, and both players have stationary optimal strategies, i.e. strategies in which the player's choice of the elementary strategy at every stage of the game depends only on the current state. Shapley also gave a procedure for finding both the value of the game and the optimal strategies.
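Shapley's procedure is a successive-approximation scheme: replace the value estimate at each state by the value of an auxiliary matrix game whose entries are the immediate pay-off plus the expected (sub-probability) continuation value; since the stopping probability is positive, this map is a contraction. The following sketch in Python is only illustrative: the data layout, the function names and the use of scipy.optimize.linprog to solve each auxiliary matrix game are assumptions, not part of the original article.

```python
import numpy as np
from scipy.optimize import linprog

def matrix_game_value(M):
    # Value of the zero-sum matrix game M (row player maximizes):
    # max v  s.t.  p^T M[:, j] >= v for every column j,  p >= 0,  sum(p) = 1.
    m, n = M.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # linprog minimizes, so minimize -v
    A_ub = np.hstack([-M.T, np.ones((n, 1))])      # v - p^T M[:, j] <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # sum(p) = 1
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1]

def shapley_iteration(r, p, tol=1e-9, max_iter=100_000):
    # r[x]: m x n stage pay-off matrix in state x.
    # p[x]: m x n x |X| sub-probability transitions; each row sums to
    #       less than 1, the deficit being the stopping probability,
    #       which makes the update below a contraction.
    v = np.zeros(len(r))
    for _ in range(max_iter):
        v_new = np.array([matrix_game_value(r[x] + p[x] @ v)
                          for x in range(len(r))])
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    return v_new

# Toy two-state game, one action per player, stopping probability 1/2:
r = [np.array([[1.0]]), np.array([[0.0]])]
p = [np.array([[[0.0, 0.5]]]), np.array([[[0.5, 0.0]]])]
print(shapley_iteration(r, p))   # approx. [4/3, 2/3]
```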

Stochastic games which differ from Shapley games in that a play may continue indefinitely have also been studied; they are called stochastic games with limiting mean pay-off, i.e. two-person zero-sum stochastic games with

$$ h _ {1} ( P) = - h _ {2} ( P) = \limsup_{n \rightarrow \infty } \ \frac{1}{n} \sum_{k=1} ^ { n } h _ {1} ( x _ {k} , s ^ {( x _ {k} ) } ). $$

The existence of a value for such a game, and of stationary optimal strategies, has been proved under ergodicity hypotheses on the Markov chain that arises when arbitrary stationary strategies are substituted into the transition functions $ F( x _ {k} \mid x _ {k-1} , s ^ {( x _ {k-1} ) } ) $. These results have been generalized to the case where the restrictions on the numbers of states and elementary strategies are removed, and to other forms of pay-off.
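When both players fix stationary strategies, the play indeed reduces to a Markov chain on $ X $, and under the ergodicity hypothesis the limiting mean pay-off above equals the expected stage pay-off averaged against the chain's stationary distribution. A minimal sketch of that computation, with an assumed array layout (names and shapes are illustrative, not from the article):

```python
import numpy as np

def mean_payoff_under_stationary(P, r, pi1, pi2):
    # P[x, a, b, y]: probability of moving from state x to state y
    #                when the players choose actions (a, b).
    # r[x, a, b]:    stage pay-off to player 1.
    # pi1[x, a], pi2[x, b]: fixed stationary (behavioural) strategies.
    Q = np.einsum('xa,xb,xaby->xy', pi1, pi2, P)   # induced Markov chain
    g = np.einsum('xa,xb,xab->x', pi1, pi2, r)     # expected stage pay-off
    # Stationary distribution: left eigenvector of Q for eigenvalue 1
    # (unique when the induced chain is ergodic).
    w, vecs = np.linalg.eig(Q.T)
    mu = np.real(vecs[:, np.argmin(np.abs(w - 1))])
    mu = mu / mu.sum()
    return mu @ g                                   # limiting mean pay-off
```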

References

[1] L.S. Shapley, "Stochastic games", Proc. Nat. Acad. Sci., 39 (1953), pp. 1095–1100
[2] D. Gillette, "Stochastic games with zero stop probabilities", in Contributions to the Theory of Games, 3, Princeton Univ. Press (1957), pp. 179–187
[3] T. Parthasarathy, M. Stern, "Markov games: A survey", in E. Roxin, P. Liers, R. Sternberg (eds.), Differential Games and Control Theory, M. Dekker (1977), pp. 1–46
[4] K. Vrieze, "Zero-sum stochastic games", in Surveys in Game Theory and Related Topics, CWI Tracts, 39, CWI (1987), pp. 103–132
[5] V.K. Domanskii, Kibernetika, 1 (1988), pp. 26–49

Comments

In 1981, J.F. Mertens and A. Neyman proved the existence of the value for arbitrary stochastic games with limiting mean pay-off [a2].

There has been a lot of research on the asymptotic theory of stochastic games, using, e.g., discounted pay-offs; see [a1]–[a3].
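For orientation, and as an addition to the original article: the discounted pay-off referred to here is, in the notation above,

$$ h _ \lambda ( P) = \lambda \sum_{k=1} ^ \infty ( 1 - \lambda ) ^ {k-1} h _ {1} ( x _ {k} , s ^ {( x _ {k} ) } ), \quad 0 < \lambda \leq 1; $$

[a1] shows that the values $ v _ \lambda $ of the discounted games converge as $ \lambda \rightarrow 0 $, and [a2] identifies this limit with the value of the game with limiting mean pay-off.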

References

[a1] T. Bewley, E. Kohlberg, "The asymptotic theory of stochastic games", Math. Oper. Res., 1 (1976), pp. 197–208
[a2] J.F. Mertens, A. Neyman, "Stochastic games have a value", Proc. Nat. Acad. Sci., 79 (1982), pp. 2145–2146
[a3] T.E.S. Raghavan, T.S. Ferguson, T. Parthasarathy, O.J. Vrieze (eds.), Stochastic Games and Related Topics (in Honour of L.S. Shapley), Kluwer (1991)