# Game on the unit square

A two-person zero-sum game in which the set of pure strategies of the players $\textrm{ I }$ and $\textrm{ II }$ is the segment $[0,\ 1]$. With a suitable normalization, any two-person zero-sum game for which the set of strategies for each player is a continuum can be reduced to a game on the unit square. Games on the unit square are given by pay-off functions $K (x,\ y)$ defined on the unit square. Mixed strategies for the players are distribution functions on the unit interval. If the pay-off function is bounded and measurable in both arguments, the pay-off of player $\textrm{ I }$ when players $\textrm{ I }$ and $\textrm{ II }$ apply mixed strategies $F$ and $G$, respectively, is, by definition,

$$K (F,\ G) \ = \ \int\limits _ { 0 } ^ { 1 } \int\limits _ { 0 } ^ { 1 } K (x,\ y) \ dF (x) \ dG (y).$$
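When both mixed strategies are concentrated on finitely many points, the double integral above reduces to a finite double sum. The following sketch (the pay-off function $(x-y)^2$ and the particular strategies are illustrative choices, not part of the article) evaluates such a pay-off:

```python
# Pay-off K(F, G) of a game on the unit square when the mixed
# strategies F and G have finite support: the double integral
# reduces to a probability-weighted double sum.

def payoff(x, y):
    # Illustrative continuous pay-off function on the unit square.
    return (x - y) ** 2

def mixed_payoff(F, G):
    # F, G: lists of (point, probability) pairs on [0, 1].
    return sum(p * q * payoff(x, y) for x, p in F for y, q in G)

F = [(0.0, 0.5), (1.0, 0.5)]   # player I mixes the endpoints equally
G = [(0.5, 1.0)]               # player II plays 1/2 with certainty
print(mixed_payoff(F, G))      # 0.25
```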

If $K (x,\ y)$ is continuous in both arguments, then

$$\max _ { F } \ \mathop{\rm min} _ { G } \ K (F,\ G) \ = \ \mathop{\rm min} _ { G } \ \max _ { F } \ K (F,\ G) \ = \ v,$$

that is, for such a game the minimax principle is valid and there exist a value of the game, denoted by $v$, and optimal strategies for both players. Theorems on the existence of values of games (minimax theorems) have been proved under weaker assumptions on the pay-off functions. It follows from general minimax theorems, for example, that there exists a value for games on the unit square with bounded pay-off functions that are upper semi-continuous in $x$ or lower semi-continuous in $y$. Existence theorems have also been proved for the value of a game for certain special classes of discontinuous pay-off functions (for example, for a game of timing, cf. Game involving the choice of the moment of time). However, not all games on the unit square have values. Thus, for the function $K (x,\ y)$ defined by

$$K (x,\ y) \ = \ \begin{cases} -1, & x < y \neq 1 \ \textrm{ or } \ x = 1,\ y \neq 1; \\ 0, & x = y; \\ 1, & y < x \neq 1 \ \textrm{ or } \ y = 1,\ x \neq 1; \end{cases}$$

the following equality holds:

$$\sup _ { F } \ \inf _ { G } \ K (F,\ G) \ = \ - 1,\ \ \inf _ { G } \ \sup _ { F } \ K (F,\ G) \ = \ 1.$$
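The absence of a value can already be seen at the level of pure strategies: whichever $x$ player $\textrm{ I }$ fixes, player $\textrm{ II }$ has a reply that wins outright, and symmetrically. A small sketch (the particular replies chosen below are illustrative) checks this for the pay-off function just defined:

```python
def K(x, y):
    # The pay-off function from the example: the larger number wins,
    # except that the choice 1 loses to every choice other than 1.
    if x == y:
        return 0
    if (x < y and y != 1) or (x == 1 and y != 1):
        return -1
    return 1

def reply_for_II(x):
    # A winning reply of player II to a fixed x: a point strictly
    # between x and 1 if x < 1, and anything other than 1 if x = 1.
    return (x + 1) / 2 if x < 1 else 0.5

def reply_for_I(y):
    # The symmetric winning reply of player I to a fixed y.
    return (y + 1) / 2 if y < 1 else 0.5

for t in [0.0, 0.3, 0.999, 1.0]:
    assert K(t, reply_for_II(t)) == -1   # II beats any fixed x
    assert K(reply_for_I(t), t) == 1     # I beats any fixed y
```

Mixing does not help either: for any mixed strategy $F$ of player $\textrm{ I }$, player $\textrm{ II }$ can place $y$ close enough to $1$ (but $\neq 1$) to win with probability near one, which is the source of the gap between the lower and upper values above.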

The structure of the set of games with a unique solution (cf. Solution in game theory) has been determined for games on the unit square with continuous pay-off functions. Namely, the set of continuous functions of two variables for which the corresponding game on the unit square has a unique solution, in which the optimal strategies of both players are continuous and their supports (see Support of a measure) are nowhere-dense perfect sets of Lebesgue measure zero, contains an everywhere-dense subset of type $G _ \delta$.

There are no general methods for solving games on the unit square. Nevertheless, for some classes of such games it is possible either to find a solution analytically (for example, for games of timing, or for games with pay-off functions depending only on the difference of the strategies of the two players and having optimal equalizing strategies), or to prove the existence of optimal strategies with finite support (for example, for a convex game, a degenerate game, or a bell-shaped game). In the latter case the problem of solving a game on the unit square reduces to the solution of some matrix game. Approximate methods may be applied to solve games with continuous pay-off functions.
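As an illustration of the convex case (a standard worked example, with details not given in the article): for the pay-off function $K(x,\ y) = (x-y)^2$, which is convex in $y$, the strategies $F^* = \tfrac{1}{2}\delta_0 + \tfrac{1}{2}\delta_1$ for player $\textrm{ I }$ and $G^* = \delta_{1/2}$ for player $\textrm{ II }$ form a saddle point with value $1/4$, so player $\textrm{ I }$'s optimal strategy indeed has finite support. The sketch below checks the saddle-point inequalities $K(x,\ G^*) \leq 1/4 \leq K(F^*,\ y)$ against a grid of pure strategies:

```python
# Verify the saddle point of the convex game K(x, y) = (x - y)**2:
# player I mixes 0 and 1 equally, player II plays 1/2; value = 1/4.

def K(x, y):
    return (x - y) ** 2

def K_F_star(y):
    # Pay-off of player I's mixture (1/2 at 0, 1/2 at 1) vs a pure y.
    return 0.5 * K(0.0, y) + 0.5 * K(1.0, y)

def K_G_star(x):
    # Pay-off of a pure x vs player II's pure strategy 1/2.
    return K(x, 0.5)

grid = [i / 100 for i in range(101)]
assert all(K_G_star(x) <= 0.25 for x in grid)  # II concedes at most 1/4
assert all(K_F_star(y) >= 0.25 for y in grid)  # I guarantees at least 1/4
```

Since $K(F^*,\ y) = y^2 - y + \tfrac{1}{2}$ attains its minimum $1/4$ at $y = 1/2$, and $K(x,\ G^*) = (x - \tfrac{1}{2})^2 \leq 1/4$ on $[0,\ 1]$, the inequalities hold for all pure strategies, not only on the grid.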
