Continuous game

A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe (noughts and crosses) or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous game concept allows games to include more general sets of pure strategies, which may be uncountably infinite.

In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium. If, however, the strategy sets are required to be compact and the utility functions continuous, then a Nash equilibrium is guaranteed to exist; this follows from Irving Glicksberg's generalization of the Kakutani fixed point theorem. For this reason, the class of continuous games is usually defined and studied as a subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.


Formal definition

Define the ''n''-player continuous game G = (P, \mathbf{C}, \mathbf{u}) where

::P = \{1, 2, \ldots, n\} is the set of n players,

::\mathbf{C} = (C_1, C_2, \ldots, C_n), where each C_i is a compact set, in a metric space, corresponding to the ''i''th player's set of pure strategies,

::\mathbf{u} = (u_1, u_2, \ldots, u_n), where u_i : \mathbf{C} \to \R is the utility function of player i.

We define \Delta_i to be the set of Borel probability measures on C_i, giving us the mixed strategy space of player ''i''. Define the strategy profile \boldsymbol{\sigma} = (\sigma_1, \sigma_2, \ldots, \sigma_n) where \sigma_i \in \Delta_i.

Let \boldsymbol{\sigma}_{-i} be a strategy profile of all players except for player i. As with discrete games, we can define a best response correspondence for player i, b_i. b_i is a relation from the set of all probability distributions over opponent player profiles to a set of player i's strategies, such that each element of

:b_i(\sigma_{-i})

is a best response to \sigma_{-i}. Define

:\mathbf{b}(\boldsymbol{\sigma}) = b_1(\sigma_{-1}) \times b_2(\sigma_{-2}) \times \cdots \times b_n(\sigma_{-n}).

A strategy profile \boldsymbol{\sigma}^* is a Nash equilibrium if and only if \boldsymbol{\sigma}^* \in \mathbf{b}(\boldsymbol{\sigma}^*).

The existence of a Nash equilibrium for any continuous game with continuous utility functions can be proven using Irving Glicksberg's generalization of the Kakutani fixed point theorem. In general, there may not be a solution if we allow strategy spaces C_i which are not compact, or if we allow non-continuous utility functions.
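
For intuition, the following sketch discretizes a two-player zero-sum continuous game on [0, 1] x [0, 1] and computes grid best responses. It is only an illustrative approximation of the definitions above: the quadratic payoff kernel is an arbitrary example (it reappears in the polynomial game example below), and the finite grid stands in for the compact strategy sets.

import numpy as np

# Illustrative sketch: approximate a two-player zero-sum continuous game on
# [0, 1] x [0, 1] by restricting each player to a finite grid of pure strategies.
# H is an example payoff kernel; player X receives H(x, y), player Y receives -H(x, y).

def H(x, y):
    return (x - y) ** 2

grid = np.linspace(0.0, 1.0, 101)           # discretized pure strategy sets
payoff = H(grid[:, None], grid[None, :])    # payoff[i, j] = H(x_i, y_j)

best_x_vs_y = grid[np.argmax(payoff, axis=0)]   # X's best grid reply to each y_j
best_y_vs_x = grid[np.argmin(payoff, axis=1)]   # Y's best grid reply to each x_i

print(best_x_vs_y[:3], best_y_vs_x[:3])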


Separable games

A separable game is a continuous game where, for any i, the utility function u_i : \mathbf{C} \to \R can be expressed in the sum-of-products form:

:u_i(\mathbf{s}) = \sum_{k_1=1}^{m_1} \cdots \sum_{k_n=1}^{m_n} a_{i\,k_1 \ldots k_n}\, f_{1 k_1}(s_1) \cdots f_{n k_n}(s_n),

where \mathbf{s} \in \mathbf{C}, s_i \in C_i, a_{i\,k_1 \ldots k_n} \in \R, and the functions f_{i k} : C_i \to \R are continuous.

A polynomial game is a separable game where each C_i is a compact interval on \R and each utility function can be written as a multivariate polynomial.

In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games, as implied by the following theorem:

:For any separable game there exists at least one Nash equilibrium where player ''i'' mixes at most m_i + 1 pure strategies.

Whereas an equilibrium strategy for a non-separable game may require an uncountably infinite support, a separable game is guaranteed to have at least one Nash equilibrium with finitely supported mixed strategies.
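
As a small illustration of the sum-of-products form, the sketch below evaluates a two-player polynomial utility u_1(x, y) = \sum_{j,k} a_{jk} x^j y^k, taking f_{1j}(x) = x^j and f_{2k}(y) = y^k as the basis functions. The coefficient matrix is an arbitrary illustrative choice, not taken from any particular game.

import numpy as np

# A two-player polynomial game utility in sum-of-products (separable) form:
#   u1(x, y) = sum_{j,k} a[j, k] * x**j * y**k,  with x, y in [0, 1].
# The coefficient matrix `a` is an arbitrary illustrative choice.

a = np.array([[0.0, -1.0],
              [1.0,  0.5]])        # a[j, k] multiplies x**j * y**k

def u1(x, y):
    powers_x = np.array([x ** j for j in range(a.shape[0])])   # f_{1j}(x) = x**j
    powers_y = np.array([y ** k for k in range(a.shape[1])])   # f_{2k}(y) = y**k
    return powers_x @ a @ powers_y

print(u1(0.25, 0.75))   # evaluate the separable utility at one pure strategy profile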


Examples


Separable games


A polynomial game

Consider a zero-sum 2-player game between players X and Y, with C_X = C_Y = [0, 1]. Denote elements of C_X and C_Y as x and y respectively. Define the utility functions H(x,y) = u_x(x,y) = -u_y(x,y) where

:H(x,y) = (x - y)^2.

The pure strategy best response relations are:

:b_X(y) = \begin{cases} 1, & \text{if } y \in [0, 1/2) \\ 0 \text{ or } 1, & \text{if } y = 1/2 \\ 0, & \text{if } y \in (1/2, 1] \end{cases}

:b_Y(x) = x

b_X(y) and b_Y(x) do not intersect, so there is no pure strategy Nash equilibrium. However, a mixed strategy equilibrium must exist. To find it, express the expected value v = \mathbb{E}[H(x,y)] as a linear combination of the first and second moments of the probability distributions of X and Y:

:v = \mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2}

(where \mu_{XN} = \mathbb{E}[x^N] and similarly for Y). The constraints on \mu_{X1} and \mu_{X2} (with similar constraints for y) are given by Hausdorff as:

:\begin{cases} \mu_{X1} \ge \mu_{X2} \\ \mu_{X1}^2 \le \mu_{X2} \end{cases} \qquad \begin{cases} \mu_{Y1} \ge \mu_{Y2} \\ \mu_{Y1}^2 \le \mu_{Y2} \end{cases}

Each pair of constraints defines a compact convex subset in the plane. Since v is linear, any extrema with respect to a player's first two moments will lie on the boundary of this subset. Player i's equilibrium strategy will lie on

:\mu_{i1} = \mu_{i2} \text{ or } \mu_{i1}^2 = \mu_{i2}.

Note that the first equation only permits mixtures of 0 and 1, whereas the second equation only permits pure strategies. Moreover, if the best response at a certain point to player i lies on \mu_{i1} = \mu_{i2}, it will lie on the whole line, so that both 0 and 1 are a best response. b_Y(\mu_{X1}, \mu_{X2}) simply gives the pure strategy y = \mu_{X1}, so b_Y will never give both 0 and 1. However, b_X gives both 0 and 1 when y = 1/2. A Nash equilibrium exists when:

:(\mu_{X1}^*, \mu_{X2}^*, \mu_{Y1}^*, \mu_{Y2}^*) = (1/2, 1/2, 1/2, 1/4)

This determines one unique equilibrium where Player X plays a random mixture of 0 for 1/2 of the time and 1 the other 1/2 of the time. Player Y plays the pure strategy of 1/2. The value of the game is 1/4.
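
This equilibrium can be checked numerically; the sketch below (illustrative only) confirms that against X's 50/50 mixture on {0, 1} player Y can do no better than y = 1/2, that against y = 1/2 player X can do no better than 0 or 1, and that the value in both cases is 1/4.

import numpy as np

# Verify the claimed equilibrium of H(x, y) = (x - y)**2 on [0, 1] x [0, 1]:
# X mixes 0 and 1 with probability 1/2 each, Y plays 1/2, value = 1/4.

grid = np.linspace(0.0, 1.0, 1001)

# Expected payoff to X as a function of Y's pure strategy y,
# when X plays the 50/50 mixture on {0, 1}:
payoff_vs_y = 0.5 * (0.0 - grid) ** 2 + 0.5 * (1.0 - grid) ** 2
print(grid[np.argmin(payoff_vs_y)], payoff_vs_y.min())   # ~0.5 and 0.25 (Y's best reply)

# Payoff to X as a function of X's pure strategy x, when Y plays 1/2:
payoff_vs_x = (grid - 0.5) ** 2
print(grid[np.argmax(payoff_vs_x)], payoff_vs_x.max())   # 0.0 (or 1.0) and 0.25 (X's best reply)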


Non-separable games


A rational pay-off function

Consider a zero-sum 2-player game between players X and Y, with C_X = C_Y = [0, 1]. Denote elements of C_X and C_Y as x and y respectively. Define the utility functions H(x,y) = u_x(x,y) = -u_y(x,y) where

:H(x,y) = \frac{(1+x)(1+y)(1-xy)}{(1+xy)^2}.

This game has no pure strategy Nash equilibrium. It can be shown that a unique mixed strategy Nash equilibrium exists with the following pair of probability density functions:

:f^*(x) = \frac{2}{\pi\sqrt{x}(1+x)} \qquad g^*(y) = \frac{2}{\pi\sqrt{y}(1+y)}.

The value of the game is 4/\pi.
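
The stated value can be checked numerically; the sketch below (illustrative only) integrates H against the two equilibrium densities after the substitution x = s^2, y = t^2, which removes the 1/\sqrt{x} singularity at the origin, and compares the result with 4/\pi.

import numpy as np

# Check that the equilibrium densities give expected payoff 4/pi ~ 1.2732.
# With x = s**2, the density f*(x) dx = 2/(pi*sqrt(x)*(1+x)) dx becomes
# 4/(pi*(1+s**2)) ds, so the transformed integrand is smooth on [0, 1]^2.

def H(x, y):
    return (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2

n = 1000
s = (np.arange(n) + 0.5) / n                  # midpoint grid on (0, 1)
S, T = np.meshgrid(s, s, indexing="ij")

weight = (4.0 / np.pi) / (1 + S ** 2) * (4.0 / np.pi) / (1 + T ** 2)
value = (H(S ** 2, T ** 2) * weight).mean()   # midpoint rule on the unit square

print(value, 4 / np.pi)                       # both approximately 1.27324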


Requiring a Cantor distribution

Consider a zero-sum 2-player game between players X and Y, with C_X = C_Y = [0, 1]. Denote elements of C_X and C_Y as x and y respectively. Define the utility functions H(x,y) = u_x(x,y) = -u_y(x,y) where

:H(x,y)=\sum_{n=0}^\infty \frac{1}{2^n}\left(2x^n-\left(\left(1-\frac{x}{3}\right)^n+\left(\frac{x}{3}\right)^n\right)\right)\left(2y^n-\left(\left(1-\frac{y}{3}\right)^n+\left(\frac{y}{3}\right)^n\right)\right).

This game has a unique mixed strategy equilibrium where each player plays a mixed strategy with the Cantor singular function as the cumulative distribution function (Gross, O. (1952). "A rational payoff characterization of the Cantor distribution." Technical Report D-1349, The RAND Corporation).
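
A rough numerical illustration of the indifference property behind this equilibrium is sketched below (illustrative only: the series is truncated at a finite number of terms and the check is Monte Carlo). It samples the Cantor distribution by drawing base-3 digits from {0, 2} and verifies that, against a Cantor-distributed opponent, the expected payoff is approximately zero whatever pure strategy the other player chooses.

import numpy as np

rng = np.random.default_rng(0)

def cantor_samples(size, depth=40):
    # Cantor-distributed variates: base-3 expansions using only the digits 0 and 2.
    digits = 2 * rng.integers(0, 2, size=(size, depth))
    powers = 3.0 ** -np.arange(1, depth + 1)
    return digits @ powers

def H(x, y, terms=30):
    # Truncation of the series payoff H(x, y) given above to `terms` terms.
    total = np.zeros(np.broadcast(x, y).shape)
    for n in range(terms):
        fx = 2 * x ** n - (1 - x / 3) ** n - (x / 3) ** n
        fy = 2 * y ** n - (1 - y / 3) ** n - (y / 3) ** n
        total += fx * fy / 2 ** n
    return total

xs = cantor_samples(100_000)            # X plays the Cantor distribution
for y in (0.0, 0.3, 0.5, 0.9):
    print(y, H(xs, y).mean())           # each mean should be close to 0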


Further reading

* H. W. Kuhn and A. W. Tucker, eds. (1950). ''Contributions to the Theory of Games: Vol. II.'' Annals of Mathematics Studies 28. Princeton University Press. ISBN 0-691-07935-8.


See also

* Graph continuous

