A continuous game is a mathematical concept, used in game theory, that generalizes the idea of an ordinary game like tic-tac-toe (noughts and crosses) or checkers (draughts). In other words, it extends the notion of a discrete game, where the players choose from a finite set of pure strategies. The continuous game concept allows games to include more general sets of pure strategies, which may be uncountably infinite.

In general, a game with uncountably infinite strategy sets will not necessarily have a Nash equilibrium solution. If, however, the strategy sets are required to be compact and the utility functions continuous, then a Nash equilibrium is guaranteed; this follows from Glicksberg's generalization of the Kakutani fixed point theorem. The class of continuous games is for this reason usually defined and studied as a subset of the larger class of infinite games (i.e. games with infinite strategy sets) in which the strategy sets are compact and the utility functions continuous.


Formal definition

Define the ''n''-player continuous game G = (P, \mathbf{C}, \mathbf{u}) where

::P = \{1, 2, 3, \ldots, n\} is the set of n players,
:: \mathbf{C} = (C_1, C_2, \ldots, C_n), where each C_i is a compact set, in a metric space, corresponding to the i''th'' player's set of pure strategies,
:: \mathbf{u} = (u_1, u_2, \ldots, u_n), where u_i:\mathbf{C}\to \R is the utility function of player i.

We define \Delta_i to be the set of Borel probability measures on C_i, giving us the mixed strategy space of player ''i''.

Define the strategy profile \boldsymbol{\sigma} = (\sigma_1, \sigma_2, \ldots, \sigma_n) where \sigma_i \in \Delta_i.

Let \boldsymbol{\sigma}_{-i} be a strategy profile of all players except for player i. As with discrete games, we can define a best response correspondence for player i, b_i. Here b_i is a relation from the set of all probability distributions over opponent player profiles to a set of player i's strategies, such that each element of

:b_i(\sigma_{-i})

is a best response to \sigma_{-i}. Define

:\mathbf{b}(\boldsymbol{\sigma}) = b_1(\sigma_{-1}) \times b_2(\sigma_{-2}) \times \cdots \times b_n(\sigma_{-n}).

A strategy profile \boldsymbol{\sigma}^* is a Nash equilibrium if and only if \boldsymbol{\sigma}^* \in \mathbf{b}(\boldsymbol{\sigma}^*).

The existence of a Nash equilibrium for any continuous game with continuous utility functions can be proven using Irving Glicksberg's generalization of the Kakutani fixed point theorem. In general, there may not be a solution if we allow strategy spaces C_i which are not compact, or if we allow non-continuous utility functions.
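To make these definitions concrete, the following is a minimal sketch that discretizes a two-player game on \left[0,1\right] \times \left[0,1\right] and computes an approximate pure-strategy best response on a grid. The payoff function, grid resolution, and names used here are invented for illustration only and are not part of the definition above.

<syntaxhighlight lang="python">
import numpy as np

# Hypothetical two-player continuous game on [0, 1] x [0, 1], discretized on a
# grid so the best-response correspondence can be approximated numerically.
grid = np.linspace(0.0, 1.0, 201)

def u1(x, y):
    # Player 1's (invented) utility function.
    return x * y - x ** 2

def best_response_1(y):
    """Approximate b_1(y): the pure strategies maximizing u1 against a fixed y."""
    payoffs = u1(grid, y)
    return grid[np.isclose(payoffs, payoffs.max())]

print(best_response_1(0.5))  # argmax of 0.5*x - x^2 over [0, 1] is x = 0.25
</syntaxhighlight>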


Separable games

A separable game is a continuous game where, for any i, the utility function u_i:\mathbf{C}\to \R can be expressed in the sum-of-products form:

: u_i(\mathbf{s}) = \sum_{k_1=1}^{m_1} \ldots \sum_{k_n=1}^{m_n} a_{i\,k_1 \ldots k_n} f_{1 k_1}(s_1)\ldots f_{n k_n}(s_n),

where \mathbf{s} \in \mathbf{C}, s_i \in C_i, a_{i\,k_1 \ldots k_n} \in \R, and the functions f_{i k_i}:C_i \to \R are continuous.

A polynomial game is a separable game where each C_i is a compact interval on \R and each utility function can be written as a multivariate polynomial.

In general, mixed Nash equilibria of separable games are easier to compute than those of non-separable games, as implied by the following theorem:

:For any separable game there exists at least one Nash equilibrium where player ''i'' mixes at most m_i+1 pure strategies.

Whereas an equilibrium strategy for a non-separable game may require an uncountably infinite support, a separable game is guaranteed to have at least one Nash equilibrium with finitely supported mixed strategies.
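As an illustration of the sum-of-products form, the polynomial payoff (x-y)^2 used in the example below is separable with monomial basis functions. The following sketch, with a coefficient layout and names chosen here purely for convenience, evaluates it as \sum_{j,k} a_{jk}\, x^j y^k and checks the result against the closed form.

<syntaxhighlight lang="python">
import numpy as np

# (x - y)^2 = x^2 - 2*x*y + y^2 in sum-of-products form: sum_{j,k} a[j, k] * x^j * y^k,
# with monomial basis functions f_j(x) = x^j and g_k(y) = y^k for j, k = 0, 1, 2.
a = np.array([
    [0.0,  0.0, 1.0],   # contributes 1 * y^2
    [0.0, -2.0, 0.0],   # contributes -2 * x * y
    [1.0,  0.0, 0.0],   # contributes x^2 * 1
])

def u_separable(x, y):
    fx = np.array([x ** j for j in range(3)])  # (1, x, x^2)
    gy = np.array([y ** k for k in range(3)])  # (1, y, y^2)
    return fx @ a @ gy

# Sanity check against the closed form at a few points.
for x, y in [(0.0, 1.0), (0.25, 0.75), (1.0, 0.5)]:
    assert abs(u_separable(x, y) - (x - y) ** 2) < 1e-12
</syntaxhighlight>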


Examples


Separable games


A polynomial game

Consider a zero-sum 2-player game between players X and Y, with C_X = C_Y = \left[0,1\right]. Denote elements of C_X and C_Y as x and y respectively. Define the utility functions H(x,y) = u_x(x,y) = -u_y(x,y) where

:H(x,y)=(x-y)^2.

The pure strategy best response relations are:

:b_X(y) = \begin{cases} 1, & \text{if } y \in \left[0,1/2\right) \\ 0 \text{ or } 1, & \text{if } y = 1/2 \\ 0, & \text{if } y \in \left(1/2,1\right] \end{cases}

:b_Y(x) = x

b_X(y) and b_Y(x) do not intersect, so there is no pure strategy Nash equilibrium. However, since the strategy sets are compact and H is continuous, a mixed strategy equilibrium must exist. To find it, express the expected value, v = \mathbb{E}[H(x,y)], as a linear combination of the first and second moments of the probability distributions of X and Y:

: v = \mu_{X2} - 2\mu_{X1}\mu_{Y1} + \mu_{Y2}

(where \mu_{XN} = \mathbb{E}[x^N] and similarly for Y). The constraints on \mu_{X1} and \mu_{X2} (with similar constraints for Y) are given by Hausdorff as:

: \begin{cases} \mu_{X1} \ge \mu_{X2} \\ \mu_{X1}^2 \le \mu_{X2} \end{cases} \qquad \begin{cases} \mu_{Y1} \ge \mu_{Y2} \\ \mu_{Y1}^2 \le \mu_{Y2} \end{cases}

Each pair of constraints defines a compact convex subset in the plane. Since v is linear, any extrema with respect to a player's first two moments will lie on the boundary of this subset. Player i's equilibrium strategy will lie on

: \mu_{i1} = \mu_{i2} \text{ or } \mu_{i1}^2 = \mu_{i2}.

Note that the first equation only permits mixtures of 0 and 1, whereas the second equation only permits pure strategies. Moreover, if the best response at a certain point to player i lies on \mu_{i1} = \mu_{i2}, it will lie on the whole line, so that both 0 and 1 are a best response. b_Y(\mu_{X1},\mu_{X2}) simply gives the pure strategy y = \mu_{X1}, so b_Y will never give both 0 and 1. However b_X gives both 0 and 1 when y = 1/2. A Nash equilibrium exists when:

: (\mu_{X1}^*, \mu_{X2}^*, \mu_{Y1}^*, \mu_{Y2}^*) = (1/2, 1/2, 1/2, 1/4)

This determines one unique equilibrium where Player X plays 0 half of the time and 1 the other half of the time, while Player Y plays the pure strategy 1/2. The value of the game is 1/4.
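The equilibrium just derived can be verified numerically. The following is only an illustrative check, not part of the derivation above: it evaluates the value 1/4 at the candidate equilibrium and confirms on a fine grid that neither player can improve by deviating to any pure strategy.

<syntaxhighlight lang="python">
import numpy as np

def H(x, y):
    # Payoff to player X in the zero-sum game: H(x, y) = (x - y)^2.
    return (x - y) ** 2

# Candidate equilibrium: X plays 0 or 1 with probability 1/2 each, Y plays 1/2.
x_support, x_probs = np.array([0.0, 1.0]), np.array([0.5, 0.5])
y_star = 0.5

value = np.sum(x_probs * H(x_support, y_star))
print(value)  # 0.25, the value of the game

grid = np.linspace(0.0, 1.0, 1001)

# X maximizes H: no pure deviation x should beat the value against y* = 1/2.
best_x_deviation = max(H(x, y_star) for x in grid)
# Y minimizes H: no pure deviation y should push the expected payoff below the value.
best_y_deviation = min(np.sum(x_probs * H(x_support, y)) for y in grid)

assert best_x_deviation <= value + 1e-9
assert best_y_deviation >= value - 1e-9
</syntaxhighlight>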


Non-separable games


A rational payoff function

Consider a zero-sum 2-player game between players X and Y, with C_X = C_Y = \left[0,1\right]. Denote elements of C_X and C_Y as x and y respectively. Define the utility functions H(x,y) = u_x(x,y) = -u_y(x,y) where

:H(x,y)=\frac{(1+x)(1+y)(1-xy)}{(1+xy)^2}.

This game has no pure strategy Nash equilibrium. It can be shown that a unique mixed strategy Nash equilibrium exists with the following pair of cumulative distribution functions:

: F^*(x) = \frac{4}{\pi} \arctan\sqrt{x} \qquad G^*(y) = \frac{4}{\pi} \arctan\sqrt{y}.

Or, equivalently, the following pair of probability density functions:

: f^*(x) = \frac{2}{\pi\sqrt{x}\,(1+x)} \qquad g^*(y) = \frac{2}{\pi\sqrt{y}\,(1+y)}.

The value of the game is 4/\pi.
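This equilibrium can be checked numerically, assuming the payoff and densities given above. The sketch below uses SciPy's quad integrator (an implementation choice made here) to confirm that f^* integrates to 1 and that it equalizes the game: against f^*, every pure strategy of Y yields the same expected payoff 4/\pi.

<syntaxhighlight lang="python">
import numpy as np
from scipy import integrate

def H(x, y):
    # Payoff H(x, y) = (1 + x)(1 + y)(1 - x*y) / (1 + x*y)^2.
    return (1 + x) * (1 + y) * (1 - x * y) / (1 + x * y) ** 2

def f_star(x):
    # Claimed equilibrium density for player X.
    return 2.0 / (np.pi * np.sqrt(x) * (1 + x))

# f* should integrate to 1 over [0, 1] (the singularity at 0 is integrable).
mass, _ = integrate.quad(f_star, 0.0, 1.0)
print(mass)  # ~1.0

# Against f*, every pure strategy y should give the same expected payoff, 4/pi.
for y in (0.0, 0.25, 0.5, 0.75, 1.0):
    expected, _ = integrate.quad(lambda x: H(x, y) * f_star(x), 0.0, 1.0)
    print(y, expected, 4.0 / np.pi)
</syntaxhighlight>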


Requiring a Cantor distribution

Consider a zero-sum 2-player game between players X and Y, with C_X = C_Y = \left[0,1\right]. Denote elements of C_X and C_Y as x and y respectively. Define the utility functions H(x,y) = u_x(x,y) = -u_y(x,y) where

:H(x,y)=\sum_{n=1}^\infty \frac{1}{4^n}\left(2x^n-\left(\left(1-\frac{x}{3}\right)^n+\left(\frac{x}{3}\right)^n\right)\right) \left(2y^n-\left(\left(1-\frac{y}{3}\right)^n+\left(\frac{y}{3}\right)^n\right)\right).

This game has a unique mixed strategy equilibrium where each player plays a mixed strategy with the Cantor singular function as the cumulative distribution function (Gross 1952).
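Under the payoff written above, the Cantor equilibrium can be probed by Monte Carlo: a draw from the Cantor distribution is obtained from a random ternary expansion that uses only the digits 0 and 2. The sketch below, whose truncation depths and sample size are arbitrary choices made here, checks that when X plays the Cantor distribution the expected payoff against any fixed pure strategy y is approximately 0.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

def cantor_samples(size, depth=35):
    # Approximate Cantor-distributed draws: random ternary digits from {0, 2}.
    digits = 2 * rng.integers(0, 2, size=(size, depth))
    scales = 3.0 ** -np.arange(1, depth + 1)
    return digits @ scales

def H(xs, y, terms=30):
    # Truncated payoff sum_{n=1}^{terms} 4^{-n} f_n(x) f_n(y),
    # with f_n(t) = 2*t^n - (1 - t/3)^n - (t/3)^n.
    total = np.zeros_like(xs)
    for n in range(1, terms + 1):
        fx = 2 * xs ** n - (1 - xs / 3) ** n - (xs / 3) ** n
        fy = 2 * y ** n - (1 - y / 3) ** n - (y / 3) ** n
        total += fx * fy / 4 ** n
    return total

# With X Cantor-distributed, E[H(X, y)] should be ~0 for every pure strategy y.
xs = cantor_samples(200_000)
for y in (0.0, 0.3, 0.5, 0.9, 1.0):
    print(y, H(xs, y).mean())  # each close to 0 (Monte Carlo error ~1e-3)
</syntaxhighlight>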


Further reading

* H. W. Kuhn and A. W. Tucker, eds. (1950). ''Contributions to the Theory of Games: Vol. II.'' Annals of Mathematics Studies 28. Princeton University Press. ISBN 0-691-07935-8.


See also

* Graph continuous


References

* Gross, O. (1952). "A rational payoff characterization of the Cantor distribution." Technical Report D-1349, The RAND Corporation.