Setup and definitions
We start with a ''basic game'', also known as the ''stage game'', which is an ''n''-player game. In this game, each player has finitely many actions to choose from, and they make their choices simultaneously and without knowledge of the other players' choices. The collective choices of the players lead to a ''payoff profile'', i.e. to a payoff for each of the players. The mapping from collective choices to payoff profiles is known to the players, and each player aims to maximize their payoff. If the collective choice is denoted by ''x'', the payoff that player ''i'' receives, also known as the utility of player ''i'', is denoted by $u_i(x)$.

Throughout, the ''minmax payoff'' of player ''i'' is the lowest payoff that the other players can force on ''i'' when ''i'' best-responds to their actions. A payoff profile is ''individually rational'' if every player receives at least their minmax payoff, and ''feasible'' if it is a convex combination of payoff profiles of the basic game.
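To make these notions concrete, the following minimal sketch computes a pure-strategy minmax payoff profile for a hypothetical two-player Prisoner's Dilemma used as the stage game. The payoff numbers, the restriction to pure strategies, and the function names are illustrative assumptions, not part of the article.

```python
import numpy as np

# Hypothetical Prisoner's Dilemma stage game (illustrative payoffs).
# Entry [a1, a2] is the payoff when player 1 plays a1 and player 2 plays a2;
# action 0 = Cooperate, action 1 = Defect.
payoff_1 = np.array([[3, 0],
                     [4, 1]])
payoff_2 = np.array([[3, 4],
                     [0, 1]])

def minmax_player1(payoff_1):
    """Pure-strategy minmax payoff of player 1: player 2 picks the column that
    minimizes player 1's best response (the max over rows)."""
    return payoff_1.max(axis=0).min()

def minmax_player2(payoff_2):
    """Pure-strategy minmax payoff of player 2: player 1 picks the row that
    minimizes player 2's best response (the max over columns)."""
    return payoff_2.max(axis=1).min()

v1, v2 = minmax_player1(payoff_1), minmax_player2(payoff_2)
print("minmax payoff profile:", (v1, v2))   # (1, 1) here
# Mutual cooperation yields (3, 3), which is feasible (a pure-action payoff
# profile) and individually rational, since it exceeds the minmax profile (1, 1).
```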
Infinitely-repeated games without discounting
In the undiscounted model, the players are patient: they do not differentiate between utilities in different time periods, so their utility in the repeated game is represented by the sum of their utilities in the basic games. When the game is repeated infinitely often, a common model for the utility in the infinitely-repeated game is the limit inferior of the mean utility: if the game results in a path of outcomes $x^0, x^1, x^2, \ldots$, where $x^t$ denotes the collective choices of the players at iteration ''t'' (''t'' = 0, 1, 2, ...), the utility of player ''i'' is defined as
:: $U_i = \liminf_{T \to \infty} \frac{1}{T} \sum_{t=0}^{T-1} u_i(x^t),$
where $u_i$ is the basic-game utility function of player ''i''. An infinitely-repeated game without discounting is often called a "supergame".

The folk theorem in this case is very simple and contains no preconditions: every individually rational and feasible payoff profile of the basic game is a Nash equilibrium payoff profile of the repeated game.

The proof employs what is called a ''grim'' or ''grim trigger'' strategy. All players start by playing the prescribed action and continue to do so until someone deviates. If player ''i'' deviates, all other players switch to the action which minmaxes player ''i'' forever after. The one-stage gain from the deviation contributes 0 to the total utility of player ''i'', and from then on his utility cannot be higher than his minmax payoff. Hence all players stay on the intended path, and this is indeed a Nash equilibrium.
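The grim-trigger argument can be checked numerically. The following minimal sketch reuses the hypothetical Prisoner's Dilemma from above and approximates the limit-of-means utility by a long finite horizon; all numbers and names are illustrative assumptions.

```python
import numpy as np

# Same hypothetical Prisoner's Dilemma as above (0 = Cooperate, 1 = Defect).
payoff_1 = np.array([[3, 0], [4, 1]])
COOPERATE, DEFECT = 0, 1

def grim_trigger_path(deviate_at=None, horizon=100_000):
    """Both players follow grim trigger on the (Cooperate, Cooperate) path;
    player 1 optionally deviates once at `deviate_at`, after which everyone
    defects forever (player 2 minmaxes player 1, player 1 best-responds)."""
    path, punished = [], False
    for t in range(horizon):
        a1 = DEFECT if punished or t == deviate_at else COOPERATE
        a2 = DEFECT if punished else COOPERATE
        path.append((a1, a2))
        punished = punished or a1 == DEFECT or a2 == DEFECT
    return path

def mean_payoff_1(path):
    """Finite-horizon approximation of player 1's limit-of-means utility."""
    return np.mean([payoff_1[a1, a2] for a1, a2 in path])

print(mean_payoff_1(grim_trigger_path()))              # ~3.0 on the agreement path
print(mean_payoff_1(grim_trigger_path(deviate_at=0)))  # ~1.0, the minmax payoff
# The one-stage gain (4 instead of 3) contributes nothing to the long-run mean,
# so deviating does not pay: grim trigger supports the payoff profile (3, 3).
```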
Subgame perfection
The above Nash equilibrium is not always subgame perfect: if punishment is costly for the punishers, the threat of punishment is not credible. A subgame-perfect equilibrium requires a slightly more complicated strategy. The punishment should not last forever; it should last only a finite time which is sufficient to wipe out the gains from deviation (a back-of-the-envelope version of this is sketched at the end of this section). After that, the other players should return to the equilibrium path. The limit-of-means criterion ensures that any finite-time punishment has no effect on the final outcome, so limited-time punishment is a subgame-perfect equilibrium.
* Coalition subgame-perfect equilibria: An equilibrium is called a ''coalition Nash equilibrium'' if no coalition can gain from deviating. It is called a ''coalition subgame-perfect equilibrium'' if no coalition can gain from deviating after any history. With the limit-of-means criterion, a payoff profile is attainable in coalition Nash equilibrium or in coalition subgame-perfect equilibrium if-and-only-if it is Pareto efficient and weakly coalition-individually-rational (for every nonempty coalition ''B'', there is a strategy of the other players such that, for any strategy played by ''B'', the resulting payoff is not strictly better for ''all'' members of ''B'').
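As a back-of-the-envelope illustration of how long a limited-time punishment must last, one can compare the one-stage gain against the per-period cost of being punished. The numbers below are illustrative assumptions taken from the hypothetical Prisoner's Dilemma above, not from the article.

```python
import math

# Illustrative per-period numbers (assumptions for the example):
g = 1.0   # one-stage gain from deviating (4 instead of 3 in the example above)
v = 3.0   # per-period payoff on the equilibrium path
m = 1.0   # deviator's payoff while being minmaxed

# Each punishment period costs the deviator (v - m) relative to the path,
# so the gain is wiped out once N * (v - m) >= g.
N = math.ceil(g / (v - m))
print(f"{N} punishment period(s) suffice before returning to the path")  # 1
```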
Overtaking
Some authors claim that the limit-of-means criterion is unrealistic, because it implies that utilities in any finite time-span contribute 0 to the total utility. However, if the utilities in any finite time-span contribute a positive value, and the value is undiscounted, then it is impossible to attribute a finite numeric utility to an infinite outcome sequence. A possible solution to this problem is that, instead of defining a numeric utility for each infinite outcome sequence, we just define a preference relation between two infinite sequences. We say that agent ''i'' (strictly) prefers the sequence of outcomes $x^0, x^1, x^2, \ldots$ over the sequence $y^0, y^1, y^2, \ldots$ if
:: $\liminf_{T \to \infty} \sum_{t=0}^{T} \left( u_i(x^t) - u_i(y^t) \right) > 0.$
For example, consider two sequences whose payoffs to player ''i'' are $2, 1, 1, 1, \ldots$ and $1, 1, 1, 1, \ldots$ respectively. According to the limit-of-means criterion they provide the same utility to player ''i'', but according to the overtaking criterion the first is better than the second for player ''i'' (a numerical sketch appears at the end of this section). See overtaking criterion for more information.

The folk theorems with the overtaking criterion are slightly weaker than with the limit-of-means criterion. Only outcomes that are ''strictly'' individually rational can be attained in Nash equilibrium. This is because, if an agent deviates, he gains in the short run, and this gain can be wiped out only if the punishment gives the deviator strictly less utility than the agreement path. The following folk theorems are known for the overtaking criterion:
* Strict stationary equilibria: A Nash equilibrium is called ''strict'' if each player strictly prefers the infinite sequence of outcomes attained in equilibrium over any other sequence he can deviate to. A Nash equilibrium is called ''stationary'' if the outcome is the same in each time-period. An outcome is attainable in strict stationary equilibrium if-and-only-if, for every player, the outcome is strictly better than the player's minmax outcome.
* Strict stationary subgame-perfect equilibria: An outcome is attainable in strict stationary subgame-perfect equilibrium if, for every player, the outcome is strictly better than the player's minmax outcome (note that this is not an "if-and-only-if" result). To achieve subgame-perfect equilibrium with the overtaking criterion, it is required to punish not only the player that deviates from the agreement path, but also every player that does not cooperate in punishing the deviant.
** The "stationary equilibrium" concept can be generalized to a "periodic equilibrium", in which a finite number of outcomes is repeated periodically, and the payoff in a period is the arithmetic mean of the payoffs in the outcomes. That mean payoff should be strictly above the minmax payoff.
* Strict stationary coalition equilibria: With the overtaking criterion, if an outcome is attainable in coalition Nash equilibrium, then it is Pareto efficient and weakly coalition-individually-rational. On the other hand, if it is Pareto efficient and strongly coalition-individually-rational (for every nonempty coalition ''B'', there is a strategy of the other players such that, for any strategy played by ''B'', the resulting payoff is strictly worse for ''at least one'' member of ''B''), then it can be attained in strict stationary coalition equilibrium.
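The following minimal sketch compares the two criteria numerically on the illustrative payoff streams given above (2, 1, 1, 1, ... versus 1, 1, 1, 1, ...); the finite truncation horizon is only an approximation of the limits.

```python
from itertools import accumulate

T = 10_000                    # finite horizon approximating the limits
xs = [2] + [1] * (T - 1)      # payoffs 2, 1, 1, 1, ...
ys = [1] * T                  # payoffs 1, 1, 1, 1, ...

# Limit-of-means (approximated): both streams have mean ~1, so they are equivalent.
print(sum(xs) / T, sum(ys) / T)

# Overtaking: the partial sums of the differences are identically 1, so their
# liminf is 1 > 0 and the first stream is strictly preferred to the second.
diffs = list(accumulate(x - y for x, y in zip(xs, ys)))
print(min(diffs))             # 1
```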
Infinitely-repeated games with discounting
Assume that the payoff of a player in the infinitely repeated game is given by the ''average discounted criterion'' with discount factor 0 < ''δ'' < 1:
:: $U_i = (1 - \delta) \sum_{t=0}^{\infty} \delta^t u_i(x^t).$
The discount factor indicates how patient the players are. The factor $(1 - \delta)$ is introduced so that the payoff remains bounded when $\delta \to 1$.

The folk theorem in this case requires that the target payoff profile in the repeated game strictly dominates the minmax payoff profile (i.e., each player receives strictly more than their minmax payoff). Let ''a'' be a strategy profile of the stage game with payoff profile ''u'' which strictly dominates the minmax payoff profile. One can define a Nash equilibrium of the repeated game with ''u'' as the resulting payoff profile as follows:
:1. All players start by playing ''a'' and continue to play ''a'' if no deviation occurs.
:2. If any one player, say player ''i'', deviates, play the strategy profile ''m'' which minmaxes ''i'' forever after.
:3. Ignore multilateral deviations.
If player ''i'' gets ''ε'' more than his minmax payoff in each stage by following 1, then the potential loss from the punishment is
:: $\frac{\delta}{1 - \delta} \, \varepsilon.$
If ''δ'' is close to 1, this outweighs any finite one-stage gain, making the strategy a Nash equilibrium (a numerical illustration appears at the end of this section).

An alternative statement of this folk theorem allows the equilibrium payoff profile ''u'' to be any individually rational feasible payoff profile; it only requires that there exists an individually rational feasible payoff profile that strictly dominates the minmax payoff profile. Then the folk theorem guarantees that it is possible to approach ''u'' in equilibrium to any desired precision (for every ''ε'' there exists a Nash equilibrium whose payoff profile is within distance ''ε'' of ''u'').
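Under the discounted criterion the same grim-trigger logic pins down how patient the players must be. Here is a minimal sketch with the illustrative Prisoner's Dilemma numbers used above; the specific payoffs and the resulting threshold are assumptions for the example, not figures from the article.

```python
# Illustrative per-period numbers (assumptions for the example):
u = 3.0   # payoff on the agreement path
d = 4.0   # best one-shot deviation payoff
m = 1.0   # minmax payoff received forever once punished

# Staying on the path is worthwhile iff
#   u + delta*u + delta^2*u + ...  >=  d + delta*m + delta^2*m + ...
# i.e.  u / (1 - delta)  >=  d + delta * m / (1 - delta),
# which rearranges to  delta >= (d - u) / (d - m).
delta_min = (d - u) / (d - m)
print(f"grim trigger is a Nash equilibrium for every delta >= {delta_min:.3f}")  # 0.333
```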
Subgame perfection
Attaining a subgame-perfect equilibrium in discounted games is more difficult than in undiscounted games. The cost of punishment does not vanish (as it does with the limit-of-means criterion), and it is not always possible to punish the non-punishers endlessly (as with the overtaking criterion), since the discount factor makes punishments far in the future irrelevant for the present. Hence, a different approach is needed: the punishers should be rewarded. This requires an additional assumption, namely that the set of feasible payoff profiles is full-dimensional and the minmax profile lies in its interior. The strategy is as follows.
:1. All players start by playing ''a'' and continue to play ''a'' if no deviation occurs.
:2. If any one player, say player ''i'', deviates, play the strategy profile ''m'' which minmaxes ''i'' for ''N'' periods. (Choose ''N'' and ''δ'' large enough so that no player has an incentive to deviate from phase 1.)
:3. If no player deviated from phase 2, every player ''j'' ≠ ''i'' is rewarded ''ε'' above the minmax of ''j'' forever after, while player ''i'' continues to receive his minmax. (Full-dimensionality and the interior assumption are needed here.)
:4. If player ''j'' deviated from phase 2, all players restart phase 2 with ''j'' as the target.
:5. Ignore multilateral deviations.
Player ''j'' ≠ ''i'' now has no incentive to deviate from the punishment phase 2. This proves the subgame-perfect folk theorem. A schematic of these phase transitions is sketched below.
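The following state machine is one possible way to encode the phase structure just described; the names, the fixed punishment length, and the data layout are illustrative assumptions, not a standard implementation.

```python
from dataclasses import dataclass
from typing import Optional

N_PUNISH = 5   # punishment length N; in the proof it is chosen together with delta

@dataclass(frozen=True)
class Phase:
    kind: str                      # "path", "punish" or "reward"
    target: Optional[int] = None   # player currently minmaxed / held at minmax
    remaining: int = 0             # punishment periods left

def next_phase(phase: Phase, deviator: Optional[int]) -> Phase:
    """One transition of the strategy: `deviator` is the unique unilateral
    deviator observed this period, or None (multilateral deviations are ignored)."""
    if deviator is not None:
        # Phases 2 and 4: any unilateral deviation (re)starts punishment against it.
        return Phase("punish", target=deviator, remaining=N_PUNISH)
    if phase.kind == "punish":
        if phase.remaining > 1:
            return Phase("punish", target=phase.target, remaining=phase.remaining - 1)
        # Phase 3: punishment completed without deviation; reward the punishers,
        # keep the punished player at his minmax payoff forever after.
        return Phase("reward", target=phase.target)
    return phase   # stay on the path (phase 1) or keep rewarding (phase 3)
```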
Finitely-repeated games without discount
Assume that the payoff of player ''i'' in a game that is repeated ''T'' times is given by the simple arithmetic mean
:: $U_i = \frac{1}{T} \sum_{t=0}^{T-1} u_i(x^t).$
A folk theorem for this case has the following additional requirement:
:: In the basic game, for every player ''i'', there is a Nash equilibrium that is strictly better for ''i'' than his minmax payoff.
This requirement is stronger than the requirement for discounted infinite games, which is in turn stronger than the requirement for undiscounted infinite games.

The requirement is needed because of the last step: in the last step, the only stable outcome is a Nash equilibrium of the basic game. Suppose a player ''i'' gains nothing from the Nash equilibrium (since it gives him only his minmax payoff); then there is no way to punish that player. On the other hand, if for every player there is a basic equilibrium which is strictly better than minmax, a repeated-game equilibrium can be constructed in two phases:
# In the first phase, the players alternate strategies in the required frequencies to approximate the desired payoff profile.
# In the last phase, the players play the preferred equilibrium of each of the players in turn.
In the last phase no player deviates, since the actions are already a basic-game equilibrium. If an agent deviates in the first phase, he can be punished by minmaxing him in the last phase. If the game is sufficiently long, the effect of the last phase on the average payoff is negligible, so the equilibrium payoff approaches the desired profile (as illustrated in the sketch below).
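A minimal numerical illustration of why the last phase becomes negligible as the horizon grows; the per-period payoffs and the final-phase length are illustrative assumptions.

```python
target, eq_payoff, K = 3.0, 2.0, 10   # target payoff, payoff in the final phase, final-phase length

def average_payoff(T):
    """Arithmetic mean when the first T-K periods approximate the target profile
    and the last K periods cycle through the players' preferred stage equilibria."""
    return ((T - K) * target + K * eq_payoff) / T

for T in (20, 100, 10_000):
    print(T, round(average_payoff(T), 4))   # 2.5, 2.9, 2.999 -> approaches 3
```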
Applications
Folk theorems can be applied to a diverse number of fields.
Summary of folk theorems
The following table compares various folk theorems in several aspects:
* Horizon – whether the stage game is repeated finitely or infinitely many times.
* Utilities – how the utility of a player in the repeated game is determined from the player's utilities in the stage game iterations.
* Conditions on ''G'' (the stage game) – whether there are any technical conditions that should hold in the one-shot game in order for the theorem to work.
* Conditions on ''x'' (the target payoff vector of the repeated game) – whether the theorem works for any individually rational and feasible payoff vector, or only on a subset of these vectors.
* Equilibrium type – if all conditions are met, what kind of equilibrium is guaranteed by the theorem – Nash or subgame-perfect?
* Punishment type – what kind of punishment strategy is used to deter players from deviating?
Folk theorems in other settings
In allusion to the folk theorems for repeated games, some authors have used the term "folk theorem" to refer to results on the set of possible equilibria or equilibrium payoffs in other settings, especially if the results are similar in the equilibrium payoffs they allow. For instance, Tennenholtz proves a "folk theorem" for program equilibrium. Many other folk theorems have been proved in settings with commitment.
References
* A set of introductory notes to the Folk Theorem.