<Li> If A betrays B but B remains silent, A will be set free and B will serve 3 years in prison (and vice versa) </Li>
<Li> If A and B both remain silent, both of them will only serve 1 year in prison (on the lesser charge) </Li>
<P> It is implied that the prisoners will have no opportunity to reward or punish their partner other than the prison sentences they get, and that their decision will not affect their reputation in the future. Because betraying a partner offers a greater reward than cooperating with them, all purely rational self-interested prisoners would betray the other, and so the only possible outcome for two purely rational prisoners is for them to betray each other. The interesting part of this result is that pursuing individual reward logically leads both prisoners to betray, when they would each get a better outcome if both kept silent. In reality, humans display a systemic bias towards cooperative behavior in this and similar games, much more so than predicted by simple models of "rational" self-interested action. A model based on a different kind of rationality, where people forecast how the game would be played if they formed coalitions and then maximized their forecasts, has been shown to make better predictions of the rate of cooperation in this and similar games, given only the payoffs of the game. </P>
<P> An extended "iterated" version of the game also exists, in which the classic game is played repeatedly between the same prisoners, so both continuously have an opportunity to penalize the other for previous decisions. If the number of times the game will be played is known to the players, then (by backward induction) two classically rational players will betray each other repeatedly, for the same reasons as in the single-shot variant. In a game of infinite or unknown length there is no fixed optimum strategy, and Prisoner's Dilemma tournaments have been held to compete and test algorithms. </P>
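<P> The reasoning above can be sketched in a short simulation. The sentences for mutual silence and one-sided betrayal come from the list above; the mutual-betrayal sentence (2 years each) is not stated in this excerpt and is an assumed value, chosen so that the usual ordering of outcomes holds. The strategy names and the <Code>best_response</Code> helper are illustrative, not part of any standard library. </P>

```python
SILENT, BETRAY = "silent", "betray"

# YEARS[(a, b)] = (A's sentence, B's sentence); fewer years is better.
YEARS = {
    (BETRAY, SILENT): (0, 3),  # A goes free, B serves 3 years
    (SILENT, BETRAY): (3, 0),  # and vice versa
    (SILENT, SILENT): (1, 1),  # both silent: 1 year each on the lesser charge
    (BETRAY, BETRAY): (2, 2),  # ASSUMED: mutual-betrayal sentence, not given in the text
}

def best_response(opponent_action):
    """A's action that minimizes A's own sentence, given B's fixed action."""
    return min((SILENT, BETRAY), key=lambda a: YEARS[(a, opponent_action)][0])

# Betraying is a dominant strategy: it is the best reply to either action,
# which is why two purely self-interested players both betray.
assert best_response(SILENT) == BETRAY  # 0 years beats 1 year
assert best_response(BETRAY) == BETRAY  # 2 years beats 3 years

def play_iterated(strategy_a, strategy_b, rounds):
    """Play the game repeatedly; each strategy sees the other's past moves."""
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        a = strategy_a(history_b)
        b = strategy_b(history_a)
        years_a, years_b = YEARS[(a, b)]
        total_a += years_a
        total_b += years_b
        history_a.append(a)
        history_b.append(b)
    return total_a, total_b

# Two example strategies for the iterated version: cooperate until betrayed,
# versus unconditional betrayal.
tit_for_tat = lambda opponent_history: opponent_history[-1] if opponent_history else SILENT
always_betray = lambda opponent_history: BETRAY
```

<P> For instance, <Code>play_iterated(tit_for_tat, tit_for_tat, 5)</Code> yields 5 years for each player (mutual silence every round), while pairing <Code>tit_for_tat</Code> against <Code>always_betray</Code> costs the cooperator one exploited round before it retaliates. </P>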

The term prisoner's dilemma refers to a game in which