“The Evolution of Cooperation” by Robert Axelrod
Posted 10 Jan 2011

I just finished reading Robert Axelrod's seminal book The Evolution of Cooperation. Although I had read a lot about Axelrod's work and am quite familiar with the body of literature it inspired, I had never actually read the book cover to cover. Going in, I expected a rather primitive treatment of the subject: after all, this was one of the earliest works to consider the evolutionary dynamics of cooperative behavior. Instead, besides being a brilliant piece of theoretical investigation, the book is striking for how meaningful its findings remain more than twenty-five years after they were published.
The book begins with an introductory chapter that lays out the basic paradox embodied in the cooperative behavior of organisms. Presenting the Prisoner's Dilemma (PD) as the conceptual representation of the conflict faced by interacting organisms that can either cooperate or defect, Axelrod states the central goal of the book: to determine under what conditions reciprocal cooperation can emerge.
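To make the payoff structure concrete, here is a minimal sketch of the PD in Python. The numbers are the conventional values associated with Axelrod's tournaments (reward R=3, temptation T=5, punishment P=1, sucker's payoff S=0); the dictionary and helper function are my own illustrative encoding, not anything from the book.

```python
# A minimal sketch of the Prisoner's Dilemma payoff structure. The values are
# the conventional ones associated with Axelrod's tournaments (R=3, T=5, P=1,
# S=0); the ordering T > R > P > S (plus 2R > T + S) is what creates the dilemma.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation: both receive the reward R
    ("C", "D"): (0, 5),  # sucker's payoff S versus the temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection: both receive the punishment P
}

def payoff(my_move, their_move):
    """Return (my_score, their_score) for a single round."""
    return PAYOFFS[(my_move, their_move)]
```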
The second section describes Axelrod's sophisticated exploration of the PD using round-robin tournaments run as computer programs. Realizing that he could not anticipate all possible strategies for playing the PD, Axelrod solicited submissions of strategies from experts in the field of game theory (and later a larger group including computer hobbyists) to populate the game with virtual players. A key feature of his tournament was that interactions occurred repeatedly and between all players. Submitted strategies were free to consider the full history of each pairing and to be as complex as the programming skill of each entrant allowed. This led to a host of different strategies, all with the potential to respond to how the other player had played previous rounds; allowing behavior to be contingent on cues from the local environment proved to be the critical component of the tournament rules.
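A toy version of the round-robin setup might look like the sketch below, building on the payoff helper above. The strategy interface (a function of both players' histories) and the function names are my own assumptions for illustration, not Axelrod's actual code; the 200-round default matches the length of each pairing in his first tournament.

```python
# A toy round-robin iterated-PD tournament, building on the payoff helper
# sketched earlier. Each strategy is a function (my_history, their_history) -> "C" or "D",
# so it can condition its behavior on everything that has happened in the pairing.
def play_match(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        gain_a, gain_b = payoff(move_a, move_b)
        score_a, score_b = score_a + gain_a, score_b + gain_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

def round_robin(strategies, rounds=200):
    """Total each entrant's score against every entrant (itself included)."""
    totals = {name: 0 for name in strategies}
    for name_a, strat_a in strategies.items():
        for strat_b in strategies.values():
            score_a, _ = play_match(strat_a, strat_b, rounds)
            totals[name_a] += score_a
    return totals
```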
The result of Axelrod's tournament was intriguingly simple: the best rule to play is TIT-FOR-TAT (TFT), which he describes as 1) nice; 2) not envious; 3) reciprocal; 4) forgiving; 5) not too clever. The approach employed by TFT is to cooperate first and to continue cooperating until the opposing player defects. Upon defection, TFT defects on the next round and then resumes cooperating only when and if the opposing player again cooperates. The rule is nice in that it never defects without provocation. It is not envious in that it is destined to lose or tie in each of its series with individual players, but uses this humility to generate the greatest average score. It is reciprocal in that it plays on the next round whatever its opponent plays on the present round: cross TFT and it will cross you, but play nice and it will never try to cross you. It is forgiving in that it will resume cooperation with any player that displays contrition by cooperating (which not coincidentally means that in order to entice TFT to resume cooperation, an opponent must 'pay back' TFT for the injury caused by a previous defection). Finally, TFT is not too clever in that it uses a very simple algorithm to achieve its conditional behavior pattern.
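In code, the lack of cleverness is obvious: under the toy interface sketched above, TFT is a two-line function (an always-defect rule is included for contrast).

```python
# TIT-FOR-TAT under the toy interface above: cooperate on the first move,
# then simply echo whatever the opponent did on the previous move.
def tit_for_tat(my_history, their_history):
    if not their_history:
        return "C"             # nice: never the first to defect
    return their_history[-1]   # reciprocal and forgiving: repay C with C, D with D

# For contrast, a maximally exploitative rule:
def always_defect(my_history, their_history):
    return "D"
```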
In subsequent tournaments that modified some of the original rules, TIT-FOR-TAT proved amazingly robust. It triumphed over a great variety of strategies, including many that had been written with full knowledge of TFT's behavioral rules and therefore should have had the best prospects of defeating it. When run in what Axelrod calls an "ecological" version of the tournament, in which successful strategies multiply and unsuccessful strategies perish, TFT withstood the challenge of every other strategy. In addition, TFT could invade large populations of alternative strategies, including those that always defect, so long as TFT players entered the game in a sufficiently large cluster. Pretty much the only thing that could unseat TFT was a fundamental change in the rules: if the number of rounds played, or just the probability of future interaction, was sufficiently low, TFT could be defeated by rules that were not nice and employed unprovoked defections.
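The "ecological" variant can be understood roughly as a discrete replicator process: each generation, a strategy's share of the population grows in proportion to its average score against the current population mix. The sketch below is my own minimal rendering of that idea, not Axelrod's procedure in detail.

```python
# A minimal rendering of the "ecological" tournament: each generation, a
# strategy's share of the population grows in proportion to its average payoff
# against the current mix of strategies (a discrete replicator-style update).
def ecological_step(proportions, pairwise_scores):
    """proportions: {name: share summing to 1}; pairwise_scores: {(a, b): a's score vs b}."""
    fitness = {
        a: sum(pairwise_scores[(a, b)] * share_b for b, share_b in proportions.items())
        for a in proportions
    }
    mean_fitness = sum(fitness[a] * proportions[a] for a in proportions)
    return {a: proportions[a] * fitness[a] / mean_fitness for a in proportions}
```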
In subsequent chapters Axelrod extends his model into a geographical context, allowing players to compete in local neighborhoods on a grid. Paired with the "ecological" process of winners replacing losers, this spatially-explicit version of the PD results in much richer dynamics. Once again TFT does well, but in Axelrod's simulations so too do other nice strategies. As it turns out, the main reason that TFT is superior in the well-mixed environment of the round-robin tournament is that it does reasonably well against all opponents. But in the spatially-explicit version of the iterated PD, nice players can survive in local pockets even when they are highly-vulnerable to exploitation. Here the presence of a strategy like TFT is critical: so long as there are enough TFT players to lower the overall success of exploitative strategies, the most exploitative strategies soon die out. Once these strategies do die out, TFT plays nice with the rest of the nice strategies, leaving a diversity of nice strategies in stable coexistence.
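A rough sketch of the spatial dynamic, under my own simplifying assumptions (a four-neighbor grid and deterministic replacement by the best-scoring local strategy), might look like this:

```python
# A rough sketch of the spatial variant: each cell holds a strategy, plays its
# four neighbors, and next generation adopts the strategy of the best-scoring
# player in its neighborhood (keeping its own strategy when no neighbor does better).
def spatial_step(grid, score_vs):
    """grid: {(x, y): strategy_name}; score_vs: {(a, b): a's score against b}."""
    def neighbors(x, y):
        return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) in grid]

    scores = {
        cell: sum(score_vs[(grid[cell], grid[n])] for n in neighbors(*cell))
        for cell in grid
    }
    new_grid = {}
    for cell in grid:
        best = max([cell] + neighbors(*cell), key=lambda c: scores[c])
        new_grid[cell] = grid[best]
    return new_grid
```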
Axelrod does a wonderful job of pointing out the key concepts illuminated by his investigation of the Prisoner's Dilemma. Probably most important is that social environment dictates who wins PD tournaments of this nature. Traditional approaches to game theory emphasize the relative competitiveness of each strategy, leading to the prediction that the strategy that comes out on top most frequently when playing against alternative strategies should be the winner. But TFT never beats an individual opponent head-to-head: at best it ties, when paired with a strategy as nice as itself, and it loses narrowly to exploitative strategies. Yet it amasses the greatest overall score across the games it plays. While exploitative strategies may win by a slight margin against TFT, they score abysmally when paired against other exploitative strategies. Strategies that are too nice do fine when paired with TFT, but are subject to severe losses when exposed to more exploitative strategies. Only the middle ground occupied by TFT, which is nice but also firm, wins in the round-robin format of the tournament.
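A back-of-the-envelope calculation with the standard payoffs and 200-round matches (my own worked example, not one taken from the book) makes the point: always-defect "wins" its pairing with TFT, yet TFT's total across a mixed pool is far higher.

```python
# Back-of-the-envelope arithmetic with the standard payoffs (R=3, T=5, S=0, P=1)
# and 200-round matches. ALWAYS DEFECT "wins" its pairing with TFT, but in a
# pool containing both types TFT's total dwarfs it.
rounds = 200
tft_vs_tft = 3 * rounds                 # 600: mutual cooperation throughout
tft_vs_alld = 0 + 1 * (rounds - 1)      # 199: exploited once, then mutual defection
alld_vs_tft = 5 + 1 * (rounds - 1)      # 204: one temptation payoff, then punishment
alld_vs_alld = 1 * rounds               # 200: mutual defection throughout

print(tft_vs_tft + tft_vs_alld)    # 799: TFT's total against one TFT and one ALL-D
print(alld_vs_tft + alld_vs_alld)  # 404: ALL-D's total against the same pair
```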
Axelrod's findings have profound importance for evolutionary biology. Primarily they have been cited as strong theoretical support for the idea of reciprocal altruism (Trivers 1971). I believe this is appropriate, but there is far more to learn from Axelrod's theoretical explorations. I think that the least-appreciated facet of what Axelrod shows is the critical importance of social environment. In these sorts of games there is no absolute prediction of what the "optimal" strategy will be, as the outcome of interactions depends completely on the environment created by the other players. Which strategy becomes fixed in the local population will depend on the composition of the initial population, which means that each local population may have a different long-term outcome. In some cases a social process creates what one might call frequency-dependent selection, wherein no single strategy wins absolutely and a balance of different strategies results. This suggests that multiple behavioral strategies might coexist, an idea recently labeled social heterosis (Nonacs and Kapheim 2007, Nonacs and Kapheim 2008).
The discovery that TIT-FOR-TAT is a highly-robust strategy suggests that many organisms could display behaviors that maximize social cooperation. If the winning strategy was highly complex, we might conclude that only animals with sophisticated cognition would be capable of avoiding exploitation and fostering population-wide cooperation. But TFT is simple, requiring only basic recognition of other players and a memory of the last interaction with that player. This simplicity suggests that a diversity of organisms might be capable of playing a TFT-like strategy, diversifying the realms across which cooperation might operate.
I am deeply intrigued by the concept of multilevel selection, in particular how it relates to the stability of human cultures. Axelrod tackles the issue in relation to how hierarchical structures are designed to maximize cooperation at each level of organization:
Hierarchy and organization are especially effective at concentrating the interactions between specific individuals. A bureaucracy is structured so that people specialize, and so that people working on related tasks are grouped together. This organizational practice increases the frequency of interactions, making it easier for workers to develop stable cooperative relationships. Moreover, when an issue requires coordination between different branches of the organization, the hierarchical structure allows the issue to be referred to policy makers at higher levels who frequently deal with each other on just such issues. By binding people together in a long-term, multilevel game, organizations increase the number and importance of future interactions, and thereby promote the emergence of cooperation among groups too large to interact individually. This in turn leads to the evolution of organizations for the handling of larger and more complex issues. (p. 130-131)
Several facets of this passage are interesting to me. First, the passage presages the idea of multilevel selection, that selection for cooperation may be operating on multiple levels. In this sense, Axelrod seems to be predicting an important development in the field studying cooperation. On the other hand, the passage seems rather naive or overly optimistic, because it fails to recognize that there may be conflicts in the interests of the different levels, and that the chief purpose of the constraints imposed by the upper levels of a hierarchical structure may be to coerce cooperation out of lower levels of organization. Axelrod certainly does not imply that cooperation is always beneficial in other sections of the book, but his assessment of hierarchy misses the more sinister nature of multilevel selection in this example. There is a lot of work still to be done to understand the many levels of organization present in human societies and how these relate to our exceptional level of large-scale cooperation.
There is also an interesting concept that emerges in the concluding chapter of the book, one that was developed earlier but is only fully illuminated at the end. This concept is the "collective stability" of different strategies for playing the PD. Though it is mentioned almost in passing, I believe it represents a very different approach to viewing evolution. As Axelrod puts it:
The power of the collective stability approach is that it allows a consideration of all possible new strategies, whether minor variants of the common strategy or completely new ideas. The limitation of the stability approach is that it only tells what can last once established; it does not tell what will get established in the first place. Since many different strategies can be collectively stable once established in a population, it is important to know which strategies are likely to get established in the first place. (p. 170-171)
Although the concept of an "evolutionarily stable strategy" (ESS) has been around for a while, Axelrod's concept cuts much more deeply than the narrow ESS concept usually employed in ecology and evolution. At its heart, considering evolution as a process dependent on "collective stability" is somewhat different from considering evolution as simply a "selective process" (unless you think that selection is for stability, which is not the way we usually talk about selection). It is interesting that a social science researcher considering system dynamics with relevance to biological evolution would focus on stability as an evolutionary goal or force.
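The flavor of the collective-stability argument is easy to reproduce numerically. With a continuation probability w (the chance of another round), an established strategy is collectively stable if no lone invader can score better against the natives than the natives score against each other. The check below uses the standard payoffs and two illustrative invaders, always-defect and a strategy that alternates defection and cooperation; the functions and the choice of invaders are my own sketch of the idea, not Axelrod's derivation.

```python
# A numerical sketch of collective stability: with continuation probability w,
# TIT-FOR-TAT (as the established strategy) resists invasion if no lone invader
# scores better against it than TFT scores against itself. Two illustrative
# invaders are checked: always-defect and alternating defect/cooperate.
R, T, S, P = 3, 5, 0, 1

def tft_is_collectively_stable(w):
    native = R / (1 - w)               # TFT vs TFT: cooperation every round
    all_d = T + w * P / (1 - w)        # ALL-D vs TFT: one temptation, then punishment forever
    alt_dc = (T + w * S) / (1 - w**2)  # alternating D, C vs TFT: repeating cycle of T then S
    return native >= all_d and native >= alt_dc

# With these payoffs the threshold works out to w >= 2/3:
print(tft_is_collectively_stable(0.6), tft_is_collectively_stable(0.7))  # False True
```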
Beyond biology there are also interesting applications to both history and politics in this book. One unique thing about Axelrod is that he not only bravely applies his findings to both biologically- and culturally-evolved phenomena, but does so while maintaining a credible understanding of both realms. He discusses how, during the trench warfare of World War I, opposing units evolved systems of reciprocal restraint. He also has some interesting thoughts on how his findings should be applied by those who play PD-like games (pretty much anyone whose social interactions are not zero-sum) and by the policy-makers who create the social rules and regulations that essentially determine the payoff matrix for one-on-one interactions.
My edition of Axelrod's book comes with a new foreword by Richard Dawkins, which is well-written and gives a nice account of the importance of the book. Of course I also see a little irony in the connection of Dawkins with Axelrod's work, as so much of the work inspired by selfish gene theory has ignored the frequency-dependence, spatial structuring, and environmental contingency central to Axelrod's findings. I cannot entirely lay the blame for this misinterpretation of selfish gene theory on Dawkins' doorstep, but more people claiming discipleship to Dawkins ought to read Axelrod's book with care.
Beyond covering a lot of ground, the book is also really well-written, explaining each concept in a concise but thorough manner to a reader who is expected to be mildly mathematically-savvy and modestly familiar with biological phenomena. A political scientist whose interests veered into the biological realm (which frankly ought to happen more often), Axelrod chose to write for all his potential audiences. Because of the interdisciplinary nature of this work, it had to be written in an accessible manner.
My major impression after reading this was "why haven't we accomplished more since the mid-eighties?" Axelrod was dealing with very little computing power and a tiny computer-literate community. There was no internet at the time, so he (get this!) solicited remote participation in his tournaments through advertisements in magazines. Obviously this work could not have been done without computers, but I wonder why we have not built more thoroughly upon these findings now that computers are cheap, powerful, and ubiquitous. What seems pretty clear is that computing power alone is not what is needed to make meaningful models of this type. Disappointingly, far too many current-day theoretical models designed to look at social behaviors are more simplistic than Axelrod's: they do not allow for complex contingent behavior, they assume that players have knowledge of global information, and they model populations as well-mixed. Given Axelrod's findings that these factors greatly influence social behavior, it is striking how little we have improved on his models.
On the flip side, the legacy of Axelrod’s work can be clearly seen in the near-ubiquity of the Prisoner’s Dilemma (PD) in studies of cooperation. Although reading his work and seeing how broadly the PD model can be applied to consider potentially-reciprocal social interactions renews my faith in the model, I wonder whether it deserves to be so central to theories of cooperation. In particular I am concerned about the assumption that all interactions are dyadic and symmetrical, which probably fails to consider the dynamics of important behaviors such as group hunting (from canids to sailfish to cetaceans), cooperative breeding, microbial cooperation, and much of human social and political behavior. Other models such as the Volunteer’s Dilemma do provide insight into more generalized communal cooperation, but I have not seen these models explored with the investigative clarity applied to the PD by Axelrod. There is a lot to be emulated and extended in Axelrod’s classic work.
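For contrast with the dyadic PD, the Volunteer's Dilemma is an n-player game in which everyone receives a benefit if at least one player pays a cost to volunteer, and no one receives anything if nobody volunteers. A minimal sketch follows; the particular benefit and cost values are arbitrary placeholders of my own choosing.

```python
# A minimal sketch of the n-player Volunteer's Dilemma: if at least one player
# volunteers (paying cost c), every player receives benefit b; if nobody
# volunteers, everyone gets nothing. The values of b and c are placeholders.
def volunteers_dilemma_payoffs(volunteered, b=2.0, c=1.0):
    """volunteered: one boolean per player. Returns each player's payoff."""
    if any(volunteered):
        return [b - c if v else b for v in volunteered]
    return [0.0 for _ in volunteered]

print(volunteers_dilemma_payoffs([False, True, False]))  # [2.0, 1.0, 2.0]
```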
Altruism, Behavioral Ecology, Books, Coevolution, Cooperation, Cultural Evolution, Evolution, Evolutionary Modeling, Game Theory, Human Evolution, Individual-based Models, Interdisciplinarity, Multilevel Selection, Mutualism, Political Science, Public Policy, Reciprocity, Sociology, Spatially Explicit Modeling