Scientific American “Why We Help”
Posted 21 Jun 2012 / The July issue of Scientific American features a cover story written by Martin A. Nowak called “Why We Help”. This very short article contains a brief review of Nowak’s “five rules” for cooperation, a little bit of connection to experimental work in real organisms, and some hazy conjecture concerning what makes humans cooperate. It seems as though every eight or so years an alarm rings at Scientific American headquarters and some editor is reminded to seek out an article on cooperation. Nowak is a favorite, having produced a number of previous articles (Nowak et al. 1995, Sigmund et al. 2002). Unfortunately, there is very little novel content here: previous articles went into more depth, actually exploring and explaining the mechanisms that allow cooperation to evolve, whereas this one reads more like a cheerleading session for Nowak’s scientific prowess and perspective.
The article starts off by explaining the prisoner’s dilemma, which Nowak has championed throughout his career as the simplest depiction of a social dilemma and has used as the basis for numerous investigations built on computer simulations. The article then provides the reader with a repackaging of his “Five rules” paper (Nowak 2006), with a little bit of promotion for his book SuperCooperators thrown in for good measure. In this review he makes sure to highlight, but not explain, his objections to kin selection theory. The article also briefly describes public goods games and the tragedy of the commons.
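For readers who have never seen the game written down, here is a minimal sketch of the one-shot prisoner’s dilemma payoff structure; the particular payoff values are the usual textbook choices, not numbers taken from Nowak’s article:

```python
# Minimal one-shot prisoner's dilemma, with illustrative payoffs T > R > P > S.
# These particular numbers (5, 3, 1, 0) are a common textbook choice, not Nowak's.

PAYOFFS = {  # (my move, opponent's move) -> my payoff
    ("C", "C"): 3,  # reward for mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # punishment for mutual defection (P)
}

def best_response(opponent_move):
    """Return my payoff-maximizing move against a fixed opponent move."""
    return max("CD", key=lambda my_move: PAYOFFS[(my_move, opponent_move)])

# Defection is the best response to either move, yet mutual defection (1, 1)
# leaves both players worse off than mutual cooperation (3, 3) -- the dilemma.
print(best_response("C"), best_response("D"))  # -> D D
```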
Perhaps the most interesting section of this article is the one in which Nowak tries to apply his brief introduction to humans. He opens the paper with the example of a single (meaning without a spouse or children) worker at the Fukushima nuclear power plant who essentially sacrificed his future health by exposing himself to massive amounts of radiation so that the disaster at the plant could be mitigated. Clearly this is an act of rather strong sacrifice, not uncommon in human populations, that demands an explanation. What is Nowak’s favored mechanism by which cooperation evolves in humans? Here he explains that it is indirect reciprocity, the extensive helping among unrelated humans that is made possible by assigning and assessing reputations. This is a significant declaration, especially for someone who has been more open to group selection than others. But does it explain his paradigmatic introductory example? Not at all. If that worker at the Fukushima nuclear power plant has just sacrificed his ability to reproduce (and it would probably be a bad idea for him to have children), how can any reputational effect make up for this cost? Why employ this example if you cannot explain it? There are several evolutionary explanations for the behavior exhibited by this self-sacrificing worker:
- He is in some sense abnormal, a mutant whose behavior is likely to be purged from the population (this is the usual explanation for outliers, but is problematic given how persistent self-sacrificing behavior is in human populations);
- His behavior is the result of group selection, wherein those who found themselves in groups with more self-sacrificers in the past produced more offspring than those in groups with low rates of self-sacrifice (possible, although hard to make sense of in this context where the recipient group of this extraordinary altruism was so large and nebulous); or
- His behavior was motivated by cultural teaching and not so much his genetic propensities, and his example will inspire future generations to behave similarly when faced with analogous disasters (note that this is a gene-culture hybrid form of group selection).
Notice that the “reputational effects” required of indirect reciprocity play no real role in any of the explanations provided above.
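For concreteness, indirect reciprocity is usually modeled along the lines of the image-scoring idea: help others only if their reputation is good enough, and let your own reputation rise or fall with each decision. The toy simulation below is my own illustrative sketch of that logic, not a reproduction of any published model; all parameters are made up.

```python
import random

# Toy image-scoring model of indirect reciprocity (illustrative only).
# Each agent has a strategy k: help a recipient only if the recipient's
# image score is at least k. Helping costs the donor c, gives the recipient b,
# and raises the donor's own image score; refusing lowers it.

N, ROUNDS, b, c = 50, 2000, 1.0, 0.1
strategies = [random.choice([-5, 0, 6]) for _ in range(N)]  # -5: always help, 0: discriminate, 6: never help
scores = [0] * N
payoffs = [0.0] * N

for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    if scores[recipient] >= strategies[donor]:      # donor decides to help
        payoffs[donor] -= c
        payoffs[recipient] += b
        scores[donor] = min(scores[donor] + 1, 5)   # reputation improves
    else:
        scores[donor] = max(scores[donor] - 1, -5)  # reputation suffers

# Discriminators (k = 0) can do better than unconditional defectors here
# because their good standing attracts help from other discriminators.
```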
I think that trying to provide evolutionary explanations for particular behaviors by particular humans is a bit of a fool’s errand, so what about explaining the larger landscape of human society? Nowak takes a stab at that as well. He offers climate change as an example of a social dilemma, a well-worn but valuable and important scenario where cooperation is required to avert population-wide disaster. Citing experimental economics work by Milinski et al. (2008), Nowak claims that so long as we provide enough “authoritative information” and have our “reputation… on the line”, the Milinski paper suggests that we can avert dangerous climate change. Whoa, that is a very different conclusion from the one that I came to! Nowak leaves out the most critical finding of this paper, which is that when the probability of loss due to climate change is low, people are very unlikely to make the sacrifices necessary to prevent catastrophe. And even under the paper’s extremely limiting assumptions of small groups working collectively to prevent climate change, the general outcome is insufficient sacrifice. It is unclear how reputation (which mostly has local effects unless it can be scaled up) will have any relevance to attempts to avert climate change.
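To see why the probability of loss matters so much in that experimental design, here is a back-of-envelope expected-value comparison. The parameters are my rough recollection of the Milinski et al. (2008) setup (six players, a €40 endowment, a €120 group target, and loss probabilities of 0.9, 0.5, or 0.1), so treat the exact numbers as illustrative rather than as the paper’s own.

```python
# Back-of-envelope look at a collective-risk dilemma: if the group misses
# the contribution target, every player loses their remaining endowment with
# probability p. Parameters are my rough recollection of Milinski et al. (2008)
# and are meant to be illustrative only.

ENDOWMENT = 40.0   # euros per player
FAIR_SHARE = 20.0  # each of 6 players paying this reaches the 120-euro target

def expected_payoff(contribution, target_met, p_loss):
    """Expected take-home money for one player."""
    remaining = ENDOWMENT - contribution
    return remaining if target_met else (1.0 - p_loss) * remaining

for p in (0.9, 0.5, 0.1):
    cooperate = expected_payoff(FAIR_SHARE, target_met=True, p_loss=p)
    free_ride = expected_payoff(0.0, target_met=False, p_loss=p)
    print(f"p={p}: pay fair share -> {cooperate:.0f}, group fails -> {free_ride:.0f}")

# At p = 0.9 a failed collective effort is expected to leave you with 4 euros,
# so paying your 20-euro share (and keeping 20) is clearly worthwhile; at p = 0.1
# failure still leaves you an expected 36 euros, and free-riding looks tempting.
```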
As I have discussed before, Nowak gets into some danger zones when he extrapolates from his theoretical work. In this article he declares that cooperation is unstable, but optimistically suggests that “the altruistic spirit always seems to rebuild itself”. But how? Isn’t that the point of doing all this research into how cooperation evolves: to find out how we might apply scientific insights to the preservation of all the value that cooperation creates? There is almost nothing in this article about how to maximize reputational effects as a means of preserving or fostering cooperation (unless you find the example of a gas bill that compares your consumption to your neighbor’s compelling). It is hard to imagine anyone new to the field coming away with any sense of why we help after reading this article. I must confess that I am bummed that this is the sort of cover article that our field produces.
What’s missing from this article? Well, to be frank, everything theoretical not involving Nowak, who has a tendency to portray progress in our field as having been advanced solely by his work. This tendency becomes most acute in this article, where he leads the reader to believe that all the significant theoretical discoveries have been made by him, that his computer algorithm uncovered the value of the “tit-for-tat” strategy, and that he “discovered” (rather than categorized) the five mechanisms that lead to cooperation. If I were a reader unfamiliar with the rich literature exploring how cooperation evolves, I really might come to the erroneous conclusion that Nowak is the Darwin of cooperation (a misimpression that will only be made worse by reading SuperCooperators). Nowak might be burnishing his reputation with the public through these sorts of articles, but that reputation is going to go in a very different direction within science if he keeps publishing works like this one.
If you are interested in understanding the basic premises of the prisoner’s dilemma, you may want to check out this narrative/interactive PDF, which is part of the Evolutionary Games Infographic Project.
Hey Chris, nice response piece. I agree with your views on the Nowak phenomenon, and I had a few thoughts regarding the three evolutionary explanations you mention. The first explains the behavior as a function of mutation, which as you say is “the usual explanation for outliers.” But I disagree that the persistence of the behavior suggests that the mutation explanation is problematic. Kurzban (Experiments investigating cooperative types in humans, PNAS, 2004) has shown that the range of such behavior may be evidence of stable polymorphism in the population, and Verweij et al. (Maintenance of genetic variation in human personality, Evolution, 2012) argue that much of the variation in personality traits can be explained through mutation-selection balance. One thing that has always puzzled me about self-sacrificing behavior is not the persistence of the occasional self-sacrificing hero, but rather why, if group selection is accurate, we don’t see such behavior much, MUCH more often. It is in no small part because such behavior is RARE that it is heroic, which itself seems to sit awkwardly with what one might expect from a “strong” group selectionist position.
Another issue you made me wonder about is reputation effects, which you said play no real role in the examples you give. But it strikes me that whether a particular individual lives to enjoy the benefits of reputation is less important than the balance, over the evolutionary history of the trait (self-sacrificing behavior) itself, between how often its bearers survived their self-sacrificing efforts and how often they died from them. If individuals possessing the trait happened to survive at least marginally more often than they died from expressing it, selection could favor the spread of the trait on the basis of its on-average fitness-enhancing reputational effects. This is of course notwithstanding the indirect fitness benefits that could extend to kin of the fallen hero. Anecdotally, it seems that we are inclined to bestow material resources upon the relatives of casualties of war.
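Just to make that balance explicit, here is the sort of back-of-envelope condition I have in mind; the symbols and numbers are entirely made up for illustration.

```python
# Back-of-envelope version of the argument above (all numbers invented).
# Let d be the probability that expressing the self-sacrificing trait kills
# its bearer, w0 the baseline expected reproductive success, and b the extra
# success a surviving bearer gets from an enhanced reputation.

def trait_favored(d, w0, b):
    """Trait spreads (ignoring kin effects) if survivors' reputational gains
    outweigh the mortality cost: (1 - d) * (w0 + b) > w0."""
    return (1.0 - d) * (w0 + b) > w0

print(trait_favored(d=0.05, w0=2.0, b=0.5))  # True: deaths are rare, bonus is decent
print(trait_favored(d=0.50, w0=2.0, b=0.5))  # False: dying half the time is too costly
```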
What are your thoughts on this?