Evolutionary Explanations for Altruism and Morality: Some Key Distinctions
Published 14 May, 2012
Last Thursday through Saturday, I attended a workshop entitled Positive Models and Normative Ideals of Social Cooperation at Princeton University. I was asked to write a précis for the workshop on the evolution of altruism and morality. I assumed as I wrote it that the short piece would be read by people with a diversity of backgrounds, so I tried to keep things relatively simple. The organizers of the workshop don’t intend to publish the set of pieces written for it, so I thought I would post mine here, slightly edited from the original. The other papers, including two from Steve Pinker, with whom I shared the first session, are also available on the page I linked to above.
My goals for this brief essay are necessarily modest. First, I discuss some key distinctions surrounding the phenomena one might try to explain with respect to altruism, cooperation, and morality. Second, I discuss some candidate explanations for these phenomena. Note that the presentation here is not intended to veridically represent the views of all practitioners in the relevant fields; it should be understood that this area remains controversial, and there are a diversity of views.
Here are three different questions that one might ask surrounding altruism:
1) Phenotypic design. Why are organisms’ parts organized to deliver benefits to other organisms?
2) Psychology & phenomenology. Why do organisms (feel, to themselves, or appear, to others, as though they) choose to deliver benefits to other organisms?
3) Observed behavior. Why do organisms occasionally act in ways that benefit other organisms at a cost to themselves?
Distinguishing these questions directs attention to different sorts of phenomena and different sorts of explanations. Consider the well-known photograph of an upstream-swimming salmon flopping into the open jaws of a waiting bear (Figure 1). In this transaction, the fish has endured a fitness cost (death) and the bear has benefited (calories), an act of “altruism” when altruism is defined in terms of the behavior (as in question 3). However, no part of salmon physiology is designed for bear-feeding (as in question 1). (To put it another way, the genes that cause the salmon to swim upstream in the way that salmon do did not replicate faster than alternative alleles by virtue of their having caused their owners to be eaten by bears.)
In contrast, mammary glands located in female bears are altruism devices. They can be recognized as such because these tissues are organized in such a way that they elegantly execute their function of delivering calories to offspring (Williams, 1966). They contain highly nutritious solutions and tubes that extend from reservoirs of this solution to the exterior world to afford efficient delivery to suckling offspring. Altruism devices can be recognized by investigating their properties and relating these properties to a putative benefit-delivery function.
Benefit-delivery mechanisms can evolve through diverse pathways. Recently, Hansen et al. (2012) showed that the distinctive triangular white markings on the petals of irises function to guide the proboscis of insects so that these insects can accurately position themselves to obtain the nectar within the flower; this design is favored because delivering this benefit to insects facilitates pollination. (I wrote a little post about this.)
Humans, of course, are biologically striking in the array of benefits they confer on other humans. Given the above distinctions – and bearing in mind Adam Smith’s admonition regarding butchers and brewers – not all acts that result in benefits to others are necessarily produced by altruism devices. They might be, but such issues must be arbitrated empirically.
Still, the human mind does, as an empirical matter, appear to contain altruism mechanisms. Humans deliver benefits to close relatives (Burnstein et al., 1994), and humans form “friendships,” characterized by the delivery of benefits of various sorts, a robust phenomenon which points to the existence of altruism mechanisms (Silk, 2003). Perhaps most strikingly, humans endure costs to deliver benefits to others, as in “group” activities such as barn raisings and warfare. Candidate explanations for these behaviors are reviewed briefly below.
Moral Devices & Punishment.
Historically, benefit delivery by humans and “morality” have been blurred; Darwin identified the question of why people cooperate with the question of why people are “moral” or “virtuous.” Here I distinguish between these two phenomena, leaving as an open possibility that they are tightly related to one another.
Most organisms show little interest in acts by conspecifics that don’t affect them. In sharp contrast, humans identify others’ acts as “wrong,” and reliably indicate a desire that costs be imposed on those who commit such acts; this is the case for unrelated individuals, even those in other groups, and the desire for punishments extends even to acts that do not, in themselves, do anyone any harm (e.g., using prohibited words). Some take this as the central puzzle of morality: why do people evaluate acts on the right/wrong dimension, and why is there an accompanying desire for the imposition of costs (i.e., punishment)?
This raises another key distinction which can be expressed with two additional questions:
4) Revenge. What is the function of a (putative) psychological mechanism designed to impose costs on organisms who recently imposed costs on (or refused to benefit) them?
5) Moralistic Aggression. What is the function of a (putative) psychological mechanism designed to impose costs on individuals who committed a “wrongful” act?
In the non-human animal world, revenge is common, and a typical interpretation of observations of vengeful behavior is deterrence (Clutton-Brock & Parker, 1995). To the extent that an organism signals that it will impose costs on another organism conditional on harm to itself, harm is deterred (McCullough et al., in press).
In contrast, third-party punishment, or moralistic aggression, is rare among non-human animals. This is not to say that it is absent; for instance, von Rohr et al. (2012) recently proposed that chimpanzees occasionally intervene as “impartial” third parties in conflicts.
A final distinction surrounding morality can be captured with an additional question:
6) Conscience. Why do human minds have (putative) psychological mechanisms that cause them, ceteris paribus, to avoid engaging in norm-violating behavior?
From these distinctions, it should be clear that one can ask both why the mind is designed to deliver benefits and also ask why the mind is designed to follow moral norms.
Are Moral Devices Altruism Devices?
What are the relationships among these questions? One prominent view is that moralistic aggression (5) helps to explain altruism in human groups. Because this idea is the specialty of another workshop participant (Boyd & Richerson, 1992, 2005), I will not discuss it in any depth.
It should be clear that an explanation for moralistic aggression (5) has the appealing property that such an explanation will naturally illuminate conscience (6). That is, once we have explained why humans impose costs on those who commit “wrongful” acts, then we should simultaneously be able to explain why people choose not to commit such acts: because doing them will lead to punishment. That is, conscience mechanisms, psychological systems that cause people to avoid moralized acts, can be understood as defense systems in a social ecology that includes moralistic aggression (DeScioli & Kurzban, 2009).
While these distinctions seem very straightforward, it is important to note that the scholarly literature at the intersection of evolution and morality occasionally blurs the lines between these questions. Following Darwin, some contemporary researchers have suggested that explanations for morality just are explanations for altruism (Wright, 1994). Similarly, Haidt and colleagues (e.g., Haidt & Joseph, 2008) take questions surrounding the evolution of morality to be answered by explanations for why people choose certain behaviors over others (that is, by theories of conscience); such explanations do not in themselves address moralistic aggression, and implicitly assume that the explanations for conscience and moralistic aggression are the same.
Volumes have been written on explanations for morality and altruism among humans, so this short précis necessarily provides only the most superficial account of these explanations.
Kin selection (Hamilton, 1964) explains why some features of organisms are designed to deliver benefits to other organisms. This theory explains, specifically, why organisms have design features that cause them to deliver benefits at a cost to organisms closely related by descent. The theory of kin selection explains the structure of mammary glands, for example.
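The logic of kin selection is often summarized by Hamilton’s rule: an allele for altruism can spread when r × b > c, where r is the coefficient of relatedness, b the benefit delivered, and c the cost endured. A minimal sketch (the function name and numerical values are my own, purely illustrative):

```python
def favored_by_kin_selection(r, b, c):
    """Hamilton's rule: an allele causing altruistic behavior spreads
    when the relatedness-weighted benefit exceeds the cost (r*b > c)."""
    return r * b > c

# Helping a full sibling (r = 0.5): a 1-unit cost is favored only if
# the sibling gains more than 2 units of benefit.
print(favored_by_kin_selection(0.5, 3.0, 1.0))  # True
print(favored_by_kin_selection(0.5, 1.5, 1.0))  # False
```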
Reciprocal altruism (Trivers, 1971) is a second well-known theory that explains how organisms can come to be designed to deliver benefits to others. This theory explains how organisms can come to have mechanisms designed to deliver benefits to other organisms if, over the evolutionary history of the organism in question, certain conditions were met. In particular, such benefit delivery systems can persist if the delivery of benefits reliably led to return benefits. Some have argued that the human mind contains mechanisms selected by virtue of reciprocal altruism (Cosmides & Tooby, 1992); some have proposed that friendship psychology, for example, is a manifestation of the effects of reciprocal altruism (Shackelford & Buss, 1996).
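The core condition (benefits delivered now must reliably lead to return benefits later) can be illustrated with a toy exchange, a sketch of my own with made-up payoffs rather than a model from the cited papers: against a partner who only repays help received on the previous round, a reciprocator out-earns a non-reciprocator whenever the benefit exceeds the cost.

```python
def payoff_against_conditional_partner(strategy, rounds=10, b=3.0, c=1.0):
    """Toy exchange: the partner extends help (worth b to us) on the
    first round by default, and thereafter only repays help we gave
    on the previous round; helping them back costs us c. 'strategy'
    is "reciprocator" (always help) or "defector" (never help)."""
    total = 0.0
    helped_last_round = True  # partner extends help on round one
    for _ in range(rounds):
        if helped_last_round:
            total += b        # partner repays last round's help
        we_help = (strategy == "reciprocator")
        if we_help:
            total -= c
        helped_last_round = we_help
    return total

print(payoff_against_conditional_partner("reciprocator"))  # 20.0
print(payoff_against_conditional_partner("defector"))      # 3.0
```

The defector pockets one unearned benefit and is then cut off; the reciprocator pays the cost every round but keeps the flow of return benefits coming.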
Costly signaling has been proposed as another possibility. This model begins with the idea that people choose others as mates, exchange partners, and allies based on visible cues that provide information about their properties and, therefore, their value as partners in these domains. Drawing on costly signaling theory (Grafen, 1990; Zahavi, 1975), some authors have suggested that there has been selection for the inclination to deliver benefits to others because displays of altruism are difficult to fake (Gintis et al., 2001; Roberts, 1998) and, therefore, provide reliable information about the altruist. In particular, the ability to deliver large benefits (at large costs) honestly signals the ability to do so, insofar as those with little to give are unable to do so. This view locates the return benefit to delivering benefits to others in the gains that are derived from winning competitions over the choice of partners across a range of social domains.
Recently, indirect reciprocity has been proposed as an additional pathway to altruism mechanisms in sizable groups (Nowak & Sigmund, 2005; Panchanathan & Boyd, 2004). Put roughly and briefly, in these models, individuals with a reputation as cooperators – having behaved altruistically at time 1 – are aided by other agents who only help those with such reputations at time 2. In this way, agents who are altruistic at time 1, maintaining a good reputation, enjoy greater fitness than those who do not, leading to selection for altruism even though the return benefits are not direct, as in reciprocal altruism.
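One way to see the logic is a toy image-scoring-style simulation (my own illustrative sketch, not the Nowak and Sigmund or Panchanathan and Boyd models; strategy names and parameter values are arbitrary): agents who help maintain a good reputation and are helped in turn, while agents who refuse lose their reputation and are excluded.

```python
import random

def simulate(n_discriminators=8, n_defectors=2, rounds=2000,
             b=2.0, c=1.0, seed=1):
    """Each round a random donor meets a random recipient.
    Discriminators pay c to give b, but only to recipients in good
    standing; defectors never help and lose their good standing the
    first time they are seen refusing. (Refusing a bad-standing
    recipient is 'justified' here and costs no reputation.)"""
    rng = random.Random(seed)
    n = n_discriminators + n_defectors
    strategy = ["disc"] * n_discriminators + ["defect"] * n_defectors
    good_standing = [True] * n
    payoff = [0.0] * n
    for _ in range(rounds):
        donor, recipient = rng.sample(range(n), 2)
        if strategy[donor] == "disc" and good_standing[recipient]:
            payoff[donor] -= c
            payoff[recipient] += b
        elif strategy[donor] == "defect":
            good_standing[donor] = False
    disc_mean = sum(payoff[:n_discriminators]) / n_discriminators
    defect_mean = sum(payoff[n_discriminators:]) / n_defectors
    return disc_mean, defect_mean

disc_mean, defect_mean = simulate()
print(disc_mean > defect_mean)  # reputation channels help to past helpers
```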
Two final explanations are genetic group selection and cultural group selection. The former explanation relies on the same logic as kin selection, above; a given allele increases its own replication rate by virtue of the fact that groups with more (genetic) altruists will do better than groups with fewer genetic altruists, leading to between-group selection for the altruism gene. The latter explanation, cultural group selection, refers to a process by which groups with certain beliefs (memes, institutions, etc.) do better than other groups because of their positive effect on the success of the group. At the risk of over-generalizing, and with some notable exceptions (e.g., E. O. Wilson, 2012; Sober & Wilson, 1998), most scholars remain unconvinced that genetic group selection has been an important force in giving rise to human altruism devices. In contrast, cultural group selection enjoys considerably broader support.
As indicated above, some have suggested a tight link between altruism devices and morality devices. These views suggest that people’s interest in punishing others is driven by the role that punishment plays in eliciting altruistic behavior. That is, the function of the desire to punish, on such views, is to increase others’ degree of cooperation. The advantage of punishment derives from the increased altruism elicited from others. This solution immediately points to the well-known second-order problem of free riding on others’ punishment, a challenge which has been addressed by a number of models (e.g., Henrich & Boyd, 2001). In slight contrast, and controversially, Price et al. (2002) have proposed that the desire to punish functions not only to induce cooperation from others, but also to reduce the difference between the punisher’s fitness and the fitness of those that are punished.
A puzzling feature of moralistic punishment is that people punish an array of acts that go well beyond acts that are uncooperative. Punished acts across cultures and eras have included not just those that harm no one – combining particular categories of food, for instance – but even acts that, if performed, would give rise to aggregate benefits (such as charging interest on loans). (See Haidt & Joseph, 2008, for one discussion of diversity in moral rules.)
While it is of course possible that moralistic punishment systems are designed to induce pro-social behavior and are simply “misfiring” as a result of cultural processes (e.g., Hagen & Hammerstein, 2006), recently DeScioli and Kurzban (2009, in press) have proposed a different route. They propose that moral condemnation is designed to choose sides when conflicts arise within a group.
DeScioli and Kurzban (in press) assume that conflicts arose frequently in groups, and that such conflicts posed a problem for third parties observing them. In many species, when such conflicts arise, observers side with the dominant, leading to a dictatorship. In humans, however, people do not always side with the more dominant individual (cf. Boehm, 1999). Instead, people frequently use the acts in question (as opposed to the formidability or status of the disputants) to choose which disputant to back. When all third parties to a conflict use the same decision rule to choose sides, they can minimize their costs because every conflict will be settled with a highly asymmetrical contest.
Choosing sides based on actions changes the problem from a public goods problem to a coordination problem in which agents are better off choosing based on actions rather than formidability (or pre-existing ties). For this reason, DeScioli and Kurzban (in press) consider morality to be best understood as a “dynamic coordination” strategy, in which third parties use the acts of the disputants to choose sides. On this view, moral judgment is designed to pick out the individual that others will side with, and moralistic punishment intuitions are designed to signal to other third parties which side one is taking, a view that connects to recent models that emphasize the role of coordination among third parties (Boyd et al., 2010).
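The coordination logic can be rendered as a toy comparison (my own sketch of the idea, not DeScioli and Kurzban’s formal treatment; rule names are mine): if bystanders back whichever disputant they are personally tied to, support splits and the fight is symmetric and costly, whereas if all apply the same action-based rule, support is unanimous and the contest is settled cheaply.

```python
def coalition_sizes(rule, loyalties):
    """Two disputants, A and B, where B committed the condemned act.
    'action' rule: every bystander sides against the wrongdoer.
    'loyalty' rule: each bystander backs the disputant ('A' or 'B')
    they are personally tied to."""
    if rule == "action":
        return len(loyalties), 0  # unanimous: lopsided, cheap contest
    side_a = sum(1 for tie in loyalties if tie == "A")
    return side_a, len(loyalties) - side_a

loyalties = ["A", "B", "A", "B", "A", "B"]
print(coalition_sizes("loyalty", loyalties))  # (3, 3): even, costly fight
print(coalition_sizes("action", loyalties))   # (6, 0): asymmetric, cheap
```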
Debate continues about the function of human morality and the relationship between moral systems and altruism systems. While historically some have regarded the two as equivalent, others have proposed that they are distinct, but functionally related to one another.
Boehm, C. (1999). Hierarchy in the forest. Cambridge, MA: Harvard University Press.
Boyd, R., Gintis, H., & Bowles, S. (2010). Coordinated punishment of defectors sustains cooperation and can proliferate when rare. Science, 328, 617-620.
Burnstein, E., Crandall, C., & Kitayama, S. (1994). Some neo-Darwinian decision rules for altruism: Weighting cues for inclusive fitness as a function of the biological importance of the decision. Journal of Personality and Social Psychology, 67, 773-789.
Bliege Bird, R., E. A. Smith, & Bird, D. W. (2001) The hunting handicap: costly signaling in male foraging strategies. Behavioral Ecology and Sociobiology, 50, 9-19.
Boyd, R., & Richerson, P. J. (1992). Punishment allows the evolution of cooperation (or anything else) in sizable groups. Ethology and Sociobiology, 13, 171-195.
Boyd, R., & Richerson, P. J. (2005). Not by genes alone: How culture transformed human evolution. Chicago: University of Chicago Press.
Clutton-Brock, T. H., & Parker, G. A. (1995). Punishment in animal societies. Nature, 373, 209-216.
Cosmides, L., & Tooby, J. (1992). Cognitive adaptations for social exchange. In J. Barkow, L. Cosmides, & J. Tooby (Eds.), The adapted mind: Evolutionary psychology and the generation of culture. New York: Oxford University Press.
DeScioli, P., & Kurzban, R. (2009). Mysteries of morality. Cognition, 112, 281-299.
DeScioli, P., & Kurzban, R. (in press). A solution to the mysteries of morality. Psychological Bulletin.
Gintis, H., Smith, E. A., & Bowles, S. (2001). Costly signaling and cooperation. Journal of Theoretical Biology, 213, 103-119.
Grafen, A. (1990). Biological signals as handicaps. Journal of Theoretical Biology, 144, 517-546.
Haidt, J., & Joseph, C. (2008). The moral mind: How five sets of innate intuitions guide the development of many culture-specific virtues, and perhaps even modules. In P. Carruthers, S. Laurence, & S. Stich (Eds.), The innate mind, volume 3 (pp. 367-391). New York: Oxford University Press.
Hagen, E. H., & Hammerstein, P. (2006). Game theory and human evolution: a critique of some recent interpretations of experimental games. Theoretical Population Biology 69, 339–48.
Hamilton, W. D. (1964). The genetical evolution of social behaviour, I & II. Journal of Theoretical Biology, 7, 1-52.
Hansen, D. M., Van der Niet, T., & Johnson, S. D. (2012). Floral signposts: testing the significance of visual ‘nectar guides’ for pollinator behaviour and plant fitness. Proceedings of the Royal Society – B, 279, 634-639.
Henrich, J., & Boyd, R. (2001). Why people punish defectors: Weak conformist transmission can stabilize costly enforcement of norms in cooperative dilemmas. Journal of Theoretical Biology, 208, 79-89.
McCullough, M. E., Kurzban, R., & Tabak, B. A. (in press). Cognitive systems for revenge and forgiveness. Behavioral and Brain Sciences.
Nowak, M., & Sigmund, K. (2005). Evolution of indirect reciprocity. Nature, 437, 1291-1298.
Panchanathan, K., & Boyd, R. (2004). Indirect reciprocity can stabilize cooperation without the second-order free rider problem. Nature, 432, 499-502.
Price, M. E., Cosmides, L., & Tooby, J. (2002). Punitive sentiment as an anti-free rider psychological device. Evolution and Human Behavior, 23, 203-231.
Roberts, G. (1998). Competitive altruism: From reciprocity to the handicap principle. Proceedings of the Royal Society B, 265, 427-431.
Shackelford, T. K., & Buss, D. M. (1996). Betrayal in mateships, friendships, and coalitions. Personality and Social Psychology Bulletin, 22, 1151-1164.
Silk, J. B. (2003). Cooperation without counting: The puzzle of friendship. In P. Hammerstein (Ed.), The genetic and cultural evolution of cooperation (Dahlem Workshop Report 90, pp. 37-54). Cambridge, MA: MIT Press.
Trivers, R. L. (1971). The evolution of reciprocal altruism. Quarterly Review of Biology, 46, 35-57.
Williams, G. C. (1966). Adaptation and natural selection. Princeton, NJ: Princeton University Press.
Sober, E., & Wilson, D. S. (1998). Unto others: The evolution and psychology of unselfish behavior. Cambridge, MA: Harvard University Press.
von Rohr, C. R., Koski, S. E., Burkart, J. M., Caws, C., Fraser, O. N., et al. (2012). Impartial third-party interventions in captive chimpanzees: A reflection of community concern. PLoS ONE, 7(3), e32494.
Wilson, E. O. (2012). The social conquest of Earth. New York: Liveright.
Wright, R. (1994). The moral animal. London: Little, Brown.
Zahavi, A. (1975). Mate selection—A selection for a handicap. Journal of Theoretical Biology, 53, 205-214.