
The Evolutionary Psychology Blog

By Robert Kurzban

Robert Kurzban is an Associate Professor at the University of Pennsylvania and author of Why Everyone (Else) Is A Hypocrite. Follow him on Twitter: @rkurzban

Mice Managing Mistakes

Published 24 October, 2012

Last week I attended a conference called “Lying: The Making of the World” at Arizona State University. Speakers were drawn from across both the sciences and humanities, from biology to literature, and included people likely to be familiar to readers of this blog, including Robert Trivers, Martie Haselton, Bill von Hippel, and, cough, me. The subjects of the presentations varied widely, from deceptive coloration in animals to a gripping account of the hoax in which an American man posed as a lesbian Syrian woman in his blog, including posting a report that the fictional woman had been kidnapped.

Leaf insect. Looking like a leaf.

I had a number of exchanges with people at the conference that left me with a feeling I have had before: I seem to disagree about some issues with people with whom I’m usually in reasonably close agreement and, further, it seems difficult to identify where our views diverge. So, although these issues have been addressed in published work (see the references section at the end), I thought I’d try again here because I’m still not sure exactly where the disagreements lie.

The issue at stake surrounds decision making, and the problem of what many in the evolutionary community have come to call “error management.” The basic question is the nature of systems designed to make good (adaptive) decisions under uncertainty given diverse cost/benefit profiles. The initial paper by Haselton and Buss stimulated a tremendous amount of work, launching what has become a robust and productive research area.

Here, I will argue that there are two distinct and distinguishable ways to solve the basic problem of managing errors. This is my main point: simply distinguishing these two methods. As an ancillary matter, I’ll suggest that one of these ways has advantages and should, everything else equal (a potentially important caveat), be expected in actual evolved decision-making systems.

To get at these issues, I’ll use a simple example: a mouse seeing a piece of cheese, separated from her by a potentially dangerous trench. Should she risk the jump across the chasm to get the cheese or not? I frame the decision problem like any other. The potential benefit, B, is the value of the cheese, which depends on its size. The potential cost, C, is the damage from the fall if she doesn’t make it all the way across, which depends on, say, the depth of the trench. The probability of getting the cheese, p, depends on the width of the chasm, with the probability getting smaller as the trench gets wider.

The mouse – I’ll call her Minnie – only wants to try the jump when the expected value of jumping is positive. The focus on expected value quickly eliminates from consideration a decision rule in which Minnie jumps whenever the chance of making it across is better than .5. Minnie is emphatically not trying to maximize the difference between the number of times she makes it across (hits, one might say) and the number of times she fails to do so (misses). I hope it is clear that any reasonable decision rule she uses must take into account the probability of success as well as the relevant costs and benefits.

Yay, cheese!

I also wish to be clear that I am not claiming that Minnie can estimate these costs, benefits, and probabilities with certainty. Minnie can see the cheese, and estimate its size. She can see the trench, and estimate the width and depth of the pit. From these percepts she can estimate the magnitude of all the relevant parameters, but of course with error. Nonetheless, Minnie’s mind can compute, from the percept, her best estimate of the parameters. (I leave aside for this post whether Minnie can also compute the magnitude of the error in her estimates. I just note in passing that, in the limit, if we suppose that Minnie literally has no idea at all of the probability that she will be able to get across the trench, she ought to assume that the probability is .5; if one has no idea how likely each of two mutually exclusive events is, then the assumption should be that they will occur with equal probability. In such a case, she should jump when B > C, since with p = .5 the expected value of jumping, .5*B – .5*C, is positive exactly when B exceeds C.) In any case, Minnie has some means to estimate, from what she sees or, perhaps, smells, the costs, benefits, and odds of success.

So, how should she make a decision on any given day that she encounters the cheese sitting across the trench? One possibility is as follows. She could first estimate the benefit of the cheese (from its size), the cost of the fall (from the depth), and the probability of success (from the width of the trench), giving her the best possible estimate that she can make of B, C, and p. The quantity she wants to know is whether the expected value of the benefit of trying to jump (p * B) is greater than the expected value of the cost of trying to jump ((1-p) * C). Minnie’s mind could be designed to compute (p * B) – ((1-p) * C) and jump when this quantity (the expected value of jumping) is greater than zero. She jumps, then, based on this expected value computation. It should be clear that as B gets large (large cheese), Minnie will correctly choose to jump even when p is small – i.e., for wide trenches – if B is big enough relative to C. To connect to the theory at stake here, Minnie is managing her errors in such a case by avoiding potentially costly misses, choosing to jump when there is a very big piece of cheese across the way, even if the trench is wide. Holding aside any additional considerations, no decision rule can do better than this one because all it does is estimate the expected value of jumping from the information available.
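To make this first decision rule concrete, here is a minimal sketch in Python. The particular numbers are my own illustrative assumptions, not anything claimed in the post; the rule itself is just the expected-value computation described above.

    def should_jump_expected_value(p, B, C):
        """First method: jump exactly when the expected value of jumping is positive."""
        expected_value = p * B - (1 - p) * C
        return expected_value > 0

    # A large cheese can justify a risky jump: with only a 30% chance of success,
    # a benefit of 10 against a cost of 2 gives 0.3*10 - 0.7*2 = 1.6 > 0, so she jumps.
    print(should_jump_expected_value(p=0.3, B=10, C=2))   # True
    # A small cheese across the same trench does not: 0.3*1 - 0.7*2 = -1.1 < 0.
    print(should_jump_expected_value(p=0.3, B=1, C=2))    # False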

Were Minnie to run through those computations, and then add a little bit to her perception of the chances of success when the cheese is especially big, she would be making a mistake. She has already accounted for the size of the cheese in her decision rule, jumping when the cheese is big even if the odds are low. Increasing her estimate of p – increasing her “confidence” or being “optimistic” – will cause her to make some negative expected value jumps. Over time, such designs will be punished by natural selection, and such Minnie Mice will lose, on average, to appropriately confident Minnies. I invite readers who disagree with any of the material to this point to comment.

This brings me to a second way she could make her choice. She could compute B, C, and p, as before. However, she then updates her confidence about making it across – her estimate of the chance of success – from p to p’, a number that exaggerates the chance of success when B is larger than C and underestimates it when C is larger than B, and she takes p’ to be the chance of success. She then bases her decision to jump on this updated value, using the output, p’, in her decision rule. That is, she might, for instance, only jump when p’ is greater than .5. Note that she’s choosing to jump using the probability of success rather than, as in the prior case, the expected value of success. (A related method is to update p to p’, and again choose on the basis of the expected value. My analysis for this method would be the same as the one below.)

How does she do this? It should be clear that in this scenario Minnie wants to make exactly the same choices as the Minnie in the prior version. These decisions will, as we’ve seen, maximize expected value. So, she has to increase p’ – her belief about the chance of success – in such a way that it is greater than .5 (or whatever threshold is preferred) whenever the expected value of jumping is positive. She needs a transformation rule that modifies her estimate of success upward as the cheese gets larger and downward as the trench gets deeper. There are many ways to do this. One way would be to follow the same procedure as above and, if the expected value is positive, update the estimate of success to .6 (or some other value > .5). (There are other ways to compute p’ from p, B, C, and a given threshold value. This exercise is left to the reader.)
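Here is one illustrative way to implement that transformation in Python, using the simple rule just described (the values .6 and .4 are arbitrary choices on either side of the threshold, as noted above). The point is that the decision rule now consults only p’, a distorted probability, yet yields exactly the same choices as the expected-value rule.

    def distorted_confidence(p, B, C):
        """Second method: replace the true p with a false p' that exceeds .5
        exactly when the expected value of jumping is positive."""
        expected_value = p * B - (1 - p) * C
        return 0.6 if expected_value > 0 else 0.4   # arbitrary values on either side of .5

    def should_jump_confidence(p, B, C):
        p_prime = distorted_confidence(p, B, C)
        return p_prime > 0.5                        # the decision consults p', not the expected value

    # Same choices as the first method, but the stored "belief" p' is now false:
    print(should_jump_confidence(p=0.3, B=10, C=2))  # True, via p' = 0.6 (the true p is 0.3)
    print(should_jump_confidence(p=0.3, B=1, C=2))   # False, via p' = 0.4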

In sum, the two methods of deciding whether or not to jump are, first, to compute the expected value and choose based on the result of this computation or, second, to change one’s representation of the true probability of success to a false one, and choose using a decision rule that uses this false probability estimate, but still maximizes expected value.

Importantly, the value p’ is a false belief. It is an inaccurate representation of the probability of success. Minnie is wrong (but, to my way of thinking, in no interesting sense “self-deceived”) about how likely she is to succeed. Indeed, the second method above might strike some readers as perverse. The system has computed the expected value, and then thrown away a true estimate in favor of a false one. Disposing of the true belief in favor of the false one carries certain complications. For example, suppose a Minnie who uses this latter decision process is faced with a trench and sees a small piece of cheese on the other side. The benefit being small, she underestimates her chance of success, and correctly stays on her own side, (falsely) believing that she is unlikely to make it across. Now a cat comes along, and she must choose between crossing the trench and other escape options. She will underestimate how good an option jumping across is because of her false belief. Compare this to a Minnie who uses the first method. She has accurately estimated her chances, and will correctly choose the right escape route based on the correct chances of escape for each option.
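A hedged sketch of this complication, with invented numbers and an invented second decision (the choice of escape route), just to show how a stored, distorted p’ can mislead any other computation that later consults it:

    # Illustrative numbers only. Small cheese: true p = 0.6, B = 1, C = 2.
    true_p = 0.6
    p_prime = 0.4   # a second-method Minnie stores a deflated belief, since 0.6*1 - 0.4*2 < 0

    # A cat appears. Suppose the only alternative escape route succeeds half the time.
    alternative_escape_p = 0.5

    # A first-method Minnie still has the true probability and picks the better route:
    print("jump" if true_p > alternative_escape_p else "hide")    # "jump" (0.6 > 0.5)

    # A second-method Minnie consults her false belief and picks the worse route:
    print("jump" if p_prime > alternative_escape_p else "hide")   # "hide" (0.4 < 0.5)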

To summarize, one way to manage errors is to compute expected value, maintaining the best possible estimates of what is true. A second way is to introduce false beliefs, and choose on the basis of one of these false computations.

I’m not saying that this second kind of system does not, or cannot, exist. There could be reasons that systems of the second type might have evolved. For example, suppose that Minnie often jumps in view of Mickey, Donald, and Goofy, and they value mice who are brave. If Minnie projects courage by making negative expected value jumps, and this translates into fitness advantages, then this benefit might offset the cost of the false belief. (Again note, however, that Minnie could simply add reputational advantages to the Benefit side of the computation, and use the first method.) I certainly take the point that there can be value in updating others’ beliefs, and that this could influence decision making.

But, everything else equal, the first system seems, to me, to be the one that we should expect to observe in organisms. False representations, such as p’, won’t be as useful in decision making as true representations, such as p. If multiple systems, for instance, consult this probability, then the error will introduce problems for each system that consumes the false representation. More generally, false representations are less useful than true representations for decision-making purposes. Of course, false representations might be useful for other purposes, such as persuasion. I have tried to make my views on this matter as clear as I could, writing about this issue at some length in recent published work.

And, again, to reduce the chance of being misunderstood, I am not saying that all systems should function optimally or perfectly. I am saying that, ceteris paribus, the first of the two systems I discuss here should be expected.

It is for this reason that I am skeptical of arguments that suggest that people should be expected to be “overconfident” or overly “optimistic” in the service of solving the problem of managing errors. A better way to manage errors is to be correctly confident or appropriately optimistic, and choose in a way that reflects the expected value of the options available.

References

Haselton, M. G., & Buss, D. M. (2000). Error management theory: A new perspective on biases in cross-sex mind reading. Journal of Personality and Social Psychology, 78(1), 81.

Haselton, M. G., & Nettle, D. (2006). The paranoid optimist: An integrative evolutionary model of cognitive biases. Personality and Social Psychology Review, 10(1), 47-66.

Kurzban, R. (2011). Why everyone (else) is a hypocrite: Evolution and the modular mind. Princeton University Press.

McKay, R. T., & Dennett, D. C. (2009). The evolution of misbelief. Behavioral and Brain Sciences, 32(6), 493. (Please see also the comments on the target article.)

McKay, R., & Efferson, C. (2010). The subtleties of error management. Evolution and Human Behavior, 31(5), 309-319.

Pinker, S. (2011). Representations and decision rules in the theory of self-deception. Behavioral and Brain Sciences, 34(1), 35-37.

Von Hippel, W., & Trivers, R. (2011). The evolution and psychology of self-deception. Behavioral and Brain Sciences, 34(1), 1-56.

  • discoveredjoys

    I’m going to quibble here (sorry). The decision-making processes you outline are fine, but they are implicitly based on linear symbolic logic. I am far from sure that a mouse (or, for that matter, a human) would think in this clear-cut way – because it is slow.

    I seem to recall that the senses stir up a lot of neural networks, and then the excitatory and inhibitory impulses and networks combine and a decision about the most appropriate action emerges from the consensus. Much of this ‘calculation’ is unconscious, parallel, and very quick. I’m quite sure you know all this.

    But one of the consequences of this type of weighted thinking is that the memories of successfully jumping and not successfully jumping need not sum to 100%. Similarly the ‘attraction’ of the cheese may be modified by priming, either positively (by hunger, or needing to care for pups) or negatively (the smell of cat urine etc.).

    None of this changes the logic of your argument, but I suspect evolution has produced a different kind of decision process.

  • David Pinsof

    I buy this argument 100%; however, it rests on a somewhat shaky assumption that beliefs are variables in decision rules. This might be true in some cases, but it might also be wrong in others. For instance, I can believe that a nonpoisonous snake is harmless while still behaving as if the snake were harmful; and I can behave as if fast food were good for me while still believing that it is bad for me. So it seems that in these cases, beliefs can be decoupled from the decision rules that guide behavior. It’s an open question why this is the case, but I think something like this might be going on with error-management-type findings.

    For instance, say the part of Minnie that talks (the “public relations” module) is asked, on a Likert-style scale, how likely she is to succeed in jumping over a wide trench to get a large piece of cheese. Further, say that this part of Minnie does not have access to the decision rule guiding her jumping behavior. All she has access to is the expected value of jumping, which is experienced by this part of Minnie as a feeling of “wanting to jump.” Since all she has to go on is this feeling of “wanting,” she will assume that “wanting” correlates with “probability” and give you an upwardly biased probability estimate. Experimenters will then say that Minnie is “overconfident,” and that she is adaptively managing her errors, when really the experimenters were just asking the wrong part of Minnie’s brain.

    • rkurzban

      My assumption is that there is *some* representation that underlies the decision. I’m interested here in that representation. I am very sympathetic to the idea that there are, in addition to the for-the-decision representation, additional representations that have other jobs, and part of their jobs might have to do with public relations. Again, my interest here is the representation that plays a role in the decision making, allowing for the possibility that other representations are in Minnie’s mind as well.

  • Christian T

    I’m sure this point has been brought up before, but: say the trench in front of Minnie was deep enough to be fatal if she failed. How do you put a number on the cost of dying? Even if it isn’t an infinite number, shouldn’t the number be so high that the cheese would not be worth the risk even if the risk of failure were a mere 1%? It seems obvious that animal thinking could not function like that.

    • Guest

      I think you are neglecting the fact that the mouse may well die if it does not eat. So its decision rule about jumping would also include some function of how many calories (energy) it currently has, along with the probability of running into food in the near future (this would go into the B term above). If both of those things are low, then the mouse may risk its life for food. Indeed, mice probably do have to make this decision all the time when they scurry across a field in search of food – hawks are a bitch.

  • Justin

    Johnson and Fowler (2011) (http://www.nature.com/nature/journal/v477/n7364/full/nature10384.html) propose that overconfidence maximizes individual fitness even though it has a number of negative consequences. They test their theoretical proposition in a simulation.

    A major theme in my research is that individuals are often overconfident and that this often leads to negative consequences. However, a recent idea I have been struggling with is that not everyone who believes in themselves (likely with an extreme upward bias) will become successful in some area, but all people who became successful in some area exhibit overconfidence. There are a number of anecdotal accounts of this. For example, supposedly James Joyce’s Dubliners was rejected by multiple publishers and is now considered one of the greatest collections of short stories (for those not into classic Irish literature – the same is supposedly true for Harry Potter).

    However, as someone who had a lot of friends in college who wanted to be writers, I can say there are thousands of writers who think they are great and think they will publish a great book one day. Of course, almost all of them will not. Therefore, it would be a horrible strategy to go into a college classroom and tell the students they are going to be great writers one day.

    In academics, most of the top journals in our field have rejection rates above 93%. Therefore, I should rarely ever send articles to these journals because it is extremely likely they will not get in (also, the value is not exactly clear in some cases). However, I of course do send them – as does everyone else. Therefore, when I send in a paper I am overconfident.

    Is it possible that if Minnie did not falsely inflate her confidence in jumping over the cliff she would never jump and therefore die?

    I am confident that I do not know the answer yet.


Copyright 2012 Robert Kurzban, all rights reserved.

Opinions expressed in this blog do not reflect the opinions of the editorial staff of the journal.

Evolutionary Psychology - An open access peer-reviewed journal - ISSN 1474-7049 © Ian Pitchford and Robert M. Young; individual articles © the author(s)