In his new book, Thinking, Fast and Slow, Princeton professor and Nobel Laureate Daniel Kahneman describes how, as a psychologist serving in the Israeli army, he selected candidates for officer training based on their success in a series of leadership tests. Despite his own and his colleagues' confidence in their choices, “the evidence was overwhelming”: they were no good at predicting success at all. Kahneman explains:
You may be surprised by our failure: it is natural to expect the same leadership ability to manifest itself in various situations. But the exaggerated expectation of consistency is a common error. We are prone to think that the world is more regular and predictable than it really is, because our memory automatically and continuously maintains a story about what is going on, and because the rules of memory tend to make that story as coherent as possible and to suppress alternatives. Fast thinking is not prone to doubt.
The confidence we experience as we make a judgment is not a reasoned evaluation of the probability that it is right. Confidence is a feeling, one determined mostly by the coherence of the story and by the ease with which it comes to mind, even when the evidence for the story is sparse and unreliable. The bias toward coherence favors overconfidence. An individual who expresses high confidence probably has a good story, which may or may not be true.
I coined the term “illusion of validity” because the confidence we had in judgments about individual soldiers was not affected by a statistical fact we knew to be true — that our predictions were unrelated to the truth. This is not an isolated observation. When a compelling impression of a particular event clashes with general knowledge, the impression commonly prevails. And this goes for you, too. The confidence you will experience in your future judgments will not be diminished by what you just read, even if you believe every word.
The degree to which our narratives overwhelm the evidence in front of us is a strong argument for focusing closely on narrative-making itself in efforts to improve decision-making. One source of cognitive coherence surely lies in our mainstream meta-narrative about the process of decision-making itself. As psychologist Lee Roy Beach has pointed out, “decision-makers … frame [decisions] as choices of courses of action to which they expect to devote time, energy, and good judgment in an effort to make sure they turn out the way they want them to. In short, they view decisions as tools for actively managing the future so it conforms to their values and preference.”
In narrative terms, decision-makers see themselves as omniscient authors able to hover over the scene at hand. This doesn't make them arrogant: there are many cultural cues that promote this model, not least the enduring Enlightenment view of individuals as autonomous actors who can drive their own fates.
Not that we can't, to a degree, but doing so takes some negotiating with a messier, more complex environment than the omniscient-author model permits. We could view the decision-making scene differently: as one in which those making decisions are not only authors, but always also actors, and never omniscient. This immediately forces us to see the unfolding narrative differently, as one in which multiple actors jostle to control next steps, and in which biases and partial knowledge are an inevitability, not something that can be overcome with greater quantities of data.