Thinking, Fast and Slow and Radical Uncertainty



Thinking, Fast and Slow

By Daniel Kahneman
 
 
 
Book Review: New York Times, Nov 25th 2011
 

Two Brains Running

By Jim Holt

In 2002, Daniel Kahneman won the Nobel in economic science. What made this unusual is that Kahneman is a psychologist. Specifically, he is one-half of a pair of psychologists who, beginning in the early 1970s, set out to dismantle an entity long dear to economic theorists: that arch-rational decision maker known as Homo economicus. The other half of the dismantling duo, Amos Tversky, died in 1996 at the age of 59. Had Tversky lived, he would certainly have shared the Nobel with Kahneman, his longtime collaborator and dear friend.

Human irrationality is Kahneman’s great theme. There are essentially three phases to his career. In the first, he and Tversky did a series of ingenious experiments that revealed twenty or so “cognitive biases” — unconscious errors of reasoning that distort our judgment of the world. Typical of these is the “anchoring effect”: our tendency to be influenced by irrelevant numbers that we happen to be exposed to. (In one experiment, for instance, experienced German judges were inclined to give a shoplifter a longer sentence if they had just rolled a pair of dice loaded to give a high number.) In the second phase, Kahneman and Tversky showed that people making decisions under uncertain conditions do not behave in the way that economic models have traditionally assumed; they do not “maximize utility.” The two then developed an alternative account of decision making, one more faithful to human psychology, which they called “prospect theory.” (It was for this achievement that Kahneman was awarded the Nobel.) In the third phase of his career, mainly after the death of Tversky, Kahneman has delved into “hedonic psychology”: the science of happiness, its nature and its causes. His findings in this area have proved disquieting — and not just because one of the key experiments involved a deliberately prolonged colonoscopy.

“Thinking, Fast and Slow” spans all three of these phases. It is an astonishingly rich book: lucid, profound, full of intellectual surprises and self-help value. It is consistently entertaining and frequently touching, especially when Kahneman is recounting his collaboration with Tversky. (“The pleasure we found in working together made us exceptionally patient; it is much easier to strive for perfection when you are never bored.”) So impressive is its vision of flawed human reason that the New York Times columnist David Brooks recently declared that Kahneman and Tversky’s work “will be remembered hundreds of years from now,” and that it is “a crucial pivot point in the way we see ourselves.” They are, Brooks said, “like the Lewis and Clark of the mind.”

Now, this worries me a bit. A leitmotif of this book is overconfidence. All of us, and especially experts, are prone to an exaggerated sense of how well we understand the world — so Kahneman reminds us. Surely, he himself is alert to the perils of overconfidence. Despite all the cognitive biases, fallacies and illusions that he and Tversky (along with other researchers) purport to have discovered in the last few decades, he fights shy of the bold claim that humans are fundamentally irrational.

Or does he? “Most of us are healthy most of the time, and most of our judgments and actions are appropriate most of the time,” Kahneman writes in his introduction. Yet, just a few pages later, he observes that the work he did with Tversky “challenged” the idea, orthodox among social scientists in the 1970s, that “people are generally rational.” The two psychologists discovered “systematic errors in the thinking of normal people”: errors arising not from the corrupting effects of emotion, but built into our evolved cognitive machinery. Although Kahneman draws only modest policy implications (e.g., contracts should be stated in clearer language), others — perhaps overconfidently? — go much further. Brooks, for example, has argued that Kahneman and Tversky’s work illustrates “the limits of social policy”; in particular, the folly of government action to fight joblessness and turn the economy around.

Such sweeping conclusions, even if they are not endorsed by the author, make me frown. And frowning — as one learns on Page 152 of this book — activates the skeptic within us: what Kahneman calls “System 2.” Just putting on a frown, experiments show, works to reduce overconfidence; it causes us to be more analytical, more vigilant in our thinking; to question stories that we would otherwise unreflectively accept as true because they are facile and coherent. And that is why I frowningly gave this extraordinarily interesting book the most skeptical reading I could.

System 2, in Kahneman’s scheme, is our slow, deliberate, analytical and consciously effortful mode of reasoning about the world. System 1, by contrast, is our fast, automatic, intuitive and largely unconscious mode. It is System 1 that detects hostility in a voice and effortlessly completes the phrase “bread and…” It is System 2 that swings into action when we have to fill out a tax form or park a car in a narrow space. (As Kahneman and others have found, there is an easy way to tell how engaged a person’s System 2 is during a task: just look into his or her eyes and note how dilated the pupils are.)

More generally, System 1 uses association and metaphor to produce a quick and dirty draft of reality, which System 2 draws on to arrive at explicit beliefs and reasoned choices. System 1 proposes, System 2 disposes. So System 2 would seem to be the boss, right? In principle, yes. But System 2, in addition to being more deliberate and rational, is also lazy. And it tires easily. (The vogue term for this is “ego depletion.”) Too often, instead of slowing things down and analyzing them, System 2 is content to accept the easy but unreliable story about the world that System 1 feeds to it. “Although System 2 believes itself to be where the action is,” Kahneman writes, “the automatic System 1 is the hero of this book.” System 2 is especially quiescent, it seems, when your mood is a happy one.

At this point, the skeptical reader might wonder how seriously to take all this talk of System 1 and System 2. Are they actually a pair of little agents in our head, each with its distinctive personality? Not really, says Kahneman. Rather, they are “useful fictions” — useful because they help explain the quirks of the human mind.

To see how, consider what Kahneman calls the “best-known and most controversial” of the experiments he and Tversky did together: “the Linda problem.” Participants in the experiment were told about an imaginary young woman named Linda, who is single, outspoken and very bright, and who, as a student, was deeply concerned with issues of discrimination and social justice. The participants were then asked which was more probable: (1) Linda is a bank teller. Or (2) Linda is a bank teller and is active in the feminist movement. The overwhelming response was that (2) was more probable; in other words, that given the background information furnished, “feminist bank teller” was more likely than “bank teller.” This is, of course, a blatant violation of the laws of probability. (Every feminist bank teller is a bank teller; adding a detail can only lower the probability.) Yet even among students in Stanford’s Graduate School of Business, who had extensive training in probability, 85 percent flunked the Linda problem. One student, informed that she had committed an elementary logical blunder, responded, “I thought you just asked for my opinion.”

What has gone wrong here? An easy question (how coherent is the narrative?) is substituted for a more difficult one (how probable is it?). And this, according to Kahneman, is the source of many of the biases that infect our thinking. System 1 jumps to an intuitive conclusion based on a “heuristic” — an easy but imperfect way of answering hard questions — and System 2 lazily endorses this heuristic answer without bothering to scrutinize whether it is logical.
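
To spell out the probability point in standard notation (a sketch in ordinary textbook shorthand, not the book’s): for any two events A and B,

\[ P(A \text{ and } B) \le P(A), \]

since every outcome in which both A and B occur is also an outcome in which A occurs. With A as “Linda is a bank teller” and B as “Linda is active in the feminist movement,” option (2) can never be more probable than option (1), however well the description of Linda fits B.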

Kahneman describes dozens of such experimentally demonstrated breakdowns in rationality — “base-rate neglect,” “availability cascade,” “the illusion of validity” and so on. The cumulative effect is to make the reader despair for human reason.

Are we really so hopeless? Think again of the Linda problem. Even the great evolutionary biologist Stephen Jay Gould was troubled by it. As an expert in probability he knew the right answer, yet he wrote that “a little homunculus in my head continues to jump up and down, shouting at me — ‘But she can’t just be a bank teller; read the description.’ ” It was Gould’s System 1, Kahneman assures us, that kept shouting the wrong answer at him. But perhaps something more subtle is going on. Our everyday conversation takes place against a rich background of unstated expectations — what linguists call “implicatures.” Such implicatures can seep into psychological experiments. Given the expectations that facilitate our conversation, it may have been quite reasonable for the participants in the experiment to take “Linda is a bank teller” to imply that she was not in addition a feminist. If so, their answers weren’t really fallacious.

This might seem a minor point. But it applies to several of the biases that Kahneman and Tversky, along with other investigators, purport to have discovered in formal experiments. In more natural settings — when we are detecting cheaters rather than solving logic puzzles; when we are reasoning about things rather than symbols; when we are assessing raw numbers rather than percentages — people are far less likely to make the same errors. So, at least, much subsequent research suggests. Maybe we are not so irrational after all.

Some cognitive biases, of course, are flagrantly exhibited even in the most natural of settings. Take what Kahneman calls the “planning fallacy”: our tendency to overestimate benefits and underestimate costs, and hence foolishly to take on risky projects. In 2002, Americans remodeling their kitchens, for example, expected the job to cost $18,658 on average, but they ended up paying $38,769.

The planning fallacy is “only one of the manifestations of a pervasive optimistic bias,” Kahneman writes, which “may well be the most significant of the cognitive biases.” Now, in one sense, a bias toward optimism is obviously bad, since it generates false beliefs — like the belief that we are in control, and not the playthings of luck. But without this “illusion of control,” would we even be able to get out of bed in the morning? Optimists are more psychologically resilient, have stronger immune systems, and live longer on average than their more reality-based counterparts. Moreover, as Kahneman notes, exaggerated optimism serves to protect both individuals and organizations from the paralyzing effects of another bias, “loss aversion”: our tendency to fear losses more than we value gains. It was exaggerated optimism that John Maynard Keynes had in mind when he talked of the “animal spirits” that drive capitalism.

Even if we could rid ourselves of the biases and illusions identified in this book — and Kahneman, citing his own lack of progress in overcoming them, doubts that we can — it is by no means clear that this would make our lives go better. And that raises a fundamental question: What is the point of rationality? We are, after all, Darwinian survivors. Our everyday reasoning abilities have evolved to cope efficiently with a complex and dynamic environment. They are thus likely to be adaptive in this environment, even if they can be tripped up in the psychologist’s somewhat artificial experiments. Where do the norms of rationality come from, if they are not an idealization of the way humans actually reason in their ordinary lives? As a species, we can no more be pervasively biased in our judgments than we can be pervasively ungrammatical in our use of language — or so critics of research like Kahneman and Tversky’s contend.

Kahneman never grapples philosophically with the nature of rationality. He does, however, supply a fascinating account of what might be taken to be its goal: happiness. What does it mean to be happy? When Kahneman first took up this question, in the mid 1990s, most happiness research relied on asking people how satisfied they were with their life on the whole. But such retrospective assessments depend on memory, which is notoriously unreliable. What if, instead, a person’s actual experience of pleasure or pain could be sampled from moment to moment, and then summed up over time? Kahneman calls this “experienced” well-being, as opposed to the “remembered” well-being that researchers had relied upon. And he found that these two measures of happiness diverge in surprising ways. What makes the “experiencing self” happy is not the same as what makes the “remembering self” happy. In particular, the remembering self does not care about duration — how long a pleasant or unpleasant experience lasts. Rather, it retrospectively rates an experience by the peak level of pain or pleasure in the course of the experience, and by the way the experience ends.

These two quirks of remembered happiness — “duration neglect” and the “peak-end rule” — were strikingly illustrated in one of Kahneman’s more harrowing experiments. Two groups of patients were to undergo painful colonoscopies. The patients in Group A got the normal procedure. So did the patients in Group B, except — without their being told — a few extra minutes of mild discomfort were added after the end of the examination. Which group suffered more? Well, Group B endured all the pain that Group A did, and then some. But since the prolonging of Group B’s colonoscopies meant that the procedure ended less painfully, the patients in this group retrospectively minded it less. (In an earlier research paper, though not in this book, Kahneman suggested that the extra discomfort Group B was subjected to in the experiment might be ethically justified if it increased their willingness to come back for a follow-up!)
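
To put rough numbers on those two quirks (an illustrative sketch; the pain scores are invented, not Kahneman’s data): under the peak-end rule, the remembering self scores an episode roughly as the average of its worst moment and its final moment, with duration dropping out entirely:

\[ \text{remembered pain} \approx \frac{\text{peak} + \text{end}}{2}. \]

On that formula, a colonoscopy that peaks at 8 and ends at 7 is remembered at about 7.5, while one that peaks at 8 but tails off to 1 during the added minutes is remembered at about 4.5, even though the second patient endured everything the first did and more.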

As with colonoscopies, so too with life. It is the remembering self that calls the shots, not the experiencing self. Kahneman cites research showing, for example, that a college student’s decision whether or not to repeat a spring-break vacation is determined by the peak-end rule applied to the previous vacation, not by how fun (or miserable) it actually was moment by moment. The remembering self exercises a sort of “tyranny” over the voiceless experiencing self. “Odd as it may seem,” Kahneman writes, “I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.”

Kahneman’s conclusion, radical as it sounds, may not go far enough. There may be no experiencing self at all. Brain-scanning experiments by Rafael Malach and his colleagues at the Weizmann Institute in Israel, for instance, have shown that when subjects are absorbed in an experience, like watching “The Good, the Bad and the Ugly,” the parts of the brain associated with self-consciousness are not merely quiet, they’re actually shut down (“inhibited”) by the rest of the brain. The self seems simply to disappear. Then who exactly is enjoying the film? And why should such egoless pleasures enter into the decision calculus of the remembering self?

Clearly, much remains to be done in hedonic psychology. But Kahneman’s conceptual innovations have laid the foundation for many of the empirical findings he reports in this book: that while French mothers spend less time with their children than American mothers, they enjoy it more; that headaches are hedonically harder on the poor; that women who live alone seem to enjoy the same level of well-being as women who live with a mate; and that a household income of about $75,000 in high-cost areas of the country is sufficient to maximize happiness. Policy makers interested in lowering the misery index of society will find much to ponder here.

By the time I got to the end of “Thinking, Fast and Slow,” my skeptical frown had long since given way to a grin of intellectual satisfaction. Appraising the book by the peak-end rule, I overconfidently urge everyone to buy and read it. But for those who are merely interested in Kahneman’s takeaway on the Malcolm Gladwell question, it is this: If you’ve had 10,000 hours of training in a predictable, rapid-feedback environment — chess, fire-fighting, anesthesiology — then blink. In all other cases, think.

Radical Uncertainty: Decision-making for an unknowable future

By John Kay & Mervyn King

“I prefer true but imperfect knowledge, even if it leaves much indetermined and unpredictable, to a pretence of exact knowledge that is likely to be false.”

Friedrich von Hayek, Nobel Prize lecture, 1974

Preface

Forty years ago, we wrote a well-received book, The British Tax System, describing the failures – intellectual and practical – of the tax system. We were neither academic scribblers inventing a tax system from scratch nor tax accountants engrossed in excruciating detail. Instead, as young academics we set out to look carefully at how the tax system actually worked in practice and then to design improvements based on a small number of carefully thought through principles. Forty years later, we discovered that we had independently come to the view that economics as a whole faced a similar challenge and was in need of a fresh look. This book is the result.

The British Tax System sold well and went into several editions. But then our careers went in different directions. John became the Director of the Institute for Fiscal Studies, started a successful consulting company focusing on business economics, was the first director of the Saïd Business School at Oxford University, and was for twenty years a columnist for the Financial Times. Mervyn became an academic in various universities in the UK and US before joining the Bank of England as Chief Economist and later Governor from 2003 to 2013.

During those forty years we saw at first hand the power of economics as a way of approaching practical problems, and also its limitations. As students and academics we pursued the traditional approach of trying to understand economic behaviour through the assumption that households, businesses and indeed governments take actions in order to optimise outcomes. We learnt to approach economic problems by asking what rational individuals were maximising. Businesses were maximising shareholder value, policy makers were trying to maximise social welfare, and households were maximising their happiness or ‘utility’. And if businesses were not maximising shareholder value, we inferred that they must be maximising something else – their growth, or the remuneration of their senior executives.

The limits on their ability to optimise were represented by constraints: the relationships between inputs and outputs in the case of businesses, the feasibility of different policies in the case of governments, and budget constraints in the case of households. This ‘optimising’ system of behaviour was well suited to the growing use of mathematical techniques in the social sciences. If the problems facing businesses, governments and families could be expressed in terms of well-defined models, then behaviour could be predicted by evaluating the ‘optimal’ solution to these problems.
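
As a stylised illustration of what such a well-defined model looks like (not an example taken from the book): a household choosing quantities of two goods so as to maximise its utility subject to a budget constraint would be written as

\[ \max_{x_1,\, x_2} \; u(x_1, x_2) \quad \text{subject to} \quad p_1 x_1 + p_2 x_2 \le m, \]

where the p’s are prices and m is income; the predicted behaviour is then simply whatever solves this problem.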

Although much can be learnt by thinking in this way, our own practical experience was that none of these economic actors were trying to maximise anything at all. This was not because they were stupid, although sometimes they were, nor because they were irrational, although sometimes they were. It was because an injunction to maximise shareholder value, or social welfare, or household utility, is not a coherent guide to action. Business people, policy-makers and families could not even imagine having the information needed to determine the actions that would maximise shareholder value, social welfare or household utility. Nor could they know whether they had succeeded in doing so after the event. Honest and capable executives and politicians, of which there are many, try instead to make incremental decisions which they think will improve their business, or make the world a better place. And happy households are those in which the family members work together to ensure that tomorrow is at least as good as today.

Most economists would readily acknowledge that no one actually engages in the kinds of calculation which are described in economic models. But since the work of Paul Samuelson, economists have relied on the claim that if people observed certain axioms which constituted ‘rationality’ they would – unconsciously – be optimising, rather as Molière’s M. Jourdain had been talking prose for forty years without knowing it. And when this axiomatic approach is applied to consumer behaviour, as it was by Samuelson, the method is more fruitful than the sceptical observer might expect.

But we show in this book that the axiomatic approach to the definition of rationality comprehensively fails when applied to decisions made by businesses, governments or households about an uncertain future. And this failure is not because these economic actors are irrational, but because they are rational, and – mostly – do not pretend to knowledge they do not and could not have. Frequently they do not know what is going to happen and cannot successfully describe the range of things that might happen, far less know the relative likelihood of a variety of different possible events.

The financial crisis of 2007-08 brought home the intellectual failure of optimising models to capture the disruptive behaviour that results from confronting an unknowable future. But this is not another book about that financial crisis, or even another book about economics, although we believe that the implications for the study of economics are considerable. It is a book about how real people make choices in a radically uncertain world, in which probabilities cannot meaningfully be attached to alternative futures.

As we wrote this book, and discussed our ideas with friends and colleagues, we encountered very different reactions from general readers, on the one hand, and specialists, on the other. Most people find the concept of radical uncertainty natural and indeed obvious. For them, the challenge is not to accept the existence of radical uncertainty but to find ways of coping with it. We hope they will find the answers to that challenge in the chapters that follow. Many people who have been trained in economics, statistics or decision theory, however, find it difficult to accept the centrality of radical uncertainty. And to these we need to add some who work in computer science and artificial intelligence – or who have simply read enough about these things to be caught up in the wave of popular enthusiasm for the style of reasoning at which computers excel.

In trying to persuade those two different audiences of the importance of radical uncertainty, the risk is that one thinks we are flogging a dead horse; the other that we are flogging the winner of the Kentucky Derby by decrying a set of techniques which has transformed our thinking in economics, statistics, decision-making and artificial intelligence. We hope that the general readers will nevertheless enjoy the spectacle of the flogging and that specialists will feel at least some of the sting of the lash.

 
 

Thinking, Fast and Slow | Daniel Kahneman | Talks at Google
Daniel Kahneman: Thinking Fast vs. Thinking Slow | Inc. Magazine

10 Questions for Nobel Laureate Daniel Kahneman
Radical Uncertainty: How do we make good decisions in a radically uncertain world?

John Kay and Mervyn King on Radical Uncertainty 8/3/20
Radical Uncertainty: book launch with Mervyn King and John Kay

