One area of commonality is that Kahneman and Gigerenzer (and I assume, my imagined “classical economist”) share a belief that a better understanding of statistical thinking is important in making better decisions and evaluating information. But, Gigerenzer is much more hopeful that this can actually be communicated/learned, and provides more accessible tools for the statistically uneducated layperson (like myself).
In this article, we summarized a vision of human nature based on an adaptive toolbox of heuristics rather than on traits, attitudes, preferences, and similar internal explanations. We discussed the progress made in developing a science of heuristics, beginning with the discovery of less-is-more effects that contradict the prevailing explanation in terms of accuracy-effort trade-offs. Instead, we argued that the answer to the question “Why heuristics?” lies in their ecological rationality, that is, in the environmental structures to which a given heuristic is adapted.
peddling expert witnesses, “Heuristics and Biases” (Gilovich, Griffin, and Kahneman) stands out as one I found especially enlightening. Recently I suggested it to my eldest son, pulled it off the shelf, and flipped through to the chapter on anchoring (a particularly weird heuristic) to use as an example of the perils of automated decision-making. Thank you, Keith, for that Rubin et al. article.
With a higher accepted risk of error (e.g., 10 percent), some research hypotheses would have been supported. Nonetheless, this study contributes to credibility research on Wikipedia in the following respects. First, it revealed that students did indeed use various peripheral cues when they were uncertain about the believability of Wikipedia articles. Second, it is among the first attempts to understand non-verification behavior, and it offers new insights into that behavior by employing the theory of bounded rationality, despite only partially confirming the theory.
A third class of heuristics, fast-and-frugal trees, is designed for categorization; such trees are used, for instance, in emergency units to predict heart attacks and to model bail decisions made by magistrates in London courts. In such cases, the risks are not knowable, and professionals hence face uncertainty. To better understand the logic of fast-and-frugal trees and other heuristics, Gigerenzer and his colleagues map their concepts onto those of well-understood optimization theories, such as signal-detection theory.
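The logic of a fast-and-frugal tree is simple enough to sketch in a few lines: each node checks one cue, one answer exits immediately with a decision, and the other answer descends to the next cue. The sketch below is a minimal illustration in Python; the cue names and thresholds are hypothetical placeholders loosely inspired by the heart-attack triage example, not the validated tree used in actual emergency units.

```python
def fast_and_frugal_triage(patient):
    """Classify a patient as 'coronary care' or 'regular ward'.

    Illustrative sketch only: the cues below are made-up stand-ins
    for the kind of binary questions a real triage tree asks.
    """
    # Cue 1: an elevated ST segment exits straight to coronary care.
    if patient["st_segment_elevated"]:
        return "coronary care"
    # Cue 2: without chest pain as the chief complaint, exit to the ward.
    if not patient["chest_pain_chief_complaint"]:
        return "regular ward"
    # Cue 3: any additional risk factor decides the remaining cases.
    if patient["other_risk_factor"]:
        return "coronary care"
    return "regular ward"

# Example usage with a made-up patient record.
patient = {
    "st_segment_elevated": False,
    "chest_pain_chief_complaint": True,
    "other_risk_factor": True,
}
print(fast_and_frugal_triage(patient))  # -> coronary care
```

Note how the tree ignores most available information: at most three cues are consulted, and a single cue can settle the decision, which is exactly what makes the heuristic fast and frugal.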
In the end, I feel a little more “risk savvy” after reading the book. Gigerenzer’s book serves well in stretching the reader’s understanding of risk and probability. According to the common view, we humans are probability-blind and predictably irrational. The author provides some useful tools for dealing with risk and uncertainty, arguing that it is perfectly possible to overcome our seemingly hardwired cognitive biases. Three important angles on probability are discussed in the book.
More worryingly, defensive decision-making may kill thousands of people every year. As Gerd Gigerenzer explains in his new book Risk Savvy (already my candidate for the best book of the year), a vast proportion of medical tests, interventions and referrals are unnecessary. They make no sense statistically, but the doctor is instinctively afraid he or she might be sued for not performing them. Heuristics are efficient cognitive processes that ignore information. In contrast to the widely held view that less processing reduces accuracy, the study of heuristics shows that less information, computation, and time can in fact improve accuracy.
I particularly loved the ending of the book, where he gently admonishes the concept of nudging (whose most prominent faces are Richard Thaler and Cass Sunstein) by writing, “As a general policy, coercing and nudging people like a herd of sheep instead of making them competent is not a promising vision of democracy.” This is a book that should be revered and shared by self-directed learners who savor the opportunity to become more informed and appreciate acquiring knowledge that makes life just a little bit easier. Plus, if you are a fan of schadenfreude, you can see financiers, physicians, and lawyers taken down a few notches, as many of them are naive about how to interpret risk. This book is hard to rate objectively.
Formal models of heuristics
Although variance is likely to be the dominant source of error when observations are sparse, it is nevertheless controllable. This analysis has important implications for the possibility of general-purpose models. To control variance, one must abandon the ideal of general-purpose inductive inference and instead embrace, to one degree or another, specialisation (34). Put simply, the bias-variance dilemma shows formally why a mind can be better off with an adaptive toolbox of biased, specialised heuristics. A single, general-purpose tool with many adjustable parameters is likely to be unstable and incur greater prediction error as a result of high variance.
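The bias-variance point can be seen in a toy simulation. Below is a minimal sketch, assuming a simple shrinkage estimator as the stand-in for a "biased, specialised" tool: with sparse observations, the plain sample mean is unbiased but high-variance, while shrinking it toward a fixed prior guess adds bias yet can lower total prediction error. The true mean, prior guess, and shrinkage weight are all illustrative choices, not values from the source.

```python
import random

random.seed(42)
TRUE_MEAN = 1.0      # the quantity we try to estimate
PRIOR_GUESS = 0.0    # the biased tool's built-in assumption
N_OBS = 3            # sparse observations, where variance dominates
TRIALS = 20_000
SHRINK = 0.5         # weight placed on the prior guess (illustrative)

mse_plain = mse_shrunk = 0.0
for _ in range(TRIALS):
    sample = [random.gauss(TRUE_MEAN, 2.0) for _ in range(N_OBS)]
    mean = sum(sample) / N_OBS
    # Biased estimator: pull the sample mean halfway toward the prior.
    shrunk = SHRINK * PRIOR_GUESS + (1 - SHRINK) * mean
    mse_plain += (mean - TRUE_MEAN) ** 2
    mse_shrunk += (shrunk - TRUE_MEAN) ** 2

print(f"MSE, unbiased sample mean: {mse_plain / TRIALS:.3f}")
print(f"MSE, biased shrinkage:     {mse_shrunk / TRIALS:.3f}")
```

With only three observations, the shrinkage estimator's reduced variance more than compensates for its bias, so its mean squared error comes out lower; as the number of observations grows, the comparison reverses and the unbiased estimator wins, which is the "specialisation pays only in the right environment" point in miniature.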