The problem of bias is of critical importance, given that judgment pervades human experience and is crucial to decision making: "Should I accept this job?" "Should we develop a new product?" "For whom should I vote?" "Is the defendant guilty?" Decision-making errors, obviously, can be extremely costly at the personal, professional, and societal levels. Not surprisingly, considerable effort has been invested in finding ways to reduce bias.
Unfortunately, there does not appear to be an easy fix. In 1982, decision scientist Baruch Fischhoff reviewed 4 straightforward strategies for reducing bias: (1) warning subjects about the potential for bias, (2) describing the likely direction of bias, (3) illustrating biases to the subject, and (4) providing extended training, feedback, coaching, and other interventions. Fischhoff concluded that the first 3 strategies yielded only limited success, and even intensive, personalized feedback and training produced only moderate, short-term improvements in decision making. In the 25 years since Fischhoff's study, much additional research has been conducted, but the basic conclusion remains the same: simple methods for addressing bias have limited applicability and produce limited success. On the other hand, as described below, more involved methods, such as replacing intuitive decision making with analysis, can be effective.
Common Methods for Reducing Bias
One continuing line of research investigates whether biases can be reduced by encouraging subjects to put more effort into forming judgments. Asking students to "show their work," for example, has been shown to slightly increase the chances of obtaining a correct answer (though it is more helpful for pinpointing where misunderstandings occur). In general, however, the limited success of such techniques suggests that most biases are not very sensitive to the amount of effort one applies.
Encouraging people to take an outsider's perspective has been shown to somewhat reduce the tendency for overconfidence to bias estimates ("What do you think someone not directly involved would say?"). The idea is to reduce personal biases by removing oneself mentally from the specific situation. Some studies show that the technique can improve estimates of the time it would take to complete a task and the odds of success [19, 20].
Note-taking may encourage more thoughtful responses.
Increasing accountability for decisions has been shown to lead to better choices. Likewise, enhancing accountability for opinions that people express can help in some circumstances. For example, it has been suggested that, when obtaining critical information from someone, it may be useful to take notes (or to appear to take notes). If people believe you may quote them to others, they may be more careful in what they say. Similarly, to support project-selection decisions, it is useful to have project proponents document precisely why they believe their proposed projects should be conducted. Going on record encourages managers to be more careful in their logic, and the fear of being proved wrong helps counter over-optimism.
Training in recognizing biases has been shown to help people address some of them. However, as Fischhoff observed, the effect is generally short-lived and does not produce an overwhelming improvement in performance. One problem is that it is often hard to get people to appreciate that bias is something that affects them personally, not just others. Thus, in situations where time permits, it helps to demonstrate biases. For example, if you are obtaining judgments from a group of individuals and are concerned about overconfidence bias, don't just tell them about the 2/50 rule (described above). Instead, run them through an exercise that demonstrates that the rule applies to them.
Not surprisingly, improving cause-effect understanding of the relevant situation and processes has been shown to improve the quality of estimates and decisions. For example, studies show that when people are encouraged to look for common principles underlying seemingly unrelated tasks, they are better able to devise solutions for different tasks that rely on the same underlying principles.
There is evidence that groups reach better decisions when alternatives are evaluated simultaneously as opposed to having each alternative evaluated sequentially and potentially rejected. The presumed explanation is that people initially react emotionally when considering an alternative; they think mostly about how it will affect them personally. If alternatives are evaluated simultaneously side-by-side, group members are less susceptible to this reaction.
Strategies for Reducing Specific Biases
The usual strategy for reducing a specific bias is to address the mental processing error that is believed to produce that bias. For example, in one study, researchers assumed that hindsight bias, the tendency to exaggerate the extent to which one could have anticipated a particular outcome, results from the difficulty people have in appreciating the limited information available at the time and the restricted inferences that could be made from that information. By providing evidence that argued against the actual outcome, they found that their subjects could be made more resistant to the bias. Similarly, it has been hypothesized that people's tendency to over-claim credit for a group accomplishment is due in part to the tendency to be more aware of one's own efforts. Researchers showed that when people are asked to estimate not only their own contributions but also those of others, they attribute less credit to themselves.
Figure 4 (derived from Wilson and Brekke) illustrates a view of how judgmental biases are created and suggests a general strategy for reducing them. According to the figure, the first step is to create awareness of the flawed mental processes involved. The subject must then be motivated to correct the bias and must understand the direction and magnitude of the errors produced. Finally, the bias must be removed or countered. The technique used to mitigate the bias of concern is often the application of a countering bias; for example, countering overconfidence by encouraging subjects to describe (and therefore anchor on) extreme possibilities. Many of the recommendations provided earlier in this section under "Advice" are based on this logic.
Figure 4: General strategy for debiasing.
Hundreds of decision aids have been developed and recommended to reduce the distortions in decisions caused by errors and biases. I developed the list of sample aids below in the context of a chapter for a book on aids for environmental decisions. Links to definitions are provided for terms that tend to be used in project portfolio management. As indicated, there are at least 5 categories of aids: (1) checklists for promoting a quality decision process, (2) thinking aids intended mainly to improve perspective or create insights, (3) models and optimization methods for recommending choices, (4) aids for promoting group consensus, and (5) voting methods. As an example of the first category, Figure 5 is a checklist aid for scoring the decision-making process relative to the components of a quality decision-making process.
Figure 5: Checklist diagram for evaluating deficiencies in the decision-making process.
Notice that a common characteristic among decision aids is that they add structure to the decision-making process, forcing decision makers to rely less on intuition and emotion and more on deliberate thinking. Models and analysis, in my opinion, represent the most effective way to address errors and biases in decision making. Essentially, the concept is to replace flawed intuitive reasoning with a formal, analytical process.
Much evidence has accumulated indicating the effectiveness of models and analysis. For example, in situations where past data are available on the inputs and results of a decision making process, models can be created using regression analysis. Such models are being used to help graduate schools to decide which students to admit, clinical psychologists to diagnose neuroses and psychoses, and credit card companies to decide whether to accept loan applications. For a very wide range of subject areas, researchers have found that such models produce better and more reliable decisions than those made by people, including the experts who made the original decisions from which the models were derived.
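As a minimal sketch of how such a model might be built, the following fits a one-variable linear model to historical admissions decisions and applies it to a new case. The data, cutoff, and variable names are invented for illustration; they are not drawn from the studies cited above.

```python
# Invented historical data: (composite applicant score, admitted? 1/0).
history = [(55, 0), (62, 0), (70, 1), (74, 0), (81, 1), (90, 1)]

xs = [x for x, _ in history]
ys = [y for _, y in history]
n = len(history)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Ordinary least-squares fit of admitted ~ score (closed-form slope
# and intercept for simple linear regression).
slope = (sum((x - mean_x) * (y - mean_y) for x, y in history)
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def model(score):
    """Predicted admit propensity for a new applicant."""
    return intercept + slope * score

# Decision rule (illustrative): admit when predicted propensity >= 0.5.
admit = model(78) >= 0.5
```

The model's advantage is not sophistication but consistency: it applies the same weights to every case, which is precisely what intuitive judges fail to do.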
Even very simple models have been shown to improve estimates and, therefore, encourage better decisions. Ask people to estimate how tall an eight-story building is, and you will likely get very poor estimates. But, if they envision each floor as being about 50% higher than a tall person, say 10 feet, and then multiply by the number of stories, the result of 80 feet is fairly accurate. A model doesn't need to be very precise or even require very accurate inputs. To illustrate this, in training classes I've asked managers to estimate quantities about which they know very little, for example, "How many chicken eggs are produced in the U.S. each year?" The estimates are not very accurate. Then, attendees break into teams to create a simple model the output of which is the number in question. For example, a team might calculate annual egg production as the number of people in the country times the average number of eggs consumed per week times the number of weeks in the year. Invariably, the teams produce much more accurate estimates using their models.
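The egg example above can be expressed as a few lines of arithmetic. The inputs below are rough, illustrative guesses rather than official statistics; the point is that the decomposition, not the precision of the inputs, drives the accuracy of the result.

```python
# Fermi-style decomposition of the annual egg-production question.
# Every input is a rough guess, not an official figure.
us_population = 330_000_000        # people (rough guess)
eggs_per_person_per_week = 4       # rough guess, including eggs in baked goods
weeks_per_year = 52

annual_eggs = us_population * eggs_per_person_per_week * weeks_per_year
print(f"Roughly {annual_eggs:,} eggs per year")
```

Even if each input is off by a modest amount, the product of three reasonable guesses typically lands within a small factor of the true value, far better than an unaided single-number guess.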
Another argument for using models comes from research that shows that experts appear to be less subject to biases when addressing issues that are entirely within their areas of expertise. Models break the problem of evaluating alternatives down into individual pieces such that different experts with specialized knowledge can be selected to focus on each piece. Thus, models dissect a complex problem in a way that makes the required judgments less likely to be biased.
A Strategy for Decision Making
Approaches to making good decisions differ greatly in the amount of time and effort required. Intuitive decisions can be fast, automatic, and effortless, while analysis is slower and requires considerably more effort. Figure 6 illustrates that the appropriate approach to decision making depends on the significance of the decision, the challenges involved, and the time available for analysis. It is useful to develop a quiver of decision-making strategies and to select the approach that makes the most sense for the given circumstance.
Figure 6: The appropriate decision aid and decision-making approach depends on the nature of the decision.
The best protection from bias comes from training, using formal techniques for obtaining important judgments, utilizing well-founded decision aids, and instituting rigorous decision processes that document the judgments and assumptions upon which choices are based. The subsequent parts of this paper describe such a process specific to decision making for project portfolio management. As stated by Ken Keyes, "To be successful in the business world we need to check our bright ideas against the territory. Our enthusiasm must be restrained long enough for us to analyze our ideas critically."
References for Part 1