Choosing the Wrong Portfolio of Projects

Learn when and how to use each of the five main strategies for debiasing decisions.

Debiasing


Suppose you are a project portfolio manager interviewing a job candidate for your team. On paper, this is the most qualified person you've seen. She knows your industry and has received excellent reviews from the managers with whom she would work. Her responses to your interview questions are excellent. She is analytical, has great project management experience, is a fantastic communicator, and her social skills are flawless. But something doesn't feel right. You can't put your finger on it—you just have a nagging sense something is wrong. Do you hire her?

This scenario is similar to one posed in managerial decision-making training classes, as reported in a Harvard Business Review article by Professors Jack Soll, Katherine Milkman, and John Payne [1]. They find that most executives say they would trust their intuition and send the applicant on her way. The problem is, "How do you know your quick, intuitive response isn't just a knee-jerk reaction to some irrational cognitive bias?" You may be passing up the best candidate you'll ever interview.

What is Debiasing and Why Should We Care?


It would be nice if there were some way to turn off our biases when we need to make important decisions, such as deciding whether to hire a job candidate or choosing which projects to include in the organization's project portfolio. Debiasing is the application of methods to reduce or remove biases that introduce errors into important judgments and decisions [2].

The previous pages described the many ways biases can steer us away from good decisions. Errors caused by cognitive biases, such as supporting evidence bias, anchoring, overconfidence, sunk cost, and availability, can be extremely costly at the personal, business, and societal levels.

Biases should be of particular concern to businesses given the extent to which business success depends on avoiding decision errors. Specific ways that biases harm business decision making include (1) bounded awareness leading to strategic plans that ignore critical possibilities, such as predictable reactions by competitors, (2) groupthink that distorts risk-taking and inhibits innovation, (3) poor organizational culture that tolerates habitual misrepresentation, hampers engagement, and impedes information sharing, (4) sunk-cost reasoning that makes it impossible to kill failing projects that are over budget and behind schedule, and (5) autocratic, cult-status executives who promote large-scale pet projects that end up consuming enormous amounts of capital with little or no real business benefit. The global management consulting firm McKinsey & Company surveyed 770 corporate board members to determine the characteristics of high-performing boards. The "biggest aspiration" was "reducing decision biases" [3].

Debiasing as a way to improve project decision making is gaining increasing executive attention, and organizations that are learning how to do it are obtaining better performing project portfolios.

Debiasing Theory

Figure 6 (derived from Wilson and Brekke [4]) illustrates a view of how biases are created and suggests why they might be difficult to remove.



Figure 6:   General strategy for debiasing.

According to the figure, people must first be aware of the flawed mental processes that produce bias. Even with awareness, however, people won't try to correct a bias unless they are motivated to do so. They also need some understanding of the bias affecting them, including the direction and magnitude of the errors it produces. Finally, there must exist some effective strategy for removing the bias.


Research on ways to mitigate biases has for decades been overshadowed by the hundreds of studies demonstrating biases (perhaps it is easier, and viewed as more noteworthy, to show that something is broken than to show how to fix it). However, recent research has yielded an array of debiasing strategies that can improve judgments and decisions. As if in recognition of the growing appreciation of the importance of debiasing, the 2017 Nobel Prize in Economics was awarded to Richard Thaler, one of the leading researchers on the topic.

Debiasing Approaches

For debiasing important, complex decisions, such as selecting the projects to include in the portfolio, the most successful strategies involve promoting and facilitating what psychologist Daniel Kahneman terms System 2 thinking: applying cognitive effort, reasoning deliberately, slowing down, using tools and aids, and bringing more information and facts to the decision-making process [5]. There are five general approaches for accomplishing this: incentives, nudging, training, process fixes, and tools.

Incentives

An obvious approach to debiasing is to increase motivation to perform well. The underlying assumption is that individuals will expend more effort on reflection and calculation—that is, that logical System 2 thinking will override emotional System 1 thinking—if the stakes for making good decisions are high enough [6]. Incentives have shown some success. For example, offering payments and discounts aimed at rewarding healthy behavior has been shown to improve diet [7], exercise [8], weight loss [9], medication adherence [10], and smoking cessation [11]. In one study, during a period in which the price of fresh fruit was cut in half in school cafeterias, sales of fresh fruit quadrupled [12].

A related approach is to make people more accountable for their decisions. The logic of accountability is similar to the logic of incentives except that it depends on the motivational influence of social costs and benefits (such as making a favorable impression and avoiding embarrassment) [13]. Asking students to "show their work," for example, has been shown to slightly reduce the chance of errors on homework assignments (it is more helpful, of course, for pinpointing where misunderstanding occurs) [14].


Likewise, enhancing accountability can help limit the expression of overly extreme positions. I was trained to take notes (or appear to take notes) whenever obtaining information from someone in support of decision making. If people believe you may quote them to others, they may be more attentive to what they say. Similarly, to encourage more accurate project assessments, it is wise to have the project proponents document precisely why they believe their proposed project should be conducted. Going on record encourages managers to be more careful in their logic, and the fear of being proved wrong helps counter over-optimism. It is said that the principal mechanism by which accountability improves decision making is pre-emptive self-criticism [6]. Knowing that you may need to justify your decisions to others, you will be more mindful of flaws in your arguments.

A critical assumption of the incentives approach is that people know how decisions ought to be reached and will use these superior methods when they feel it is important to do so. Thus, for incentives to improve decision making, decision makers must possess effective strategies that they either fail to apply or apply with insufficient effort when incentives are absent [15]. If people don't have the ability to apply better decision-making approaches, increasing incentives can simply make them work harder at applying flawed heuristics.

Studies show that increasing incentives is not always effective for improving decision quality. Some researchers [16] argue that the automatic nature of heuristics makes them largely unresponsive to incentives, and experiments have tended to confirm this for most such biases, including overconfidence, supporting evidence bias, and framing bias. The exception is anchoring; incentives have been shown to reduce the influence of anchors in some situations [17].

Incentives and accountability have long been used by businesses to encourage productive employee behavior. The possibility of promotion or expanded responsibility can be a powerful motivator. Incentive compensation, such as bonuses and stock options, likewise motivates employees. To effectively drive unbiased decisions in the interest of the organization, incentives need to be easy to understand, viewed as fair, and aligned with company objectives. Incentives and accountability can help create better project selection decisions, provided that managers have the tools and ability to assess project value. However, because of project performance uncertainty, project outcomes aren't always highly correlated with the quality of project decisions. Accordingly, managers need to be rewarded based on the quality of their decisions and not so much on outcomes alone.

Nudging


Nudging, referred to in academic literature as optimizing choice architecture, involves framing decisions and providing helpful information in such a way as to diminish or eliminate people's tendency to allow biases to generate poor decisions. The idea is to make good choices easy choices. The concept of nudging was made popular by the book Nudge coauthored by Thaler [18]. A key characteristic of nudge strategies is that they are meant to change people's behavior without restricting their options or significantly changing their incentives.

Nudges can take many forms, including manipulating the structure of decisions, presenting choice options in particular ways, providing information, and establishing default options. Nudging is used increasingly by government agencies to promote certain social outcomes [e.g., 19, 20, 21], with applications ranging from increasing employee retirement savings to reducing the amount of soda and junk food that people consume [22]. Some governments around the world have formed "nudge units," teams of behavioral scientists tasked with designing behavioral interventions that encourage desired actions from citizens [23].

Presenting decision options and ensuring that relevant information is available and understood is the most obvious way to nudge. However, simply giving people more information and choices is not always helpful, particularly when it makes decisions seem more complicated [24]. For example, providing calorie information does not necessarily lead people to make healthier food choices. Providing more information can even, in some circumstances, promote less desired behavior. For example, there is evidence that smokers actually believe the health risks of smoking are higher than they really are, so correcting smokers' risk perceptions could actually encourage smoking [25].


Changing how information is presented can make choices easier to understand and good options easier to identify. For example, reporting vehicle fuel consumption in miles per gallon (MPG) can lead to the misconception that a large MPG improvement for an already fuel-efficient car (e.g., going from 40 to 50 MPG in a small car) saves more fuel than a small MPG improvement for a fuel-inefficient one (e.g., going from 15 to 20 MPG in a large SUV). Reporting fuel efficiency in terms of gallons per 100 miles eliminates this misconception [26]. As another example of how a simple change in measurement metric can promote more desired choices, labeling meat 25% fat makes people more sensitive to fat content than labeling it 75% lean [27]. Also, formatting information in simple ways that facilitate understanding, such as displaying nutritional value using a green-yellow-red "traffic light" association, can persuade shoppers to purchase healthier foods [28]. Traffic light color coding quickly communicates whether important metrics are positive, neutral, or negative, and many project portfolio management tools use this method for displaying project characteristics.
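
To see the arithmetic behind the MPG illusion, consider fuel used per 100 miles; here is a minimal sketch in Python (the vehicle figures are the illustrative ones from the example above):

```python
def gallons_per_100_miles(mpg):
    """Convert miles per gallon to gallons consumed per 100 miles."""
    return 100.0 / mpg

# Upgrading a large SUV from 15 to 20 MPG
suv_savings = gallons_per_100_miles(15) - gallons_per_100_miles(20)

# Upgrading a small car from 40 to 50 MPG
car_savings = gallons_per_100_miles(40) - gallons_per_100_miles(50)

print(f"SUV upgrade saves {suv_savings:.2f} gallons per 100 miles")  # about 1.67
print(f"Car upgrade saves {car_savings:.2f} gallons per 100 miles")  # about 0.50
```

Framed this way, the seemingly small 5-MPG improvement to the SUV saves more than three times as much fuel as the 10-MPG improvement to the already efficient car.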

A cheap and effective nudge is to strategically select the default option to which people will be assigned if they do not actively choose an alternative. As described earlier with an insurance example, people are more likely to choose an option if it is a default from which they must opt out than if it is an option they must actively select. For example, the British government required employers to automatically enroll employees in a pension scheme, with a portion of each employee's compensation deducted monthly and added to their pension fund, unless the employee asked to be exempted. Making contributions the default dramatically increased savings: membership in private-sector pension schemes grew from 2.7 million in 2012 to 7.7 million in 2016 [29].

Despite these successes in communicating information to influence consumer behavior, there do not appear to be many effective applications of nudging to business decisions. Critics point out that although a particular nudging strategy may reduce decision bias in one context, it is typically hard to generalize that strategy to find effective nudges for other contexts. Nudging does not address the underlying structural causes of biased decisions. Also, there is a concern that manipulating people's choices using behavioral science is unethical, or at least should not be the focus of government activity. Critics argue that nudging is a Big Brother violation of citizens' autonomy, encroaching upon freedom of choice [24].

Training


An obvious approach to reducing biases is to educate people about biases. If people understand their biases, perhaps they can avoid them. Early research on training focused on this route. In 1982, decision scientist Baruch Fischhoff reviewed four strategies for reducing bias: (1) warning subjects about the potential for bias, (2) describing the likely direction of bias, (3) illustrating biases to the subject, and (4) providing training, feedback, and coaching about biases. Fischhoff concluded that the first three strategies yielded only modest success, and even intensive, personalized feedback and training produced only moderate, short-term improvements in decision making [30].

One reason that educating people about biases may have little effect is that it is generally hard to get people to appreciate that bias is something that affects them personally, not just others [4]. Thus, in situations where time permits, it helps to demonstrate biases. For example, if you are obtaining estimates from a group of individuals and are concerned about overconfidence bias, don't just tell them about the 2/50 rule (described previously). Instead, run them through an exercise that demonstrates that the rule applies to them.

Improving cause-effect understanding of the relevant situation and processes has been shown to improve the quality of estimates and decisions. For example, studies show that when people are encouraged to look for common principles underlying seemingly unrelated tasks, they are able to obtain better solutions to other tasks that make use of the same underlying principles [31]. Experts can be trained to make accurate decisions when those decisions require recognizing patterns and applying appropriate responses within their domains of expertise. This type of training has been shown effective, for example, for improving experts' judgments and decisions in areas of firefighting, chess, and weather forecasting. Weather forecasters, in particular, are able to predict rain with high accuracy. However, when asked to provide confidence ranges for answers to trivia questions drawn from an almanac, they are just as overconfident as other people [32].

Though studies have consistently shown the power and pervasiveness of biases, researchers have often observed that some individuals are less susceptible to a broad range of biases than the general population. Apparently, these individuals are more likely and more able to apply System 2 thinking. Inclination to apply System 2 thinking appears to be correlated with general aptitude [33]. For example, a study investigating 28 cognitive biases found that, for roughly half the biases, performance correlated with subjects' general intelligence and education level. Furthermore, studies have found that people with educational backgrounds in areas of applied mathematics, such as economics, probability, and statistics, tend to apply more effective cognitive strategies and exhibit less bias [34]. Teaching people statistical reasoning and decision-making rules of which they are unaware has been shown to significantly reduce some biases [32].

The Power of Compounding

Although knowledge of mathematical logic generally makes one less vulnerable to biases, in most situations, of course, providing in-depth training in mathematical decision making is not feasible. A promising alternative is to teach rules of thumb applicable to specific decisions prone to bias. Teaching people simple rules for countering specific biases can be done quickly and has been shown to be effective. For example, people's reluctance to save for retirement has been attributed to present bias. Appreciation for the power of compounding can help to counter this bias. The "rule of 72" states that an investment that grows at X percent per year will double roughly every 72/X years. Teaching people the rule of 72 has been shown to motivate increased retirement saving [6]. Even more effective, as demonstrated by another study, was showing people graphical displays, similar to the one on the left, illustrating the long-term benefits of saving now and the penalties associated with delay [35]. A creative study demonstrated that showing subjects "morphed" images intended to resemble their appearance upon retirement motivated them to save money for the future rather than spending it in the present [36].
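
As a quick check on the rule of 72, the sketch below compares the approximation with the exact doubling time under annual compounding (the interest rates are illustrative):

```python
import math

def doubling_time_rule_of_72(rate_percent):
    """Approximate years for an investment to double, per the rule of 72."""
    return 72.0 / rate_percent

def doubling_time_exact(rate_percent):
    """Exact years to double under annual compounding: solve (1 + r)^t = 2."""
    r = rate_percent / 100.0
    return math.log(2) / math.log(1 + r)

for rate in (4, 6, 8, 12):
    print(f"{rate}% per year: rule of 72 gives {doubling_time_rule_of_72(rate):.1f} years, "
          f"exact is {doubling_time_exact(rate):.1f} years")
```

For the single-digit growth rates typical of retirement savings, the approximation is accurate to within a few months, which is close enough to convey the long-term benefit of starting early.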

Some debiasing methods, as I've described previously, involve rebiasing, that is, countering one bias with another. For example, overconfidence can be countered by encouraging subjects to describe (and therefore anchor on) extreme possibilities. The simple advice to "consider the opposite" has been shown effective at reducing overconfidence, hindsight bias, and anchoring effects [37]. Duke University business professor Richard Larrick reports that Wall Street brokers remind each other not to "confuse brains with a bull market," that Toyota encourages employees to analyze problems by asking the question "Why?" five times, and that examiners at the Federal Reserve Bank of New York use the acronym CAMEL (capital adequacy, asset quality, management, earnings, and liquidity) to summarize the criteria they should apply when evaluating banks [6].

Boston University professor Carey Morewedge and colleagues have been able to show that advanced, one-shot training methods can be effective at achieving long-term debiasing. In a series of experiments aimed at exploring the susceptibility of U.S. intelligence analysts to a number of important and common biases, Morewedge tested the effectiveness of training videos combined with "serious," multi-level, interactive computer games. At the end of each level of the game, participants received personalized feedback about how biased they were during gameplay. They were then given a chance to practice and were taught strategies to reduce their susceptibility to each bias. The results showed that biases were reduced by more than 30% immediately and by more than 20% as long as three months later. A commercial version of the games is reportedly in production [38].

Process Fixes

Process fixes are changes to organizational policy intended to reduce the likelihood of decision errors and biases that matter to organizational success. As an example, rankism, a decision heuristic wherein persons of higher "rank" assert authority over persons of lower rank, has been identified as a contributor to airline accidents [39]: accidents have occurred when pilots became preoccupied with a task and failed to respond to legitimate concerns expressed by members of the crew. Major airline companies have, therefore, instituted a policy change called crew resource management that includes procedures intended to enable crew members to communicate more effectively and forcefully with the pilot when they have a concern [40].

A common process fix intended to combat decision errors and bias is the independent review board. The idea of independent review is that advice on decisions be provided by a qualified outside body that is organizationally and financially independent of the primary decision-making body. Communities of practice, advocated by systems engineering consultant Scott Jackson, are a generalization of the independent review board; their basic feature is that decision makers authorize groups to advise them [41]. The ultimate goal in this case is learning, so that the decision maker becomes wiser about the issues in question.

Conducting a premortem, as advocated by Kahneman [5], is a process fix. It calls for decision makers to gather trusted experts in advance of major decisions, such as the decision to launch a space vehicle. According to Kahneman, the primary job of the experts is to present arguments against the preferred choice, for example, reasons why the decision maker should not authorize the launch at this point in time.

According to Jackson, there are two main limitations of process fixes. First, before a process fix can be implemented, the organization has to accept that the required change in policy is a good thing. The second problem is enforcement: how willing is the organization to enforce the new policies it will need to adopt? For a process fix to be effective, the organization must be willing and able to enforce adherence to its new policy [42].

Debiasing Tools

The simple "tool" most commonly used by organizations for improving decisions is to rely on groups rather than individuals to make important choices. Despite the dangers of groupthink, there are reasons that group decision making might be beneficial. First, group members can provide a check on each other's errors and misunderstandings. Second, synergies can emerge related to the sharing of information when people with complementary expertise interact. Third, groups provide more diverse perspectives, which, for example, may lead to new and creative solutions that can't be provided by any single individual [6]. On tasks that require quantitative assessments, such as forecasting or estimating, simply averaging individual estimates typically produces improved accuracy [43].
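
A toy simulation illustrates why simple averaging helps, under the assumption that individual errors are independent and unbiased (all numbers below are invented for the illustration):

```python
import random

random.seed(1)

TRUE_VALUE = 1000.0   # the quantity being forecast
NUM_EXPERTS = 10
NUM_TRIALS = 2000

single_error_total = 0.0
group_error_total = 0.0

for _ in range(NUM_TRIALS):
    # Each expert's estimate is the truth plus independent random noise.
    estimates = [TRUE_VALUE + random.gauss(0, 200) for _ in range(NUM_EXPERTS)]
    group_average = sum(estimates) / NUM_EXPERTS

    single_error_total += abs(estimates[0] - TRUE_VALUE)   # a lone expert's error
    group_error_total += abs(group_average - TRUE_VALUE)   # the averaged estimate's error

print(f"Mean error of a single expert:   {single_error_total / NUM_TRIALS:.0f}")
print(f"Mean error of the group average: {group_error_total / NUM_TRIALS:.0f}")
```

Averaging cancels independent noise (the group-average error above comes out at roughly a third of the individual error); it does not, however, remove a bias that all of the estimators share.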

Hundreds of tools intended to aid decisions have been developed, and most, when properly applied, can be argued to reduce distortions caused by errors and biases. I developed the list of sample aids below in the context of a chapter for a book on aids for environmental decision making [45]. Links to my glossary's definitions are provided for terms that tend to be used in the context of project portfolio management. As indicated, there are at least five categories of aids: (1) checklists for promoting a quality decision process, (2) thinking aids intended mainly to improve perspective or create insights, (3) models and optimization methods for recommending choices, (4) aids for promoting group consensus, and (5) voting methods. As an example of the first category, Figure 7 is a checklist aid for scoring the decision-making process relative to the components of a quality decision-making process.


Sample Decision Aids



Figure 7:   Checklist diagram for evaluating deficiencies in the decision-making process [46].



Notice that a common characteristic of decision aids is that they add structure to the decision-making process, forcing decision makers to rely less on System 1 intuition and emotion and more on deliberate System 2 thinking.

An example of the ability of a decision-aiding model to improve decision making is provided by the book Moneyball, where author Michael Lewis describes how models revolutionized the market for professional baseball players. For many years, team managers picked players based on the unaided judgments of baseball experts without much use of relevant predictors of player performance, such as on-base percentage and ability to avoid strikeouts. The teams that were first to use simple models to aid player selection acquired a major performance advantage [47].

Using models and analytic processes can be very effective at reducing the impact of biases. For example, in situations where past data are available on the inputs and results of a decision-making process, models can be created using simple statistical methods such as regression analysis. Such models are being used to help graduate schools to decide which students to admit, clinical psychologists to diagnose neuroses and psychoses, and credit card companies to decide whether to accept loan applications. In a very wide range of subject areas, researchers have found that models produce better and more reliable decisions than those made by people, including the experts who made the original decisions from which the models were derived [48].
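
As a hedged illustration of the idea (not any particular school's or lender's model), the sketch below fits a simple linear scoring rule to a handful of invented historical records and applies it to a new case:

```python
import numpy as np

# Invented historical data: columns are (GPA, test percentile); the outcome is
# 1 if the admitted student later succeeded, 0 otherwise.
X = np.array([[3.9, 95], [3.2, 70], [3.6, 88], [2.8, 60], [3.8, 91], [3.0, 65]])
y = np.array([1, 0, 1, 0, 1, 0])

# Fit outcome ~ w1*GPA + w2*test + b by ordinary least squares.
A = np.column_stack([X, np.ones(len(X))])
w, *_ = np.linalg.lstsq(A, y, rcond=None)

def predicted_score(gpa, test_percentile):
    """Linear score for a new applicant; higher suggests a better expected outcome."""
    return w[0] * gpa + w[1] * test_percentile + w[2]

print(round(predicted_score(3.5, 85), 2))
```

The model's advantage is consistency: it weighs the same predictors the same way every time, which is precisely where unaided intuition tends to drift.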

Even very simple models have been shown to improve estimates and, therefore, encourage better decisions. Ask people to estimate how tall an eight-story building is, and you will likely get poor estimates. However, if they envision each floor as being about 50% higher than a tall person, say 10 feet, and then multiply by the number of stories, the result of 80 feet is fairly accurate. A model doesn't need to be very precise or even require very accurate inputs. To illustrate this, in training classes I've asked managers to estimate quantities about which they know very little, for example, "How many chicken eggs are produced in the U.S. each year?" The estimates are not very accurate. Then, attendees break into teams to create a simple model the output of which is the number in question. For example, a team might calculate annual egg production as the number of people in the country times the average number of eggs consumed per week times the number of weeks in the year. Invariably, the teams produce much more accurate estimates using their models.
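
Here is a minimal sketch of the kind of decomposition model described above; every input is a rough assumption, and the point is the structure rather than the particular numbers:

```python
# Rough, assumed inputs for a back-of-the-envelope estimate
us_population = 330_000_000       # approximate U.S. population
eggs_per_person_per_week = 4      # guess at average weekly consumption
weeks_per_year = 52

annual_eggs = us_population * eggs_per_person_per_week * weeks_per_year
print(f"Estimated annual U.S. egg consumption: {annual_eggs:,}")   # about 69 billion
```

Even if each input is off by a moderate amount, the decomposition keeps the final estimate within the right order of magnitude, which is usually far better than an unaided guess.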

Decision analysis (DA), which is described in subsequent sections and involves constructing decision models, can be an effective tool for countering biases and improving decision quality [6]. To construct and obtain the inputs for a decision model, the decision analyst conducts a series of steps that require interacting with a decision maker (to obtain preference judgments) and "experts" designated by the decision maker (to provide technical judgments). Though the resulting model is quantitative, represented in computer code, and may be complex (though kept as simple as possible), critical early steps are qualitative and involve identifying and structuring objectives, generating alternatives, and creating influence diagrams [49, 50, 51].

Decision models can improve decision making in several ways [6]:

  1. A quality decision model ensures the use of normative algorithms (e.g., multi-attribute utility analysis, Bayes' rule, probability theory) that are otherwise hard for individuals to apply properly at an intuitive level (a minimal sketch follows this list).
  2. The decision model can hide complex mathematical algorithms that might otherwise be intimidating to decision makers, thereby making decision analytic tools more palatable.
  3. The decision model can be used to explore and show the results of sensitivity analyses (e.g., on probabilities, estimates, and weights), thereby making the model results more convincing.
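
As a hedged illustration of the first point, the sketch below scores two hypothetical projects with a simple additive multi-attribute model; the attributes, weights, and scores are invented for the example:

```python
# Weights on objectives (summing to 1) and 0-100 scores for each project on
# each attribute. All values are illustrative assumptions.
weights = {"financial_value": 0.5, "strategic_fit": 0.3, "risk_reduction": 0.2}

projects = {
    "Project A": {"financial_value": 80, "strategic_fit": 40, "risk_reduction": 70},
    "Project B": {"financial_value": 60, "strategic_fit": 90, "risk_reduction": 50},
}

def additive_utility(scores, weights):
    """Weighted-sum (additive) multi-attribute utility of one project."""
    return sum(weights[attr] * scores[attr] for attr in weights)

for name, scores in projects.items():
    print(f"{name}: {additive_utility(scores, weights):.1f}")
# Project A: 0.5*80 + 0.3*40 + 0.2*70 = 66.0
# Project B: 0.5*60 + 0.3*90 + 0.2*50 = 67.0
```

The additive form is appropriate only when the attributes are preferentially independent; the value of the model is that such assumptions are made explicit and can be checked, rather than left buried in intuition.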

Another argument for decision models comes from research that shows that experts appear to be less subject to biases when addressing issues that are entirely within their areas of expertise. Models break the problem of evaluating alternatives down into individual pieces such that different experts with specialized knowledge can be selected to focus on each piece. Thus, models dissect a complex problem in a way that makes the required judgments less likely to be biased.

Models and analysis, in my opinion, represent the most effective way to address errors and biases for project selection decisions. Essentially, the concept is to replace flawed intuitive reasoning with a formal, analytical process. The opportunity to build a decision model provides the best of System 1 and System 2. Fast, creative System 1 thinking can be used to construct the model. Slow, deliberate System 2 thinking can be used to test and validate it. Once validated, the model provides the quick information and insights demanded by a System 1 decision-making environment.

A Strategy for Avoiding Decision Errors and Bias

Approaches to making good decisions differ greatly in terms of the amount of time and effort required. Intuitive decisions can be fast, automatic and effortless, while analysis is slower and requires considerably more effort. Figure 8 illustrates that the appropriate approach to decision making depends on the significance of the decision, the challenges involved, and the time and resources available for analysis. It is useful to develop a quiver of decision-making strategies and to select the approach most useful for the situation at hand.



Figure 8:   The best decision-making approach depends on time available and the characteristics of the decision.


The best protection from bias comes from training, instituting protective process fixes, using formal techniques for obtaining professional judgments, utilizing well-founded decision aids, and documenting the judgments and assumptions upon which choices are based. Parts 4 and 5 describe such a process specific to decision making for project portfolio management. As stated by American author and lecturer Ken Keyes, "To be successful in the business world we need to check our bright ideas against the territory. Our enthusiasm must be restrained long enough for us to analyze our ideas critically" [52].

Advice

See the advice for countering specific types of biases provided on the previous pages. Recommendations for a general organizational strategy for countering biases include:

  1. Provide educational sessions on biases for yourself and your staff. Raise awareness. People can self-monitor to a degree, as long as they know what to monitor.
  2. Identify the specific biases likely to be most important. For project selection, problematic biases for most organizations include anchoring, supporting evidence bias, groupthink, motivational biases due to misaligned incentives, and overconfidence.
  3. Look for evidence of biased behavior. Some biased behaviors are more evident than others.
  4. Determine the root causes of identified biased behavior. Eliminating the causes of such behavior (e.g., motivational biases) can be the most effective way to achieve change.
  5. Develop a corrective strategy to moderate the causes of biased behavior.
  6. Embed counters to biases in formal processes, such as project proposal templates, project evaluation methods, and capital-investment approval processes, to ensure that such techniques are used with consistency and regularity.
  7. Establish clear accountability. Incentives, accountability, and group decision making can encourage decision makers to think more deeply and carefully than they likely would if left to their own devices.
  8. Facilitate access to concise, clear, well-organized information.
  9. Build capability to support System 2 decision making. For example, develop cognitive aids: mnemonics, guidelines, algorithms, and so forth.
  10. Consistent with effective principles of visual nudging, present key decision metrics graphically and visually in ways that clarify insights and promote understanding.
  11. Provide training for improved System 2 decision making, for example, value-focused thinking, fundamental rules of probability, distinguishing correlation from causation, and basic Bayes theory updating.
  12. The ultimate approach for debiasing is using quality decision-aiding tools.

References

  1. J. B. Soll, K. L. Milkman, and J. W. Payne, "Outsmart Your Own Biases," Harvard Business Review, May 2015.
  2. B. Fischhoff, "Debiasing," in D. Kahneman, P. Slovic, and A. Tversky (eds.), Judgment Under Uncertainty: Heuristics and Biases, 422-444, 1982.
  3. C. Bhagat and C. Kehoe, "High-Performing Boards: What's on Their Agenda?" McKinsey Quarterly, April 2014.
  4. T. D. Wilson and N. Brekke, "Mental Contamination and Mental Correction: Unwanted Influences on Judgments and Evaluations," Psychological Bulletin, 116, 117-142, July 1994.
  5. D. Kahneman, Thinking Fast and Slow. New York: Farrar, Straus, and Giroux, 2011.
  6. R. P. Larrick, "Debiasing," in D. J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making, Malden: Blackwell Publishing, 316-337, 2004.
  7. J. Schwartz, D. Mochon, L. Wyper, J. Maroba, D. Patel, and D. Ariely, "Healthier by Precommitment," Psychological Science, 25, 538-546, 2014.
  8. G. Charness and U. Gneezy, "Incentives to Exercise," Econometrica, 77(3), 909-931, 2009.
  9. L. K. John, G. Loewenstein, A. B. Troxel, L. Norton, J. E. Fassbender, and K. G. Volpp, "Financial Incentives for Extended Weight Loss: A Randomized, Controlled Trial," Journal of General Internal Medicine, 26, 621-626, 2011.
  10. K. G. Volpp, G. Loewenstein, A. B. Troxel, J. Doshi, M. Price, M. Laskin, and S. E. Kimmel, "A Test of Financial Incentives to Improve Warfarin Adherence," BMC Health Services Research, 8, Article 272, 2008.
  11. K. G. Volpp, A. B. Troxel, M. V. Pauly, H. A Glick, A. Puig, D. A. Asch, and J. Audrain-McGovern, "A Randomized, Controlled Trial of Financial Incentives for Smoking Cessation," New England Journal of Medicine, 360, 699-709, 2009.
  12. S. W. French, "Pricing Effects on Food Choices," The Journal of Nutrition, 133, 841S-843S, 2003.
  13. B. Aczel, B. Bago, A. Szollosi, A. Foldes and B. Lukacs, "Is It Time for Studying Real-Life Debiasing? Evaluation of the Effectiveness of an Analogical Intervention Technique," Frontiers in Psychology. 6:1120, 2015.
  14. E. Kazemi and M. L. Franke, "Teacher Learning in Mathematics: Using Student Work to Promote Collective Inquiry," Journal of Mathematics Teacher Education, Springer, 7:203, 2004.
  15. J. S. Lerner and P. E. Tetlock, "Accounting for the Effects of Accountability," Psychological Bulletin, 125(2), 255-275, March 1999.
  16. H. R. Arkes, "Costs and Benefits of Judgment Errors: Implications for Debiasing," Psychological Bulletin, 110(3), 486-498, November 1991.
  17. N. Epley, "A Tale of Tuned Decks? Anchoring as Accessibility and Anchoring as Adjustment," in D. J. Koehler and N. Harvey (eds.), Blackwell Handbook of Judgment and Decision Making, Malden: Blackwell Publishing, 240-257, 2004.
  18. R. H. Thaler and C. R. Sunstein, Nudge: Improving Decisions About Health, Wealth, and Happiness, New Haven: Yale University Press, 2008.
  19. M. Halpern, Politics of Social Change: In the Middle East and North Africa, Princeton University Press, December 8, 2015.
  20. E. J. Johnson and D. Goldstein, "Do Defaults Save Lives?" Science 302(5649), 1338-1339, November 2003.
  21. E. J. Johnson et. al, "Beyond Nudges: Tools of a Choice Architecture," Marketing Letters, 23(2), 487-504, 2012.
  22. A. Acquisti, L. Brandimarte, and G. Loewenstein, "Privacy and Human Behavior in the Age of Information," Science, 347(6221), 509-514, January 2015.
  23. S. Benartzi, J. Beshears, K. L. Milkman, C. R. Sunstein, R. H. Thaler, M. Shankar, and S. Galing, "Should Governments Invest More in Nudging?" Psychological Science, 28(8), 1041–1055, 2017.
  24. S. Bhargava and G. Loewenstein, "Behavioral Economics and Public Policy 102: Beyond Nudging," American Economic Review, 105(5), 396-401, May 2015.
  25. J. S. Downs, G. Loewenstein, and J. Wisdom, "Eating by the Numbers," The New York Times, November 13, 2009.
  26. R. P. Larrick and J. B. Soll, "The MPG Illusion," Science, 320, 1593-1594, 2008.
  27. I. P. Levin and J. Gaeth, "How Consumers Are Affected by the Framing of Attribute Information Before and After Consuming the Product," Journal of Consumer Research, 15(3), 374–378, December 1, 1988.
  28. R. Trudel, K. B. Murray, S. Kim, and S. Chen, "The Impact of Traffic Light Color-Coding on Food Health Perceptions and Choice," Journal of Experimental Psychology, 21(3), 255–275, 2015.
  29. B. Chu, "What Is ‘Nudge Theory’ and Why Should We Care? Explaining Richard Thaler's Nobel Economics Prize-Winning Concept," Independent, October 9, 2017.
  30. B. Fischhoff, "For Those Condemned to Study the Past: Heuristics and Biases in Hindsight." in D. Kahneman, P. Slovic, & A. Tversky (Eds.), Judgment Under Uncertainty: Heuristics and Biases, Cambridge University Press, 1982.
  31. S. Moran, Y. Bereby-Meyer, and M. Bazerman, "Stretching the Effectiveness of Analogical Training in Negotiations: Learning Core Principles for Creating Value," Negotiation & Conflict Management Research, 1(2), 99-134, 2008.
  32. R. E. Nisbett, G. T. Fong, D. R. Lehman, and P. W. Cheng, "Teaching Reasoning," Science, 238(4827), 625–631, October 10, 1987.
  33. B. Aczel, B. Bago, A. Szollosi, A. Foldes, and B. Lukacs, "Measuring Individual Differences in Decision Biases: Methodological Considerations," Frontiers in Psychology, 6(1770), November 19, 2015.
  34. B. Aczel, B. Bago, A. Szollosi, A. Foldes, and B. Lukacs, "Is It Time for Studying Real-Life Debiasing? Evaluation of the Effectiveness of an Analogical Intervention Technique," Frontiers in Psychology, August 4, 2015.
  35. C. R. M. McKenzie and M. J. Liersch, "Misunderstanding Savings Growth: Implications for Retirement Savings Behavior," Journal of Marketing Research, 48, November 2011.
  36. H. E. Hershfield, D. G. Goldstein, W. F. Sharpe, J. Fox, Y. L. Yeykelis, L. L. Carstensen, and J. N. Bailenson, "Increasing Saving Behavior Through Age-Progressed Renderings of the Future Self," Journal of Marketing Research, 48(SPL), S23–S37, November 1, 2011.
  37. T. Mussweiler, F. Strack, and T. Pfeiffer, "Overcoming the Inevitable Anchoring Effect: Considering the Opposite Compensates for Selective Accessibility," Personality and Social Psychology Bulletin, 26(9), 1142-1150, 2000.
  38. C. K. Morewedge, "How a Video Game Helped People Make Better Decisions," Harvard Business Review, October 13, 2015.
  39. R. W. Fuller, "What Is Rankism and Why Do We "Do" It?" Psychology Today, May 25, 2011.
  40. J. Ford, R. Henderson, and D. O'Hare, "The Effects of Crew Resource Management Training on Flight Attendants' Safety Attitudes," Journal of Safety Research, 48, February 2014.
  41. S. Jackson, "Architecting Resilient Systems: Accident Avoidance and Survival and Recovery from Disruptions," in A. P. Sage (eds.) Wiley Series in Systems Engineering and Management John Wiley & Sons, Hoboken, NJ, 2010.
  42. S. Jackson, "Cognitive Bias: A Game Changer for Decision Management?", Insight, 21(4) December 2018.
  43. R. T. Clemen, "Combining Forecasts: A Review and Annotated Bibliography," International Journal of Forecasting, 5(4), 559-583, 1989.
  44. D. Kahneman and D. Lovallo, "Timid Choices and Bold Forecasts: A Cognitive Perspective on Risk and Risk Taking," Management Science, 39, 17-31, 1993.
  45. M. W. Merkhofer, "Assessment, Refinement, and Narrowing of Options," Chapter 8, Tools to Aid Environmental Decision Making, V. H. Dale and M. R. English, eds., Springer, New York, 1998.
  46. This figure is similar to the approach and corresponding figure in R. Howard, "The Foundations of Decision Analysis Revisited," Chapter 3 in Advances in Decision Analysis, W. Edwards, R. Miles, and D. von Winterfeldt, eds., Cambridge University Press, 2007.
  47. M. Lewis, Moneyball, paperback ed., W. W. Norton & Company, 2011.
  48. R. J. Jagacinski and J. Flach, Control Theory for Humans: Quantitative Approaches to Modeling Performance, CRC Press, 318-319, 2003.
  49. R. A. Howard and J. E. Matheson, eds., Readings on the Principles and Applications of Decision Analysis, Menlo Park, CA: Strategic Decisions Group, 1984.
  50. R. L. Keeney, Value-Focused Thinking: A Path to Creative Decisionmaking, Harvard University Press, 1992.
  51. R. T. Clemen and T. Reilly, Making Hard Decisions (3rd ed.). Cengage Learning, 2013.
  52. K. Keyes, Jr., Taming your Mind, Love Line Books, 1991.