Lee Merkhofer Consulting Priority Systems
Implementing project portfolio management

Why Isn't Everyone Doing This Already?

As this paper has described, methods are available for addressing all of the reasons organizations choose the wrong projects, including quantitative methods for identifying projects on the efficient frontier. Some organizations that regularly make high-stakes, project-selection decisions (e.g., some financial institutions, oil and gas exploration companies, pharmaceutical companies, some high-tech companies, the military and some other government agencies) are already using sophisticated versions of these methods. But, most organizations are not. A fair question is, "If what you've described is so great, why isn't everybody already doing it?"

One practical reason why relatively few organizations are using these techniques is that the relevant models and mathematical operations require sophisticated computer calculations. It is only recently that high-powered computer capability has become widely available, and more recent still that managers have begun to feel comfortable using computers directly in support of their work.

In some organizations, resistance to formal, analytic methods may be politically motivated. Some executives may fear any change that threatens power systems they have worked long and hard to create. A system that causes decisions to be based more on fact and reason will mean that political pressures and false urgencies will be less effective tools for steering the organization. From the personal perspective of a politically savvy executive, particularly one that believes he or she is more able than others to discern what is best for the organization, it may make perfect sense to resist any attempts to adopt analytic methods.

Most organizations, however, are under increasing pressure to find ways to do more with their limited resources, and many managers are facing growing demands to explain the basis for their decisions. Thus, it would seem increasingly unlikely that a few self-serving individuals could successfully discourage their organization from adopting proven analytic solutions. Yet, most organizations continue to be slow in adopting best-practice methods. There must be more fundamental reasons that explain the resistance. Indeed, if what I've said in this paper is true—that organizations may be obtaining only 60% of the value that could be derived from using best-practice project portfolio management—the barriers to adopting such methods must be significant.

I believe that there are five impediments that cause people to resist best-practice methods (Figure 53): (1) the prevalence of misinformation, (2) misplaced pride in intuitive decision making, (3) fear of complexity, (4) the belief that there is too much uncertainty to justify sophisticated methods, and (5) under-appreciation of the predictive power of judgment. Understanding these barriers can help you develop effective strategies for bringing best-practice methods to your organization.



Figure 53:   Barriers to adopting best-practice, analytic, project portfolio management.



Misinformation

In a recent article, Jeffrey Pfeffer and Robert Sutton address the question of why more companies aren't using "evidence-based management," which they define as the conscientious, explicit, and judicious use of current best evidence in support of decision making [3]. Instead, companies seem to repeatedly adopt, then abandon, one ill-supported business fad after another. There exists, they point out, "a huge body of peer-reviewed studies, literally thousands...that although routinely ignored, provide simple and powerful advice about how to run organizations. If found and used, this advice would have an immediate positive effect..."

The failure of most businesses to use sound management science, they argue, is largely due to misinformation. They offer managers these warnings:

"People are trying to mislead you. Because it's so hard to distinguish good advice from bad, managers are constantly enticed to believe in and implement flawed business practices. A big part of the problem is consultants, who are always rewarded for getting work, only sometimes rewarded for doing good work, and hardly ever rewarded for evaluating whether they have actually improved things."

"You are trying to mislead you." People want easy solutions, but applying practices based on theory generally takes more effort. It requires "...a willingness to put aside convenient half-truths and replace these with an unrelenting commitment to gather the necessary facts to make more informed and intelligent decisions." Simplistic approaches are popular because they do not force you to think very hard or to think in new ways.

Unless you are an expert in the relevant theories, it is tough to distinguish sound methods from snake oil. Managers don't have the time to research all of the claims that land on their desks. Partial truths are the most difficult to defend against. Thus, concepts like balance and strategic alignment catch on, even though there is no defensible argument or evidence for claiming that they result in better project choices or improved performance for the organization.

To quote Ronald Howard, "In decision-making, as in many other pursuits, you have a choice of doing something the easy way or the right way, and you will reap the consequences"[4]. Checking the claims of those who are trying to sell you something may take effort, but it can save you from making serious mistakes. Check references. Be suspicious. Ask to see evidence to support claims. Get an objective opinion from an independent expert in the field.

Misplaced Pride in Intuition

In an article entitled "A Brief History of Decision Making," Leigh Buchanan and Andrew O'Connell characterize the current era, in which many leaders seem to take pride in making decisions based on intuition, as a "romance of the gut" [5]. The popular view is that real leaders don't need analysis: "Intuition is one of the X factors separating the men from the boys" [6]. Recent years have seen two best-selling books on this theme. Psychologist Gary Klein's book, Intuition at Work, advises managers to "trust your gut" [7]. Similarly, columnist Malcolm Gladwell's best-selling book Blink argues that instantaneous decisions are often better than those based on lengthy rational analysis [8].

Behavioral psychologists Robin Hogarth and Paul Schoemaker provide a critical review of Blink [9]. They observe that Gladwell's faith in intuition ignores the vast literature on biases (see Part 1 of this paper). They provide counterexamples to Gladwell's examples and demonstrate that some of Gladwell's own case studies seem to imply the opposite of his conclusions. Gladwell himself acknowledges that intuition is not always the best approach, and his book provides an example of an analytic model proving more effective than unaided judgment in an urban hospital.

Author Barry Anderson observes that it "takes courage to be rational" [10]. Analysis can show that what we believe is not consistent with evidence and logic. Also, "analysis is a great leveler of hierarchy." If the decision is going to be made by the facts, then everyone's facts, assuming they are relevant and accurate, are equal. Unlike intuition, analysis can be learned. Analysis changes power dynamics, replacing formal authority, reputation, and intuition with information, data, and logic. This means that senior leaders, often venerated for their wisdom and decisiveness, may lose some stature if their intuitions are replaced by analysis.

Basing decisions on intuition may be easy and attractive, but leaders need to decide what course is really in their best interests. Do they want to use intuition and avoid analysis that can prove them wrong, or do they want to use all effective means to ensure that their organizations actually perform well?

Fear of Complexity

For some managers, trying to select optimal project portfolios may seem too complicated to tackle. Psychologists identify fear of complexity as one of the key pitfalls that prevent people from overcoming important problems. They point out that the perception of complexity is reduced, however, when people use information processing structures (i.e., description languages) that provide a good fit to the complexities they encounter.

Systems modeling, the foundation for the methods described in this paper, is a language for describing and understanding complex problems. Models break a complex problem down into its individual pieces. The critical components are sorted out, identified, and analyzed separately. Computers perform the required synthesis at the end. Thus, systems modeling is the means for breaking down and overcoming complexity. As long as the concepts are understood, the fact that the math may be difficult is not really an issue—computers can handle the math.

Admittedly, systems modeling and the related methods described in this paper, including multi-attribute utility analysis, probabilistic analysis, and risk tolerance, can themselves seem complex. Remember, though, that the most sophisticated tools need not be applied in all situations. If projects do not involve significant risks, there is no need for probabilities and concepts like risk tolerance. More critical and difficult decisions require more sophisticated methods.

I urge organizations to follow the often-quoted advice attributed to Albert Einstein: "Seek the simplest possible solution, but no simpler." In particular, use methods that get the basic concepts right (e.g., the end goal is to create value, not balance). With the learning and familiarity that come from experience, the appropriate methods will no longer seem complex.

Discomfort with Uncertainty

People would much prefer analysis to tell them what will happen, not what might happen. Thus, there is a perception that analysis is less useful in situations that are highly uncertain. Although there are well-established theories and techniques for optimizing decisions involving uncertainty (e.g., decision analysis), these are not topics with which most members of the public, or even most managers, are very familiar. Many managers assume that the great uncertainty in the costs and benefits of projects means there is little value in applying sophisticated techniques to prioritize them.

My experience is that it doesn't take much education for skeptics to agree that sophisticated methods based on probability can significantly improve decisions involving uncertainty. For example, roughly 40 years ago, probabilistic analysis was used to "beat" the game of blackjack. The sticking point is whether the same methods have merit when probabilities must be based on models and judgment rather than purely on "objective data."

The following checklist of questions helps me decide whether I need to quantify judgmental uncertainty:

  1. Is the uncertainty significant? If so, assessing a range of values for the uncertain quantity will make more sense, and be easier, than specifying a single-number best guess.
  2. Does the uncertainty make a difference? Try varying the uncertain quantity across the range of possibilities. The uncertainty only matters if you would want to make decisions differently depending on the actual value.
  3. Do the experts believe some possibilities are more likely than others? If so, a probability distribution can be chosen to quantify those beliefs. I've never encountered a situation where the experts didn't have relevant beliefs, but if I did, there would be good reason to quantify uncertainty using a uniform probability distribution, which assumes that all possibilities are equally likely.

Once uncertainties have been expressed as probability distributions, a probabilistic analysis is usually not much harder than the corresponding analysis without probabilities. (It only becomes more difficult if the experts know a great deal, in which case such knowledge may need to be represented in more complex models that necessitate more sophisticated forms of analysis.)
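As an illustration of how mechanical this step can be, the sketch below propagates a single judgmental cost distribution through a trivial value model using Monte Carlo sampling; the triangular distribution and all dollar figures are hypothetical assumptions, not from any real analysis:

```python
# Propagating a judgmental probability distribution through a simple
# value model with Monte Carlo sampling. All figures are hypothetical.
import random

random.seed(42)       # fixed seed so the illustration is reproducible

benefit = 1_000_000   # point estimate of the project's benefit
N = 100_000           # number of Monte Carlo samples

# Expert judgment encoded as a triangular distribution over project cost:
# low 600k, most likely 900k, high 1.3M.
costs = [random.triangular(600_000, 1_300_000, 900_000) for _ in range(N)]

net_values = [benefit - c for c in costs]
expected_value = sum(net_values) / N
p_loss = sum(v < 0 for v in net_values) / N  # chance the project destroys value

print(f"Expected net value: {expected_value:,.0f}")
print(f"Probability of negative net value: {p_loss:.1%}")
```

The judgment work is in eliciting the distribution; once it is encoded, the machinery above produces both an expected value and an explicit picture of the downside risk.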

The existence of uncertainty does not undermine the usefulness of probabilistic methods. On the contrary, it enhances their usefulness. When significant uncertainties are present, only a systematic and rigorous approach can produce an accurate understanding of risk and support a sound logic for making risky decisions.

Under-Appreciation of the Power of Judgment

"But...," people say, "Probabilities obtained in this way are subjective!" Actually, everything associated with decision-making is subjective, but in the interest of space I won't get into those arguments!

People sometimes confuse subjectivity with bias. As demonstrated in Part 1 of this paper, estimates based on judgment are often biased. However, formal methods are available for mitigating most biases. In the area of probability assessment in particular, considerable effort has been devoted to developing techniques for eliciting probability judgments that accurately reflect the beliefs of those who provide them.

Organizations that eschew analysis because of its dependence on subjective judgment are, in my opinion, under-utilizing their most important resource. Consider the concept known as the "knowledge-based theory of the firm" [10]. The theory argues that knowledge is the only strategically important firm resource. Other resources, like raw materials and electric power, are available at essentially the same prices to all competitors. Knowledge is the only resource that can provide a real advantage. According to the theory, fundamentally what firms do is "apply knowledge to the production of goods and services."

Since knowledge is held by individuals and not the organization, "the central role of the enterprise and its management is to integrate distributed knowledge and make it usable" [11]. Models and probabilities provide the best-known means for doing this. Models capture in a clear and useful fashion people's understanding about what makes the business successful. Probabilities encode beliefs about key uncertainties represented within the models in a precise, transparent way that can be understood by others and that can be used by the models to develop optimal decision strategies.

Although judgmental probabilities are indeed subjective, it is important to appreciate that they are not arbitrary. If a manager is using probabilities properly and says there is a 25% chance that the project will go over budget, that manager is saying that the likelihood of the project going over budget is the same as that of randomly selecting a red ball from an urn containing one red ball and three white balls. Thus, subjective probability is related to an objective reality. Expressing uncertainty as a probability gives a much more precise and useful statement than saying "it's uncertain." Judgmental probabilities can be processed to derive logical inferences. Furthermore, judgmental probabilities can be calibrated to experience. If there is ample evidence that only one-fourth of projects come in on budget, then, presumably, others will have more confidence in the 25% probability judgment.

Predictive Markets

Are subjective probabilities any good? Undoubtedly, the main reason that subjective probabilities aren't used more often is the mistaken belief that they have little merit. But awareness of the predictive power of judgment is now getting a significant boost from the remarkable and well-publicized success of predictive markets. Although they have been around for years, these markets attracted widespread attention after correctly calling the close 2004 presidential election (and again when the government summarily rejected them as a method for predicting terrorist attacks).

In case you haven't heard, predictive markets are betting markets in which participants buy and sell shares of financial assets whose final values are tied to the outcomes of specific uncertain events (e.g., the outcome of an election). In effect, the market generates a price for the uncertain event that reflects a consensus probability that the event will occur. One of the oldest and most famous is the University of Iowa's Iowa Electronic Markets, which, since 1988, has been predicting the results of American presidential elections with more accuracy than polling services. A similar market correctly predicted all of the big-category Oscar winners in 2005 [12].

Even markets in which players bet bragging rights rather than money appear to produce good forecasts. According to one study, web-based betting competitions accurately predict movie box-office returns, the winners of Formula One racing events, and future developments in science and technology [13]. Predictive markets work because they produce forecasts in a way that effectively combines the knowledge of the many participants while avoiding most of the biases identified in Part 1 of this paper.

Seeing opportunity, companies are beginning to create their own predictive markets, focusing on specific uncertainties that they want quantified. Hewlett-Packard, for example, has used internal markets to forecast sales and Eli Lilly has used them to predict the success of drug research [14]. Google recently reported that it uses predictive markets internally for generating strategic insights. The company compared the prices for events with the frequency with which the events occurred (if the prices are correct, events priced at 10 cents should occur about 10 percent of the time). The results were remarkably close [15].
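A calibration check of the kind described above can be sketched in a few lines of Python. The (price, occurred) pairs below are invented for illustration (occurred is 1 if the event happened, 0 if not); a real check would use the market's actual trading history:

```python
# Sketch of a prediction-market calibration check: compare the average
# price of a group of events with the fraction that actually occurred.
# The (price, occurred) pairs below are made-up illustrative data.

def calibration(group):
    """Return (average price, observed frequency) for a list of
    (price, occurred) pairs, where occurred is 1 or 0."""
    avg_price = sum(p for p, _ in group) / len(group)
    frequency = sum(o for _, o in group) / len(group)
    return avg_price, frequency

cheap  = [(0.10, 0), (0.12, 0), (0.09, 1), (0.11, 0), (0.08, 0)]  # ~10-cent events
middle = [(0.48, 1), (0.52, 0), (0.55, 1), (0.45, 0), (0.50, 1)]  # ~50-cent events
dear   = [(0.88, 1), (0.92, 1), (0.90, 1), (0.85, 0), (0.91, 1)]  # ~90-cent events

for name, group in [("cheap", cheap), ("middle", middle), ("dear", dear)]:
    avg, freq = calibration(group)
    print(f"{name}: average price {avg:.2f}, observed frequency {freq:.2f}")
```

If the market is well calibrated, each group's average price should roughly match its observed frequency, exactly the comparison the Google study is described as making.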

The criteria for establishing a successful predictive market are the same as the conditions that decision analysts have established for assessing subjective probabilities:

  1. The uncertain quantity must be precisely and unambiguously defined (see the earlier discussion on the clairvoyant test).
  2. Participants must have some knowledge of the relevant subject matter. (It has been observed, for example, that a predictive market that allowed the general public to bet on the veracity of "String Theory" in particle physics would not be of much value.)
  3. A motivation must exist for people to express what they really believe will happen (not what they would like to happen or what they think others want to hear).

Perhaps awareness of the success of predictive markets will cause more organizations to consider using analytic methods that properly rely heavily on judgment to evaluate and prioritize projects.