The best way to understand project risk or project deferral risk is to characterize the risk by describing the range of possible outcomes, estimating when they will occur (risk timing), and assessing probabilities. If relevant data are available (e.g., as might be the case for system failure probabilities for evaluating reliability maintenance projects), probabilities for characterizing risks can be derived using statistical analysis. In the absence of such data, probabilities must still be assigned, and it makes sense to do so directly based on professional judgment.
Although quantifying risks requires more inputs to describe proposed projects, the additional inputs need not be complex. In the case of unlikely (discrete) risk events, it is far less time-consuming, yet normally entirely adequate, to forsake precision and seek only rough, order-of-magnitude estimates of probabilities and consequences (e.g., is the probability between one chance in one hundred and one chance in one thousand?). Likewise, if some aspect of a project's performance is uncertain (a continuous risk), instead of obtaining only a middle-value point estimate, get a range of possible values (e.g., a 90% confidence interval) as well as a mean or most-likely value. (As indicated in the section of this paper on errors and biases, techniques should be used to guard against overly narrow ranges caused by overconfidence.) With practice, it takes no more time to specify an order-of-magnitude probability or range than it does to generate a single point estimate, and the necessary probability distributions can be constructed from such rough estimates or from a range plus a mean or most-likely value.
The easiest uncertainties to quantify are those associated with random events whose mean rates of occurrence can be measured, such as weather, accident rates, and commodity prices. Limiting risk quantification in this way, however, can create a false sense of lack of urgency. For example, a week before Hurricane Katrina struck, New Orleans hosted an offshore-drilling conference that included a panel discussion entitled "What Has the Industry Learned From Ivan" (Hurricane Ivan had struck the previous September). The lesson was that rigs needed to be much better secured. The industry, however, had not yet made any changes prior to Katrina. The engineering approach to risk assessment told them that hurricanes of this size occur infrequently, which led them to believe that there was plenty of time before the next major storm hit.
Based on best professional judgment, probabilities can be assigned to uncertainties for which no frequency data exist. If you can imagine an event, you can assign a probability to it, if not in absolute terms—0.01%, 1%, 10%—then relative to another event whose probability can be measured. (Before you dismiss the predictive value of subjective probabilities, read the section in Part 7 on predictive markets.)
Libraries of pre-generated probability distributions can be created to describe commonly encountered risks. For example, Chevron is developing a library of accident probability distributions for analyzing capital investment projects at oil terminals, and Shell has developed a library of distributions for hydrocarbon volumes for oil exploration projects. Generating such probability libraries helps ensure consistency and facilitates auditing the assignments over time. The cover story for an issue of the journal ORMS Today argued that companies need to consider appointing a Chief Probability Officer, someone with the job of managing the probability distributions assigned to support project portfolio management.
Once probabilities have been assigned, mathematical reasoning can be used to avoid many of the errors and biases described in Part 1 of this paper. For example, you can calculate how the number of observations affects the accuracy of estimates (to avoid small sample bias) and how the conditions required for an event affect the event's probability (to avoid conjunctive bias). Also, a method known as Bayes' Theorem can be used to calculate how a probability should be revised or updated as new information becomes available.
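As a concrete illustration of Bayesian updating, the sketch below revises the probability that a project is in trouble after a warning sign appears. The scenario and all the numbers are hypothetical, chosen only to show the mechanics of Bayes' Theorem:

```python
# Hypothetical prior and likelihoods (assumed numbers):
p_trouble = 0.10             # prior: 10% of projects hit serious trouble
p_sign_given_trouble = 0.80  # a warning sign appears in 80% of troubled projects
p_sign_given_ok = 0.20       # ...but also in 20% of healthy projects

# Total probability of observing the warning sign (law of total probability)
p_sign = (p_sign_given_trouble * p_trouble
          + p_sign_given_ok * (1 - p_trouble))

# Bayes' Theorem: P(trouble | sign) = P(sign | trouble) * P(trouble) / P(sign)
p_trouble_given_sign = p_sign_given_trouble * p_trouble / p_sign
print(f"updated probability of trouble: {p_trouble_given_sign:.2f}")  # 0.31
```

Note how the warning sign triples the assessed probability of trouble, yet the updated probability remains well below 80%, because troubled projects are rare to begin with; ignoring the prior in this way is exactly the base-rate error discussed in Part 1.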
The amount of uncertainty created by project risks, and the specific project outcomes that are affected, can be used to better estimate the hurdle rates that should be used and the types of benefits to which they should be applied. Quantifying uncertainty also allows more sophisticated methods for accounting for risk aversion (such as risk tolerance, explained in the next subsection) to be employed.
Project Managers Benefit from Using Probabilities to Describe Uncertainties
Although project managers may initially feel uncomfortable with probabilities, my experience is that this group can benefit significantly from moving away from using artificial point estimates. The following is a summary of an example devised by Mark Durrenberger of Oak Associates making this point:
Imagine that a project manager is asked to complete a project in 3 weeks. Suppose the project manager feels that this estimate is unrealistically optimistic; that everything would have to go just right to make the deadline. The project manager may feel apprehensive about going to the project sponsor to address the problem. It may not be easy to explain why an optimistic, aggressive project schedule isn't a good one.
Suppose, instead, that the project manager estimates as a range the time required to complete each project step. Those ranges can be combined (by adding the means and variances) to determine the probability of completing the effort within any specified time. Rather than feeling "at the mercy" of the sponsor, the project manager can now say, "I understand your desire to complete the project within three weeks. However, my calculations suggest that we have less than a 5% chance of meeting that deadline."
The sponsor will want to know more, including how the probability estimate was obtained. This gives the project manager the opportunity to discuss the realities of the job and to negotiate tradeoffs (like providing more resources or eliminating some project deliverables so as to increase the likelihood of meeting the desired schedule).
Note that specifying ranges is not a license for the project manager to make baseless claims. Over time, performance can be compared with range estimates. A project manager whose performance routinely beats the means of his specified uncertainty ranges, for example, will be exposed as one who pads estimates.
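The range-combination step in the example above can be sketched in a few lines. Here the per-step estimates, the PERT-style formulas used to convert a three-point estimate into a mean and standard deviation, and the normal approximation for the total are all illustrative assumptions, not prescribed by the paper:

```python
import math

# Hypothetical per-step duration estimates in days: (low, most-likely, high).
steps = [(3, 5, 9), (4, 6, 11), (3, 6, 10), (4, 7, 12)]

# PERT-style approximations: mean ~ (low + 4*mode + high) / 6,
# standard deviation ~ (high - low) / 6.
means = [(lo + 4 * ml + hi) / 6 for lo, ml, hi in steps]
variances = [((hi - lo) / 6) ** 2 for lo, ml, hi in steps]

# Means and variances add across independent steps; by the central limit
# theorem, the total duration is roughly normal.
total_mean = sum(means)
total_std = math.sqrt(sum(variances))

deadline = 21  # the sponsor's three-week target, in days
z = (deadline - total_mean) / total_std
p_meet = 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF
print(f"expected duration: {total_mean:.1f} days")
print(f"chance of finishing within {deadline} days: {p_meet:.0%}")
```

With these assumed ranges, the expected duration is about 25 days and the chance of hitting the three-week deadline comes out below 5%, which is exactly the kind of defensible statement the project manager in the example brings to the sponsor.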
Quantifying Uncertainty Over Value
Importantly, if probabilities have been assigned to the uncertainties associated with key risks, including scenarios, those probabilities can be propagated through the decision model (described in the previous part of this paper) to derive the uncertainty over the various benefits and total value of the project. This can be done using Monte Carlo analysis or decision trees.
Monte Carlo analysis is a form of simulation for investigating the uncertain behavior of a physical system. Typically, the system is represented by a mathematical model with uncertain parameters or inputs. The uncertainties are described by probabilities. Monte Carlo analysis involves selecting sets of inputs in accordance with the specified probabilities, executing the model, and recording the model output. Since the specific inputs that are selected for any "trial" are generated randomly (according to probabilities), the process is a little like rolling dice (hence the name). If enough trials are conducted, a frequency plot of the model output shows the shape of the probability distribution over the combined outcome.
By applying Monte Carlo analysis to the project decision model, a probability distribution, or "risk profile," can be generated (Figure 31). You can do sensitivity analyses wherein only one die is rolled (only one uncertainty is allowed to vary) while keeping the others fixed. In this way, you can see which uncertainties (risks) have the biggest influence on project value (and focus energies accordingly). The results can be used to investigate changes that might be made to reduce risk. For example, if the distribution says there's a 50 percent probability the project will run a month late, you might decide to build an extra month into the schedule. Using the decision model and Monte Carlo analysis, you can generate a risk profile for the value of each of your proposed projects.
Figure 31: Using Monte Carlo analysis to quantify uncertainty in the value delivered by a project.
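A minimal Monte Carlo sketch follows, assuming a toy decision model in which project value is simply benefit minus cost, with both inputs uncertain. The distributions and parameters are illustrative, not drawn from any real project model:

```python
import random
import statistics

def project_value(benefit, cost):
    """Toy decision model: value = benefit - cost (both in $M)."""
    return benefit - cost

N = 100_000
values = []
for _ in range(N):
    # Sample inputs according to their assumed probability distributions
    # (random.triangular takes low, high, mode).
    benefit = random.triangular(8.0, 20.0, 12.0)  # $M
    cost = random.gauss(9.0, 1.5)                 # $M: mean, std dev
    values.append(project_value(benefit, cost))

# The frequency distribution of the recorded outputs approximates the
# "risk profile" over project value.
mean_value = statistics.mean(values)
p_loss = sum(v < 0 for v in values) / N
print(f"expected value: {mean_value:.2f} $M")
print(f"probability of negative value: {p_loss:.1%}")
```

A sensitivity analysis of the kind described above amounts to rerunning this loop with all but one of the sampled inputs held fixed at their means, and comparing the spread in the outputs.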
Decision trees and event trees represent another means for generating probability distributions over project value. Whereas Monte Carlo analysis excels at simulating what happens when many risks and other uncertainties are in play at once, decision trees and event trees are more effective for representing either-or situations and the sequential uncertainties that follow decisions.
As illustrated by the example in Figure 32, a decision tree is a graphic tree structure composed of decision nodes and chance nodes. The order of the nodes (from left to right) in the tree corresponds to the sequence in which uncertain information is anticipated to be revealed and decisions must be made. A decision tree without decision nodes is sometimes called an event tree.
Figure 32: Decision tree for evaluating a project with uncertainty.
Branches emanating from decision nodes correspond to the alternatives available at points of decision, and branches from chance nodes represent the possible outcomes of risks and other uncertainties, with associated probabilities.
If the decision model is used to compute a project value corresponding to each path through the tree, the values can be displayed at the tree's end points (the probability of each end-point value is the product of the probabilities along the path to that end point). The tree can then be analyzed to determine the risk profile (specifically, a cumulative probability distribution over value, obtained by summing the probabilities of all end points whose values fall at or below each possible value). Decision trees can also be used to compute a risk-adjusted value for each project, using the method of risk tolerance described in the next subsection.
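These end-point calculations can be sketched for a small event tree with two sequential chance nodes. The branch names, probabilities, and end-point values below are all assumed for illustration:

```python
# Two sequential chance nodes: (probability, outcome label) per branch.
market = [(0.6, "strong"), (0.4, "weak")]
execution = [(0.7, "on-time"), (0.3, "late")]

# Hypothetical project value ($M) the decision model assigns to each path.
value_table = {
    ("strong", "on-time"): 10.0,
    ("strong", "late"): 6.0,
    ("weak", "on-time"): 3.0,
    ("weak", "late"): -2.0,
}

# Each end point's probability is the product of the branch probabilities
# along its path.
endpoints = [(p_m * p_e, value_table[(m, e)])
             for p_m, m in market
             for p_e, e in execution]

expected_value = sum(p * v for p, v in endpoints)

# Risk profile: cumulative probability that value is at or below each outcome.
for p, v in sorted(endpoints, key=lambda pv: pv[1]):
    cum = sum(q for q, w in endpoints if w <= v)
    print(f"P(value <= {v:5.1f}) = {cum:.2f}")
print(f"expected value = {expected_value:.2f} $M")
```

For larger trees the same logic applies recursively, rolling expected (or risk-adjusted) values back from the end points toward the root.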
One advantage of decision trees and event trees is that they are convenient for exploring how uncertainties evolve over time (as in Figure 33).
Figure 33: Characterizing risks shows how uncertainties evolve over time.
Risks of the Project Portfolio
Another important reason to consider quantifying project risks is that the overall risk of the project portfolio can then be determined. Conducting a portfolio of projects reduces risks through risk diversification (hedging) in the same way that an individual can reduce financial investment risks by investing in a portfolio of diversified stocks. In a stock portfolio there is a limit to how much diversification can reduce risk. This limit is determined by the degree to which stock prices tend to move together; that is, the degree to which the prices of the stocks in the portfolio are statistically dependent or "correlated" with overall market movements. To understand the risks of a stock portfolio, it is necessary to measure these correlations (this is typically done using the correlation statistic called "beta").
In a project portfolio, as illustrated previously, there are risks (e.g., external risks) that impact multiple projects simultaneously. So, in exactly the same way as with stocks, a project portfolio is not as effective at reducing correlated risks. The only way to estimate accurately the risks of alternative project portfolios, and thereby choose projects that collectively produce maximum value at minimum risk, is to quantify these project risks, including statistical dependencies.
Unlike the case for many financial portfolio risks, there is typically no direct way to measure the statistical dependence among the risks of conducting potential projects. However, models can be constructed to represent the relationships. The key is to identify which risks simultaneously affect which projects, and to use the model to appropriately relate the projects to one another.
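The effect of such shared risks can be shown with a small simulation. In this hypothetical sketch, two similar projects are exposed to an external event (say, a market downturn); in one case each project faces its own independent event, in the other both are hit by the same event. All parameters are assumed for illustration:

```python
import random
import statistics

N = 100_000
random.seed(1)  # fixed seed so the comparison is reproducible

def portfolio_values(shared_risk):
    """Simulate total value ($M) of a two-project portfolio."""
    totals = []
    for _ in range(N):
        downturn = random.random() < 0.2        # common external event
        vals = []
        for _ in range(2):                      # two similar projects
            # Either both projects see the same event, or each faces an
            # independent one with the same 20% probability.
            hit = downturn if shared_risk else (random.random() < 0.2)
            base = random.gauss(5.0, 1.0)       # project-specific value
            vals.append(base - (3.0 if hit else 0.0))
        totals.append(sum(vals))
    return totals

std_independent = statistics.pstdev(portfolio_values(shared_risk=False))
std_correlated = statistics.pstdev(portfolio_values(shared_risk=True))
print(f"portfolio std, independent risks: {std_independent:.2f}")
print(f"portfolio std, shared risk:       {std_correlated:.2f}")
```

The shared-risk portfolio shows noticeably greater spread: diversification is less effective against risks that hit multiple projects at once, which is why the model must capture which risks affect which projects.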
Failure to account for risks that simultaneously impact numerous investments can have devastating consequences. The 2008 financial crisis provides many illustrations. For example, insurance giant American International Group (AIG) used models to assess the risks associated with complicated contracts called credit-default swaps, which totaled more than $400 billion. According to an article in the Wall Street Journal, AIG knew their models left out certain market forces and contract terms, but neglected to expand the models. In retrospect, it was clear that the failure to address the common threats caused AIG to vastly underestimate risk and to continue to purchase the dangerous contracts. Were it not for the government bailout, AIG would have collapsed.
Fully characterizing project and project-deferral risks shows whether the assumptions required for using hurdle rates are satisfied and supports the selection of project-specific hurdle rates. It also allows the use of another approach involving the concept of risk tolerance.