
"Making the utility function concept work for an organization interested in improving its ability to select the right projects is largely a matter of obtaining the correct mathematical expression for the organization's utility function."

Creating a Consequence Model for Project Selection

Previous pages have described the first eight steps of my 12-step process for constructing a project selection decision model. The purpose of this page is to clarify two of the remaining steps: creating a model for estimating project consequences and deciding whether the model needs to explicitly quantify uncertainty. Figure 36 identifies these steps and shows where they fit into my 12-step process.



Figure 36:   Steps for creating a project selection decision model.


Deciding Whether to Model Uncertainty

By this step in the process of constructing a project selection decision model, the choice of whether to explicitly model uncertainty will almost certainly already have been made, at least in the mind of the analyst. Deciding to explicitly account for uncertainty increases the complexity of the modeling process. It also means that generating inputs for the model will require more effort. If uncertainty is modeled, users will need to provide estimates of the magnitudes of, at minimum, the project-specific uncertainties (uncertainties that are the same regardless of project choices will most likely be specified ahead of time). If the selected model form is deterministic, there is no need to generate estimates of project-related uncertainties. The model analytics must also be more complex, because modeling uncertainty requires implementing a method for propagating input uncertainties through the model in order to deduce the uncertainty over project value. In short, a decision to include uncertainty in the project selection model increases the time and effort required to construct the model, as well as the effort needed to use the model in support of the prioritization process. Thus, by this stage in the modeling effort, the analyst will almost certainly have a good idea of whether accounting for uncertainty is important enough to justify a more complex probabilistic model.

Why the Decision to Model Uncertainties Appears So Late in the Process

The reason I've placed the step this late in the step-by-step process is that the choice could, if necessary, be delayed until this point without significantly changing the nature of the work conducted previously. In contrast, the decision cannot be postponed beyond this step, because the subsequent steps, the design of the consequence model and the assessment of weights, will be conducted differently depending on whether uncertainty is modeled.

Postponing the decision of whether to model uncertainties to this point does have some advantages. In fact, decision analysts have long argued that it is efficient to first analyze a decision problem without considering uncertainty—this is referred to as conducting the deterministic phase of the analysis. The concept is that a model constructed without regard to uncertainties can be analyzed to guide decisions about whether or not uncertainties should be included, and, if so, which uncertainties to quantify.

The idea of the deterministic phase of a decision analysis is that once built, a deterministic model that relates project performance to project value can be subjected to sensitivity analysis. Through sensitivity analysis, the analyst can determine whether varying project performance estimates across a range of assumptions has much effect on the estimate of project value. It is also possible to conduct a value of information analysis that will set a bound on how much it would be worth to collect more information or to more accurately estimate uncertainties. If sensitivity analysis or value of information analysis indicates the project value is highly sensitive to existing uncertainties, then a case can be made for making the model probabilistic.
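To make this concrete, here is a minimal sketch of a one-way sensitivity analysis in Python. The value function, inputs, and ranges are hypothetical placeholders, not a model from this page; the point is simply the mechanics of sweeping each input while holding the others at base values.

```python
# One-way (deterministic) sensitivity analysis: sweep each input across its
# plausible range while holding the others at base values, and record the
# swing in computed project value. All names and numbers are illustrative.

def project_value(inputs):
    """Hypothetical deterministic value model: value = annual benefit
    times duration, minus cost (in arbitrary value units)."""
    return inputs["annual_benefit"] * inputs["duration_yrs"] - inputs["cost"]

base = {"annual_benefit": 100.0, "duration_yrs": 5.0, "cost": 300.0}

# Low/high plausible values for each uncertain input.
ranges = {
    "annual_benefit": (60.0, 140.0),
    "duration_yrs": (3.0, 8.0),
    "cost": (250.0, 400.0),
}

swings = {}
for name, (low, high) in ranges.items():
    values = []
    for x in (low, high):
        trial = dict(base)
        trial[name] = x
        values.append(project_value(trial))
    swings[name] = max(values) - min(values)

# Inputs with the largest swing are the candidates worth modeling
# probabilistically; the rest can safely stay deterministic.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: swing in project value = {swing:.0f}")
```

Inputs that produce little swing in project value can be left as point estimates; only the high-swing inputs are candidates for probabilistic treatment.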

My Recommendations on Modeling Uncertainty

Having worked with many organizations to design project prioritization models, I can say that it is rare for a probabilistic analysis to make a significant difference in the computed project values or the way projects are prioritized, provided that the existing uncertainties relate to continuous risks only. Modeling project performance uncertainties of this type will cause project values to be slightly lower, particularly if the organization's risk tolerance is low (see Part 4 for an explanation of risk tolerances and the tolerances that might be considered low). Assessing and including uncertainties will always provide more understanding of project values and risks. However, I do not recommend including uncertainty in the project prioritization model if those uncertainties impact only the accuracy of performance level estimates, particularly if the project prioritization model will be the organization's first experience with a judgment-based decision support system.

On the other hand, if there are discrete risks, especially the possibility of accidents that, if they occur, could lead to very serious outcomes such as a death or a very large financial loss to the organization, then my recommendation is that uncertainty and organizational risk tolerance be included in the model. Failing to account for organizational risk tolerance can alter, by several orders of magnitude, the estimated value of projects that reduce such risks. My risk demonstration is included for the purpose of making this point and encouraging organizations to include risk tolerance when prioritizing projects that reduce accident risks. It is essential in such instances that organizational risk tolerance be properly assessed and included in the model (see Part 4 for how to do this).
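As an illustration of why this matters, the sketch below computes the certainty equivalent of a discrete accident risk under an exponential utility function (the form discussed in Part 4). The loss size, probability, and risk tolerance values are invented for illustration.

```python
import math

def certainty_equivalent(outcomes, probs, risk_tolerance):
    """Certainty equivalent under an exponential utility function
    u(x) = 1 - exp(-x / R), where R is the organization's risk
    tolerance (same monetary units as the outcomes)."""
    expected_u = sum(p * (1 - math.exp(-x / risk_tolerance))
                     for x, p in zip(outcomes, probs))
    return -risk_tolerance * math.log(1 - expected_u)

# Illustrative discrete risk: a 1-in-1,000 chance of a $500M accident loss.
loss = -500e6
p_accident = 1e-3
outcomes, probs = [loss, 0.0], [p_accident, 1 - p_accident]

expected_value = p_accident * loss  # -$0.5M expected loss
for R in (100e6, 1e9):              # low vs. high risk tolerance
    ce = certainty_equivalent(outcomes, probs, R)
    print(f"R = ${R/1e6:,.0f}M: CE = ${ce/1e6:.2f}M  "
          f"(EV = ${expected_value/1e6:.2f}M)")
```

With the lower risk tolerance, the risk's certainty equivalent is more than an order of magnitude larger than its expected loss, so a project that eliminates the risk looks correspondingly more valuable than an expected-value calculation would suggest.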



Consequence Models

I use the term consequence model to refer to a model component that appears in nearly every well-designed project selection decision model. A consequence model is, as the name suggests, a model for estimating the consequences of conducting projects. The consequence model takes as input the project scores, or whatever other estimates the user provides, to characterize the project whose value is to be estimated. The outputs of the consequence model are the resulting performance levels, or changes to performance levels, that the value function or utility function needs to compute a value for the project.

The project consequence model will be either deterministic, if uncertainties are excluded from the model, or probabilistic, if uncertainties are included.

Simple versus Complex, Judgment-Based versus Data-Driven

In terms of computational complexity, a consequence model may be so minimal that it is hardly noticeable. On the other hand, the consequence model may be so large and computationally complex that it dwarfs the value function or utility function used to convert consequences into the value of those consequences. (Each of these extremes is illustrated by an example below.) Regardless of its complexity, the consequence model is essential in that it supplies the estimated impacts on organizational performance that would be produced by conducting a project.

Depending on the extent to which hard data are available for characterizing the relationship between projects and project consequences, consequence models fall along a continuum ranging from highly quantitative, data-driven approaches to mostly qualitative, judgment-based approaches. Every approach has its own benefits and drawbacks, but a model based on hard data is simply not obtainable if little or no hard data are available for predicting the consequences of your projects.

Consider Using Influence Diagrams to Design Consequence Models

Influence diagrams are useful for identifying both the factors to be represented in the consequence model and the relationships among the factors that ought to be quantified [7, 8]. Software programs are available that partially automate the process of converting a qualitative influence diagram into a quantitative model for estimating decision consequences [9, 10, 11].

Benefit Timing

It might seem that you can eliminate the need to create a consequence model by having the model user simply input estimates of the performance of each project with respect to each objective. Models based on simple scoring rules often operate in this way. However, projects typically create impacts that begin at some future point in time (e.g., after the project is completed) and end at some future point in time (e.g., when assets created by the project are retired or replaced). If the timing and duration of benefits are relevant to determining project value, as they almost always are, then you will want a consequence model that accounts for these considerations when computing project value. Thus, project consequence models usually provide as output a time series indicating the estimated impact on the time progression of organizational performance. Such models can be regarded as providing a mathematical simulation of the real world, or, more specifically, the part of the world impacted by projects.

Example Model for Benefit Timing

To illustrate, suppose the decision is a choice by a government agency among alternative national health insurance programs. One objective for the decision might be to maximize the number of citizens who gain coverage under the program. The sub-model within the consequence model for measuring the degree to which this objective is met might estimate the annual number of new enrollees in the plan over some time period, say 10 years, during which the plan might be assumed to remain operational. A very simple sub-model, shown in Figure 38, would be to assume the same number of new enrollees each year, beginning with the year the plan first becomes available. This time series would be used by the value model to measure the degree to which the project achieves, over time, the objective of maximizing the number of citizens who gain health insurance coverage under a selected program. This sub-model requires just two inputs: the year the plan would first accept enrollees and the average number of new enrollees each year.



Figure 38:   A simple consequence model.
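In code, this sub-model amounts to a few lines. Here is a minimal sketch, assuming hypothetical parameter names (start_year, annual_enrollees) and the 10-year horizon from the example:

```python
def new_enrollees(start_year, annual_enrollees, horizon_yrs=10):
    """Constant-enrollment sub-model: the same number of new enrollees
    each year, beginning the year the plan first becomes available.
    Returns a list of annual new-enrollee counts for years 1..horizon."""
    return [annual_enrollees if year >= start_year else 0
            for year in range(1, horizon_yrs + 1)]

# Example: plan opens in year 2, 250,000 new enrollees per year.
print(new_enrollees(start_year=2, annual_enrollees=250_000))
```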


Is this model adequate for providing one of the performance estimates needed for a health insurance program selection model, or should a more realistic model for the timing of new enrollees be used? Among other things, the answer depends on (a) whether accounting for year-to-year variations in enrollees makes much difference to the computation of plan value (the sensitivity of project value to consequence model assumptions can be investigated using sensitivity analysis) and (b) whether there are more realistic, well-accepted model forms that could easily be used. To illustrate the latter possibility, an S-curve model is often used to represent market penetration for new products [12]. The S-curve model (Figure 39) fits empirical data showing that sales of new products typically follow three phases: introduction (when the product is not yet widely understood or appreciated), growth (when sales are increasing), and maturity (when sales level off).



Figure 39:   A more realistic S-curve model.


Being more complex, the S-curve model requires more inputs. In addition to the number of years until enrollment begins, the model requires: (1) the number of annual enrollments once the product has become well established (the "saturation" level), (2) the year at which rapid growth begins (the "hyper growth" time, assumed to be the number of years until sales reach 10% of the saturation level), and (3) the number of years over which fast growth continues (the "takeover time"). Given these inputs, the equation for the S-curve will generate a time series estimate for the annual number of new enrollees similar to that depicted by the S-curve shape illustrated above.
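One common way to implement such an S-curve is as a logistic function. The sketch below maps the inputs described above onto logistic parameters; this particular parameterization is my assumption (the page does not give the equation), and it interprets "takeover time" as the time to grow from 10% to 90% of saturation.

```python
import math

def s_curve_enrollees(start_year, saturation, hyper_growth_yrs,
                      takeover_yrs, horizon_yrs=10):
    """Logistic S-curve for annual new enrollees.
    saturation       -- annual enrollments once the plan is well-established
    hyper_growth_yrs -- years (after launch) until 10% of saturation
    takeover_yrs     -- years to grow from 10% to 90% of saturation
    """
    k = 2 * math.log(9) / takeover_yrs           # growth-rate parameter
    t_mid = hyper_growth_yrs + takeover_yrs / 2  # year of 50% saturation
    series = []
    for year in range(1, horizon_yrs + 1):
        t = year - start_year                    # years since launch
        series.append(0.0 if t < 0
                      else saturation / (1 + math.exp(-k * (t - t_mid))))
    return series

# Example: 400K/yr saturation, 10% reached 2 yrs after launch, 3-yr takeover.
for y, n in enumerate(s_curve_enrollees(1, 400_000, 2, 3), start=1):
    print(f"year {y}: {n:,.0f} new enrollees")
```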

Consequence Sub-Models

To build a consequence model for project selection, the analyst typically devises an equation or algorithm for estimating each of the model's performance measures. As a result, consequence models are often composed of interconnected sub-models, the job of each being to estimate one of the model's performance measures. If, as in the above example, the projects being prioritized are alternative national health insurance programs, then the S-curve for the annual number of new enrollees might be one of several sub-models constructed for the consequence model.

The sub-models must be connected to ensure consistency in the assumptions that are common to each. In the above example, the new-enrollee sub-model establishes the number of people who, each year, will have coverage under the plan. Therefore, other performance measures that depend on the number of people covered by the plan must utilize the prediction provided by the new-enrollee sub-model. If, for example, there is a performance measure for the number of people who obtain improved health care, then that sub-model must utilize the predicted numbers of people with coverage as specified by the new-enrollee sub-model. Likewise, any performance measures for the costs to insurance providers or the costs to the plan's enrollees must assume estimates for the number of people with coverage consistent with those produced by the new-enrollee sub-model.
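A sketch of what such a linkage might look like in code, using a hypothetical cost sub-model that consumes the coverage series derived from the new-enrollee sketch above (the per-person cost is an invented illustration):

```python
def cumulative_covered(new_enrollees_by_year):
    """People covered each year: running total of new enrollees
    (disenrollment ignored, for simplicity)."""
    covered, total = [], 0
    for n in new_enrollees_by_year:
        total += n
        covered.append(total)
    return covered

def annual_plan_cost(covered_by_year, cost_per_person):
    """Cost sub-model consumes the SAME coverage series, keeping its
    assumptions consistent with the new-enrollee sub-model."""
    return [c * cost_per_person for c in covered_by_year]

# Output of the new-enrollee sub-model (from the sketch above).
enrollees = [0] + [250_000] * 9
covered = cumulative_covered(enrollees)
costs = annual_plan_cost(covered, cost_per_person=6_000.0)  # illustrative
```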

Constructing Consequence Models—Need for Subject-Matter Expertise

An obvious prerequisite for being able to construct a model for estimating how some system will respond to a proposed project is an understanding of the system and the project. Continuing the above example, building a credible model for predicting the number of people who would enroll in a new health insurance plan requires participation from people knowledgeable about the health insurance market, including the attributes of a health insurance plan that determine the number of people who would purchase that plan. More generally, an effort to construct a consequence model for an organization's projects requires participation from "subject matter experts"; that is, people recruited from the organization with the best understanding of the real-world systems within which the organization operates. For this reason, and because the analyst with the necessary mathematical model-building skills may not have the best understanding of the relevant real-world systems, constructing a consequence model is typically a team effort.

Constructing Consequence Models—Need for Mathematical Modeling Expertise

In addition to needing real-world knowledge of the systems relevant to predicting project consequences, the model-building team must be led by an analyst with the requisite mathematical and modeling skills. The basic mathematical building blocks for constructing consequence models are the same as those used in all other types of models (a minimal sketch of one such building block follows the list). For example, a consequence model might include sub-models that are:

  • stochastic or deterministic
  • linear or nonlinear
  • differential equations, partial differential equations, or difference equations
  • logic models (e.g., fault trees)
  • parametric or non-parametric models
  • constrained or unconstrained
  • state space transition models
  • consumer choice models
  • game theoretic models
  • trees and lattices
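To make one of these building blocks concrete, here is a minimal sketch of a difference-equation (state-transition) sub-model. The asset-fleet setting and all rates are illustrative assumptions, not drawn from this page.

```python
def working_fraction(years, fail_rate, repair_rate, start=1.0):
    """Two-state (working/failed) difference-equation model:
    w[t+1] = w[t] - fail_rate * w[t] + repair_rate * (1 - w[t])."""
    w, series = start, []
    for _ in range(years):
        w = w - fail_rate * w + repair_rate * (1 - w)
        series.append(w)
    return series

# A project that raises the repair rate shifts the whole performance series;
# the difference between the two series is the project's consequence.
baseline = working_fraction(10, fail_rate=0.2, repair_rate=0.5)
with_project = working_fraction(10, fail_rate=0.2, repair_rate=0.8)
```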

The skills needed to create mathematical models are particularly difficult to teach. Traditional mathematical skills, such as familiarity with the above types of mathematical relationships, are certainly useful, if not a necessary qualification. However, having mathematical skills alone doesn't ensure that an individual will have the ability to create useful models. Professor Richard Smallwood, who for many years taught a course at Stanford University entitled "The Art of Mathematical Modeling," likens modeling to "sculpting." He believes the best way to teach modeling skills is by establishing a "studio," where aspiring modelers can watch how "masters" work, practice, and have the models they create critiqued [13]. In the absence of studios for aspiring modelers, my advice is to seek out consequence models that others have created for performance measures similar to those selected for your organization.

Fortunately, existing sub-models for many types of common project impacts are easy to obtain. In particular, the methods for estimating project financial impacts are well established, and most organizations have defined a business case model for quantifying project financial performance in terms of net present value (NPV). Often, though, I've found that an organization's prescribed methods for evaluating the financial performance of an investment are designed for application to larger projects that are typically better defined than many of the organization's other projects requiring prioritization. What is often needed is a streamlined approach for estimating the impacts of projects on the organization's cash flows. Creating a simplified method for approximately estimating each project's present value of incremental cash flows will require coordinating with the organization's finance department to ensure that the streamlined approach incorporates assumptions (e.g., discount rates) consistent with those used elsewhere by the organization for financial analysis of investments.
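A streamlined NPV sub-model can be very small indeed. Here is a minimal sketch; the discount rate would come from the finance department, and the cash flows shown are invented for illustration:

```python
def npv(incremental_cash_flows, discount_rate):
    """Present value of a project's incremental cash flows, with
    flows[t] assumed to occur at the end of year t+1."""
    return sum(cf / (1 + discount_rate) ** (t + 1)
               for t, cf in enumerate(incremental_cash_flows))

# Example: $1.2M outlay in year 1, then five years of $400K net benefits.
flows = [-1_200_000] + [400_000] * 5
print(f"NPV at 8%: ${npv(flows, 0.08):,.0f}")
```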

Other easy-to-obtain sub-models for common business objectives include models for estimating R&D success, the impact of project characteristics on customer purchase decisions, the effectiveness of projects at reducing pollution, the productivity and reliability of business assets, and so on. Methods for estimating health and environmental performance are also well established, though creating sub-models in these areas typically requires risk assessment expertise. Happily, for those who are not experts at creating mathematical models, the most common approach to building simple consequence models involves finding, selecting, customizing, and piecing together sub-models that have worked well in other applications.

Criteria for Selecting Models

As in other situations requiring the choice of a model type, selecting the type of model best suited to estimating project impacts on performance requires weighing model strengths and weaknesses. Criteria typically recommended for model selection include realism, capability, flexibility, ease of use, cost, and ease of computerization [14].

Large, Complex Consequence Models

Consequence models, like other kinds of mathematical models, can become quite large. Generally, the more capital an organization spends on projects and the more complex the system determining project performance, the larger and more sophisticated the consequence models are likely to be. In my experience, some of the largest and most complex consequence models are used by transportation agencies, pharmaceutical companies, oil and gas companies, automotive manufacturers, and, especially, government agencies that fund environmental cleanup projects. In the case of transportation projects, for example, models are available for predicting the impacts of transportation system projects on traffic congestion, travel times, and pollutant emissions. Transportation models may simulate the paths of individual vehicles along a particular road, or operate at a more macroscopic level, representing, for example, the speed, flow, and density of traffic in the various sections of a transportation network.


Figure 40:   Simulated contaminant transport paths for the NTS.

To provide an example, the most complex consequence model I've encountered for prioritizing projects was a three-dimensional contaminant transport model developed for the Nevada Test Site (NTS) [15]. The NTS is the location where the U.S. conducted more than 800 tests of nuclear weapons. The tests left massive underground deposits of radioactive material, including carbon-14, cesium, plutonium, and tritium. The purpose of the priority system was to select projects for a $212 million project portfolio for reducing contaminant transport uncertainties at the site, including uncertainty over the upper bound (95% confidence) estimate of the "contaminant boundary," defined as the maximum distance radioactive contaminants will travel over the next 1,000 years [16].

To prioritize projects, each project's expected reduction in the upper bound estimate of the contaminant boundary was computed [17]. This required a Bayesian analysis, with Monte Carlo simulations of contaminant migrations used to generate probability distributions over the location of the contaminant boundary. The plot in Figure 40 shows some of the simulated contaminant pathways generated using the consequence model. The team, including me, so underestimated the time required to run each simulation that it was necessary, on the weekend before the prioritization deadline, to commandeer all of the computers in the Nevada office of the prime contractor and run them simultaneously and continuously to produce the required consequence simulations.
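To convey the kind of computation involved, here is a toy sketch, emphatically not the actual NTS model: a one-line distance function stands in for the 3-D transport simulation, and the distributions and the assumed reduction in uncertainty are invented for illustration.

```python
import random

def travel_distance(velocity_m_per_yr, years=1000):
    """Stand-in for the transport simulation: distance = velocity * time."""
    return velocity_m_per_yr * years

def upper_bound(samples, conf=0.95):
    """Empirical upper-confidence percentile of the sampled boundary."""
    s = sorted(samples)
    return s[int(conf * len(s)) - 1]

random.seed(1)
# Prior uncertainty over groundwater velocity (lognormal, illustrative).
prior = [travel_distance(random.lognormvariate(0.0, 0.8))
         for _ in range(10_000)]
print(f"95% upper bound on boundary: {upper_bound(prior):,.0f} m")

# A characterization project that narrows the velocity uncertainty
# (sigma 0.8 -> 0.4, an invented effect) pulls the upper bound inward;
# the difference is the project's estimated reduction in the boundary.
posterior = [travel_distance(random.lognormvariate(0.0, 0.4))
             for _ in range(10_000)]
print(f"Reduction: {upper_bound(prior) - upper_bound(posterior):,.0f} m")
```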

Why Organizations Favor Using Consequence Models

Organizations would not be using sophisticated project consequence models if they did not perceive benefits from doing so. The main benefits cited for constructing and using consequence models are:

  • Decision focus: The models often provide the ability to explore different project designs, thus allowing for project optimization in addition to project prioritization.
  • Retained institutional knowledge: The model becomes a component of the organization's knowledge infrastructure.
  • Greater accuracy: The model employs actual physical information, not just expert opinions.
  • Directed information gathering: As demonstrated by the NTS example, the model may often be used to prioritize information-gathering opportunities.
  • Traceable results: The model's logic and assumptions provide transparency for decision making, often useful for demonstrating to outsiders the depth and care taken when making important and potentially controversial choices.
  • Immediate response: The model often enables real-time responses to "What if?" questions.
  • Continuous improvement: The model can be refined, updated, and enhanced as knowledge accumulates.

I apply a simple rule-of-thumb for answering the question of whether a model should be created to simulate the consequences of projects: If modeling makes specifying project consequences easier or more accurate, or produces greater confidence in the estimates, then, by all means, consider constructing a model.

Bootstrapping

Although there are often good arguments for creating sophisticated consequence models, creating overly complex, difficult-to-understand models is a common failure mode for priority system design. An important consideration in the development of a consequence model is the need to ensure that the individuals within the organization who use the model, especially decision makers, fully understand the model and are comfortable with its level of complexity.

One often-recommended approach, sometimes referred to as bootstrapping, is to first develop a highly simplified and easily understood analysis that omits all but the most significant considerations [18]. Once the decision maker has gained confidence with the simple model, additional detail and sophistication can be added. However, even when a decision maker is supportive of increasing model sophistication, the understanding gained from further modeling can decrease if the model becomes unnecessarily complex. The above-referenced model for selecting information-gathering projects for the NTS provides an example of exactly this possibility. An earlier prioritization of information-gathering projects for a different segment of the NTS used a simpler version of the contaminant transport simulation model [19]. In that case, it was fairly easy to understand the connection between the characteristics of a proposed project (e.g., drilling a groundwater monitoring well in a given location) and the resulting changes to posterior probability distributions for the contaminant boundary. The subsequent analysis, described above, used an expanded model for contaminant transport that included simulation of pathways within the vadose zone (above the groundwater table) [20]. With this more complex model, it was difficult for many members of the project team to understand the connection between specific projects and the corresponding posterior distributions for the location of the contaminant boundary.

Using oversimplified models, on the other hand, especially those that omit important aspects of the problem needed to provide good predictions, can also limit and confound understanding. The goal of model design is to hit the "sweet spot" of model sophistication, regardless of whether complexity is added sequentially to an overly simple model or pared back from an initial, more highly detailed model [18].

References

  1. J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, 2007.
  2. P. C. Fishburn, "Utility Theory for Decision Making," (No. RAC-R-105), Research Analysis Corp., McLean, VA, 1970.
  3. A. G. Longley-Cook, "Risk-Adjusted Economic Value Analysis," North American Actuarial Journal, 2(1), 87-98, 1998.
  4. J. S. Dyer and R. K. Sarin, "Measurable Multiattribute Value Functions," Operations Research, 27(4), 810-822, 1979.
  5. J. E. Matheson and A. E. Abbas, "Utility Transversality: A Value-Based Approach," Journal of Multi-Criteria Decision Analysis, 13(5-6), 229-238, 2005.
  6. J. S. Dyer and R. K. Sarin, "Measurable Multiattribute Value Functions," Operations Research, 27(4), 810-822, 1975.
  7. R. A. Howard and J. E. Matheson, "Influence Diagrams," Decision Analysis, 2(3), 127-143, 2005.
  8. M. W. Merkhofer, "Using Influence Diagrams in Multiattribute Utility Analysis—Improving Effectiveness Through Improving Communication," in Influence Diagrams, Belief Nets and Decision Analysis, 297-317, 1990.
  9. DPL Standard Manual, Syncopation Software, Inc., 2008.
  10. L. Chrisman, M. Henrion, and R. Morgan, Analytica Users Guide, Lumina Decision Systems, Inc., 2015.
  11. P. McGinley, "Decision Analysis Software Survey," OR/MS Today, 39, 2012.
  12. V. Mahajan and R. A. Peterson, Models for Innovation Diffusion (Vol. 48), Sage, 1985.
  13. R. Smallwood, personal communication, November 13, 2016.
  14. J. B. Kadane and N. A. Lazar, "Methods and Criteria for Model Selection," Journal of the American Statistical Association, 99(465), 279-290, 2004.
  15. U.S. Department of Energy, Nevada Operations Office, "Regional Groundwater Flow and Tritium Transport Modeling and Risk Assessment of the Underground Test Area, Nevada Test Site, NV," DOE/NV-477, Las Vegas, NV: Environmental Restoration Division, 1997.
  16. C. Zheng and G. D. Bennett, Applied Contaminant Transport Modeling (Vol. 2), New York: Wiley-Interscience, 2002.
  17. B. Deshler, H. Kieffel, M. Merkhofer, and S. Mishra, "Value of Information Analysis to Support Data Collection for Characterizing Radionuclide Transport at the Nevada Test Site," Proceedings, Managing Watersheds for Human and Natural Impacts, 1-12, 2005.
  18. K. K. Damghani, M. T. Taghavifard, and R. T. Moghaddam, "Decision Making Under Uncertain and Risky Situations," Enterprise Risk Management Symposium Monograph, Society of Actuaries, Schaumburg, Illinois (Vol. 15), 2009.
  19. "Value of Information Analysis for Corrective Action Unit No. 98: Frenchman Flat, Nevada Test Site," IT Corporation, Nevada, June 1997.
  20. "Value of Information Analysis for Corrective Action Unit 97: Yucca Flat, Nevada Test Site," IT Corporation, Nevada, April 1999.