"Making the utility function concept work for an organization interested in improving its ability to select the right projects is largely a matter of obtaining the correct mathematical expression for the organization's utility function." 
Previous pages have described the first eight steps of my 12-step process for constructing a project selection decision model. The purpose of this page is to clarify two of the remaining steps: creating a model for estimating project consequences and deciding whether or not the model needs to explicitly quantify uncertainty. Figure 36 identifies these steps and shows where they fit into my 12-step process.

Figure 36: Steps for creating a project selection decision model.

Deciding Whether to Model Uncertainty

By this step in the process of constructing a project selection decision model, whether or not to explicitly model uncertainty is a choice that will almost certainly already have been made, at least in the mind of the analyst. Deciding to explicitly account for uncertainty increases the complexity of the modeling process. It also means that generating inputs for the model will require more effort. If uncertainty is modeled, users will need to provide estimates of the magnitudes of those uncertainties, at minimum the uncertainties that are project-specific (uncertainties that are the same regardless of project choices will most likely be specified ahead of time). If the selected model form is deterministic, there is no need to generate estimates of project-related uncertainties. Also, the model analytics must be more complex, because modeling uncertainty makes it necessary to implement a method for propagating input uncertainties through the model in order to deduce the uncertainty over project value. In short, a decision to include uncertainty in the project selection model increases both the time and effort required to construct the model and the effort needed to use the model in support of the prioritization process. Thus, it is unlikely that by this stage in the modeling effort the analyst would not have a good idea of whether accounting for uncertainty is sufficiently important to justify a more complex probabilistic model.
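To make the idea of propagating input uncertainties concrete, here is a minimal Monte Carlo sketch. The triangular input distributions, their ranges, and the toy value formula are illustrative assumptions only, not part of any particular organization's model.

```python
import random

def project_value(annual_benefit, duration_years):
    # Toy deterministic value model: total value = annual benefit x duration.
    return annual_benefit * duration_years

def simulate(n=10_000, seed=1):
    """Propagate input uncertainty through the model by sampling inputs."""
    random.seed(seed)
    values = []
    for _ in range(n):
        benefit = random.triangular(80, 150, 100)   # uncertain annual benefit ($K), assumed range
        duration = random.triangular(3, 7, 5)       # uncertain benefit duration (years), assumed range
        values.append(project_value(benefit, duration))
    values.sort()
    return {"mean": sum(values) / n,
            "p10": values[int(0.10 * n)],   # 10th percentile of project value
            "p90": values[int(0.90 * n)]}   # 90th percentile of project value
```

The spread between the 10th and 90th percentiles is the deduced uncertainty over project value; a deterministic model would instead return the single value computed from base-case inputs.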

Strategy for Creating a Project Selection Decision Model

If you've been reading along, at least since the page where I first described the concept of a project selection model, you probably don't need to read the summary I've placed in this side box. However, if you've been skimming the material on the previous pages, or, especially, if you've just landed on this page, reading this summary of key concepts, terminology, and assumptions will reduce the potential for confusion and save you time going forward, assuming your goal is to understand my recommended strategy for creating a project selection decision model.

Terminology

On this website, the terms performance measure and attribute mean the same thing. Also, the terms single-attribute value function and scaling function mean the same thing. I use both of these respective terms because you will see them both in the literature. Also, there seem to be contexts where each term better expresses its purpose.

Value Maximization

A project selection decision model prioritizes the projects to be included in the organization's project portfolio based on the principle of value maximization. The model estimates the value of each project and then prioritizes the projects according to the ratio of project value to project cost. The ratio of project value to project cost is the correct metric for prioritizing projects that are independent of one another.

Consequence Model

Project value depends on the outcomes or consequences of conducting the project. In other words, the estimate of project value is the estimate of the value of the project's consequences (what obtaining those consequences is worth to the organization, or, if there is uncertainty, what obtaining the lottery over project consequences is worth to the organization). For this reason, the project selection model must include a means for specifying the presumed project consequences. I refer to this submodel as a consequence model.
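The value-maximization ranking rule summarized above reduces to a few lines of code; the project names, values, and costs here are invented purely for illustration.

```python
# Hypothetical candidate projects with estimated values and costs ($K).
projects = [
    {"name": "A", "value": 900, "cost": 300},
    {"name": "B", "value": 500, "cost": 100},
    {"name": "C", "value": 400, "cost": 200},
]

# Prioritize by the ratio of project value to project cost ("bang for the buck").
ranked = sorted(projects, key=lambda p: p["value"] / p["cost"], reverse=True)

def select(ranked_projects, budget):
    """Fund projects in ratio order until the budget is exhausted
    (valid when the projects are independent of one another)."""
    chosen, spent = [], 0
    for p in ranked_projects:
        if spent + p["cost"] <= budget:
            chosen.append(p["name"])
            spent += p["cost"]
    return chosen
```

With a budget of 400, the rule funds B (ratio 5) and then A (ratio 3), leaving C (ratio 2) unfunded.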
The figure below illustrates the assumption that the consequence model is a submodel within the project selection model.

Figure 36: A project selection model contains two submodels.

Capturing the Preference Structure of an Organization

The totality of all of the value judgments relevant to choosing the projects for an organization might be termed the "preference structure" of the organization. A project selection decision model is based on the concept that the organization's preference structure can be deduced by questioning the organization's senior decision makers. The preference structure thus obtained is then represented within a model. Once constructed, the model can be used to identify (more quickly, consistently, and reliably) the choices that decision makers would make if they spent the necessary time analyzing the decision problem and were free from the influence of common biases and decision-making errors.

Organizational Objectives

The value judgment having the greatest influence on project choices concerns the organization's objectives—the value an organization attributes to a project depends (or should depend) on the estimated degree to which doing the project helps the organization achieve its objectives. A project that appears valuable to one organization may not look at all attractive to another. A dance studio, for example, has different objectives than a police department, and that's the primary reason they choose different projects. A project selection model must be designed to capture the specific objectives of the organization for which the model is being created.

Other Value Judgments

Although objectives may be the most important of all the value-based judgments relevant to the selection of projects, other value judgments also affect project choices.
These other judgments include the preferences of the organization's decision makers for trading off performance relative to different objectives (How should achievement of our different objectives be weighted?) as well as their willingness to accept risk (What is our risk tolerance?).

Utility Theory

There is a theory, one that has been around for over 50 years, that underlies the approach recommended on this site for valuing projects. The theory is known as utility theory [1]. According to utility theory, all of the preference judgments needed to compute project value may be captured in a mathematical expression called a utility function. In order to obtain a utility function for an organization, it is necessary to have the organization's senior executives participate in a formal, interview-like process that requires them to express preferences for hypothetical project outcomes. It has been proven that the only requirement to ensure that a utility function exists is that decision makers agree to a few easy-to-accept principles (axioms) for what it means to make rational choices [2]. Making the utility function concept work for an organization interested in improving its ability to select projects is largely a matter of obtaining the right mathematical expression for its utility function.

Utility Functions versus Value Functions

An instance where a distinction between two related terms matters concerns utility functions. A utility function may be constructed for measuring project value when project performance is uncertain. In this case, the utility function produces a risk-adjusted value for the project [3]. A utility function may also be constructed for measuring project value in situations where there is no uncertainty (or where uncertainty is ignored). In this case, there is no need for the utility function to adjust estimated project value for risk.
The nature of the two functions is different, and to distinguish them, a utility function that does not account for risk is called a value function, also called a measurable value function. A value function is a special case of the more general term, utility function [4].

Modeling Strategy

Figure 37 identifies the steps I recommend for creating a project selection decision model.

Figure 37: Recommended modeling strategy.

These steps may differ from similar model-construction steps that you may read about elsewhere. In particular, regardless of whether uncertainty will be modeled, this strategy calls for deriving a value function to translate estimates of the performance achieved under a project into the value of the project. In my experience, most senior decision makers within organizations want the ability to compute an equivalent dollar value for their candidate projects (even though the computed value may not yet be adjusted for risk) and want to know how this project value breaks out into the various types of value defined by the organization's objectives. However, if for any reason dollar equivalents for projects are not desired, project value can be scaled and expressed in relative units. Following the computation of the value of candidate projects, the organization's risk tolerance can be used along with an exponential function to translate the value function into an exponential utility function from which risk-adjusted project value may be calculated. This strategy (which runs counter to the strategy of seeking an additive utility function) has been advocated by a number of researchers, who argue for first obtaining a value function and subsequently converting it into a cardinal utility function because doing so simplifies the assessment process by minimizing the need to ask decision makers to express preferences over hypothetical choices among lotteries [5].
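As a sketch of this conversion, assume an exponential utility function of the form U(v) = 1 - exp(-v/R) with risk tolerance R; both the functional form and the numbers below are illustrative, not a prescription.

```python
import math

def utility(v, risk_tolerance):
    """Exponential utility of a project value v."""
    return 1.0 - math.exp(-v / risk_tolerance)

def risk_adjusted_value(outcome_values, probabilities, risk_tolerance):
    """Certainty equivalent: the sure value whose utility equals the
    expected utility of the lottery over possible project values."""
    eu = sum(p * utility(v, risk_tolerance)
             for v, p in zip(outcome_values, probabilities))
    return -risk_tolerance * math.log(1.0 - eu)

# A 50/50 lottery between a project value of 0 and 100 ($K, assumed):
# with R = 100, the risk-adjusted value falls below the expected value
# of 50, reflecting aversion to risk.
ce = risk_adjusted_value([0, 100], [0.5, 0.5], 100)
```

Larger risk tolerances push the certainty equivalent back toward the expected value, which is why assessing R (Part 4) matters.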
Preferential Independence

The key requirement for making my recommended strategy work is defining objectives and performance measures that are mutually preferentially independent. If the performance measures are mutually preferentially independent, then the value function will have an additive form [6]. If the value function has an additive form, it can be specified by providing a single-attribute value function and a weight for each performance measure. If any of the objectives and associated performance measures aren't preferentially independent, then the value function won't be additive. If this is the case, you need to take another look at the objectives and performance measures and try again. The only exception is a case where you can discern some specific, non-additive (or partially additive) functional relationship among the objectives and their performance measures. This was the case in the example prioritization of transportation projects described on a previous page, where it was obvious that a probability-of-success objective implies a multiplicative relationship in the value model. Once you've been able to establish a mathematical form for the value function that is mostly or entirely additive, you can begin to flesh out the function by specifying a single-attribute value function for each performance measure. Finally, and before you assign weights, you need to choose whether to explicitly account for uncertainty and risk.

Why the Step to Decide Whether to Model Uncertainties Appears So Late in My Project Selection Model-Building Process

The reason that I've placed the step this late in the step-by-step process is that the choice could, if necessary, be delayed until this point without significantly changing the nature of the work conducted previously.
In contrast, the decision cannot be delayed beyond this step, because the subsequent steps, the design of the consequence model and the assessment of weights, will be conducted differently depending on whether uncertainty is modeled. Postponing the decision of whether to model uncertainties to this point does have some advantages. In fact, decision analysts have long argued that it is efficient to first analyze a decision problem without considering uncertainty—this is referred to as conducting the deterministic phase of the analysis. The concept is that a model constructed without regard to uncertainties can be analyzed to guide decisions about whether or not uncertainties should be included, and, if so, which uncertainties to quantify. The idea of the deterministic phase of a decision analysis is that, once built, a deterministic model that relates project performance to project value can be subjected to sensitivity analysis. Through sensitivity analysis, the analyst can determine whether varying project performance estimates across a range of assumptions has much effect on the estimate of project value. It is also possible to conduct a value-of-information analysis that will set a bound on how much it would be worth to collect more information or to more accurately estimate uncertainties. If sensitivity analysis or value-of-information analysis indicates that project value is highly sensitive to existing uncertainties, then a case can be made for making the model probabilistic.

Deciding Whether to Model Uncertainty

Having worked with many organizations to design project prioritization models, I can say that it is rare to find that a probabilistic analysis makes a significant difference in the computations of project value or the way projects are prioritized, provided that the existing uncertainties relate to continuous risks only.
Modeling project performance uncertainties of this type will cause project values to be slightly lower, particularly if the organization's risk tolerance is low (see Part 4 for an explanation of risk tolerances and the tolerances that might be considered low). Assessing and including uncertainties will always provide more understanding of project values and risks. However, I do not recommend including uncertainty in the project prioritization model if those uncertainties impact the accuracy of performance level estimates only, particularly if the project prioritization model will be the organization's first experience with a judgment-based decision support system. On the other hand, if there are discrete risks, especially the possibility of accidents that, if they occur, could lead to very serious outcomes such as a death or a very large financial loss to the organization, then my recommendation is that uncertainty and organizational risk tolerance be included in the model. Failure to account for organizational risk tolerance can impact the estimated value of projects that reduce such risks by several orders of magnitude. My risk demonstration is included for the purpose of making this point and encouraging organizations to include risk tolerance when prioritizing projects that reduce accident risks. It is essential in such instances that organizational risk tolerance be properly assessed and included in the model (see Part 4 for how to do this).

Consequence Models

I use the term consequence model to refer to a model component that appears in nearly every well-designed project selection decision model. A consequence model is, as the name suggests, a model for estimating the consequences of conducting projects. The consequence model takes as its input project scores or whatever other estimates the user inputs to the model to characterize the project whose value is to be estimated.
The outputs of the consequence model are the resulting performance levels, or changes to performance levels, that the value function or utility function needs to compute a value for the project. The project consequence model will be either deterministic, if uncertainties are excluded from the model, or probabilistic, if uncertainties are included.

Simple versus Complex, Judgment-Based versus Data-Driven

In terms of computational complexity, a consequence model may be so minimal that it is hardly noticeable. On the other hand, the consequence model may be so large and computationally complex that it dwarfs the value function or utility function used in the model to convert consequences to the value of those consequences. (Each of these extremes is illustrated by an example below.) Regardless of the complexity of the consequence model, the model is essential in that it supplies the estimated impacts to organizational performance to be produced by conducting a project. Depending on the extent to which hard data are available for characterizing the relationship between projects and project consequences, consequence models fall along a continuum ranging from highly quantitative, data-driven approaches to mostly qualitative, judgment-based approaches. Every approach has its own benefits and drawbacks, but you're not going to be able to obtain a model based on hard data if little or no hard data are available for predicting the consequences of your projects.

Consider Using Influence Diagrams to Design Consequence Models

Influence diagrams are useful for identifying both the factors to be represented in the consequence model and the relationships among the factors that ought to be quantified [7, 8]. Software programs are available that partially automate the process of converting a qualitative influence diagram into a quantitative model for estimating decision consequences [9, 10, 11].
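Even without such software, the structure an influence diagram implies can be prototyped directly: each factor becomes a node computed from its parent factors. The factor names and formulas below are invented solely to show the pattern.

```python
# Nodes listed in dependency (topological) order; each factor is computed
# from previously computed factors. All relationships here are hypothetical.
NODES = {
    "marketing_effort": lambda d: d["score_marketing"] * 10,
    "awareness":        lambda d: 0.02 * d["marketing_effort"],
    "enrollment_rate":  lambda d: min(1.0, 0.5 + d["awareness"]),
    "new_enrollees":    lambda d: d["eligible_pop"] * d["enrollment_rate"],
}

def evaluate(inputs):
    """Evaluate the diagram: seed with user inputs (e.g., project scores),
    then compute each downstream factor in order."""
    d = dict(inputs)
    for name, fn in NODES.items():   # dicts preserve insertion order
        d[name] = fn(d)
    return d
```

The diagram's arrows become dictionary lookups, which keeps the qualitative structure and the quantitative model visibly in sync.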
Benefit Timing

It might seem that you could eliminate the need to create a consequence model by having the model user simply input to the model estimates of the performance of each project with respect to each objective. Models based on simple scoring rules often operate in this way. However, projects typically create impacts that begin at some future point in time (e.g., after the project is completed) and end at some future point in time (e.g., when assets created by the project are retired or replaced). If the timing and duration of benefits are relevant to determining project value, as they almost always are, then you will want a consequence model that accounts for these considerations when computing project value. Thus, project consequence models usually provide as output a time series indicating the estimated impact on the time progression of organizational performance. Such models can be regarded as providing a mathematical simulation of the real world, or more specifically, the part of the world impacted by projects.

Example Model for Benefit Timing

To illustrate, suppose the decision is a choice by a government agency over alternative national health insurance programs. One objective for the decision might be to maximize the number of citizens who gain coverage under the program. The submodel within the consequence model for measuring the degree to which this objective is met might estimate the annual number of new enrollees in the plan over some time period, say 10 years, during which the plan might be assumed to remain operational. A very simple submodel, shown in Figure 38, would be to assume the same number of new enrollees each year, beginning with the year the plan first becomes available. This time series of inputs would be used by the value model to measure the degree to which the project achieves, over time, the objective of maximizing the number of citizens who gain health insurance coverage under a selected program.
This submodel within the consequence model for generating the time series of new enrollees requires just two inputs: the year the plan would first accept enrollees and the average number of new enrollees to the plan each year.

Figure 38: A simple consequence model.

Is this model adequate for providing one of the performance estimates needed for a health insurance program selection model, or should a more realistic model for the timing of new enrollees be used? Among other things, the answer depends on (a) whether accounting for year-to-year variations in enrollees makes much difference for the computation of plan value (the sensitivity of project value to consequence model assumptions can be investigated using sensitivity analysis) and (b) whether there are more realistic, well-accepted model forms that could easily be used. To illustrate the latter possibility, an S-curve model is often used to represent market penetration for new products [12]. The S-curve model (Figure 39) fits empirical data showing that sales of new products typically follow three phases: introduction (when the product is beginning to be widely understood or appreciated), growth (when sales are increasing), and maturity (when sales level off).

Figure 39: A more realistic, S-curve model.

Being more complex, the S-curve model requires more inputs. In addition to the number of years until enrollment begins, the model requires: (1) the number of annual enrollments once the product has become well-established (the "saturation" level), (2) the year at which rapid growth initially begins (the "hyper-growth" time, assumed to be the number of years until sales reach 10% of the saturation level), and (3) the number of years over which fast growth continues (the "takeover time"). Given these inputs, the equation for the S-curve will generate a time series estimate for the annual number of new enrollees similar to the S-curve shape illustrated above.
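A logistic curve is one common way to realize the S-curve submodel just described. The parameterization below (reaching 10% of saturation at the hyper-growth year and 90% of it one takeover time later) matches the inputs listed above, though the exact functional form and the sample numbers are my assumptions.

```python
import math

def annual_new_enrollees(t, saturation, hyper_growth_year, takeover_years):
    """Annual new enrollees in year t, following a logistic S-curve that
    hits 10% of the saturation level at hyper_growth_year and 90% of it
    takeover_years later."""
    k = 2.0 * math.log(9.0) / takeover_years          # growth-rate constant
    t_mid = hyper_growth_year + takeover_years / 2.0  # inflection (50%) year
    return saturation / (1.0 + math.exp(-k * (t - t_mid)))

# Ten-year time series for a plan saturating at 200,000 new enrollees per
# year, reaching 10% of saturation in year 2 and 90% in year 6 (assumed).
series = [annual_new_enrollees(t, 200_000, 2, 4) for t in range(1, 11)]
```

The resulting series climbs slowly through the introduction phase, rises steeply during takeover, and flattens near the saturation level.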
Consequence Submodels

To build a consequence model for project selection, the analyst typically devises an equation or algorithm for estimating each of the model's performance measures. As a result, consequence models are often composed of interconnected submodels. The job of each submodel is to estimate one of the model's performance measures. If, as in the above example, the projects being prioritized are alternative national health insurance programs, then the S-curve for the annual number of new enrollees might be one of several submodels constructed for the consequence model. The submodels must be connected to ensure consistency in the assumptions that are common to each. In the above example, the new-enrollee submodel establishes the number of people who, each year, will have coverage under the plan. Therefore, other performance measures that depend on the number of people covered by the plan must utilize the prediction provided by the new-enrollee submodel. If, for example, there is a performance measure for the number of people who obtain improved health care, then that submodel must utilize the predicted numbers of people with coverage as specified by the new-enrollee submodel. Likewise, any performance measures for the costs to insurance providers or the costs to the plan's enrollees must assume estimates for the number of people with coverage consistent with those produced by the new-enrollee submodel.

Constructing Consequence Models—Need for Subject-Matter Expertise

An obvious prerequisite for being able to construct a model for estimating how some system will respond to some proposed project is an understanding of the system and the project.
Continuing the above example, building a credible model for predicting the number of people who would enroll in a new health insurance plan requires participation from people knowledgeable about the health insurance market, including the attributes of a health insurance plan that determine the number of people who would purchase that plan. More generally, an effort to construct a consequence model for an organization's projects requires participation from "subject matter experts"; that is, people recruited from the organization with the best understanding of the real-world systems within which the organization operates. For this reason, and because the analyst with the necessary mathematical model-building skills may not have the best understanding of the relevant real-world systems, constructing a consequence model is typically a team effort.

Constructing Consequence Models—Need for Mathematical Modeling Expertise

In addition to the real-world knowledge of the systems relevant to predicting project consequences, the model-building team must be led by an analyst with the requisite mathematical and modeling skills. The basic mathematical building blocks for constructing consequence models are the same as those used in all other types of models. For example, a consequence model might include submodels that are:
The skills needed to create mathematical models are particularly difficult to teach. Traditional mathematical skills, such as familiarity with the above types of mathematical relationships, are certainly useful, if not a necessary qualification. However, having mathematical skills alone doesn't ensure that an individual will have the ability to create useful models. Professor Richard Smallwood, who for many years taught a course at Stanford University entitled "The Art of Mathematical Modeling," likens modeling to "sculpting." He believes the best way to teach modeling skills is by establishing a "studio," where aspiring modelers can watch how "masters" work, practice, and have the models they create critiqued [13]. In the absence of studios for aspiring modelers, my advice is to seek out consequence models that others have created for performance measures similar to those selected for your organization. Fortunately, existing submodels for many types of common project impacts are easy to obtain. In particular, the methods for estimating project financial impacts are well-established, and most organizations have defined a business case model for quantifying project financial performance in terms of net present value (NPV). Oftentimes, though, I've found that the organization's prescribed methods for evaluating the financial performance of an investment are designed for application to larger projects that are typically better defined than many of the organization's other projects requiring prioritization. What is often needed is a streamlined approach for estimating the impacts of projects on the organization's cash flows.
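Such a streamlined approach can be as simple as discounting a short vector of incremental cash flows. The 8% discount rate and the cash-flow numbers below are placeholder assumptions for illustration.

```python
def npv(cash_flows, discount_rate):
    """Present value of yearly incremental cash flows, with
    cash_flows[0] occurring today (year 0)."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# Example: a $250K outlay now, then $100K/year of net benefit for 4 years.
flows = [-250, 100, 100, 100, 100]
value = npv(flows, 0.08)   # roughly $81K at the assumed 8% rate
```

The point of the simplification is that a project sponsor only needs to supply a handful of rough annual figures rather than a full business case.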
Creating a simplified method for approximately estimating each project's present value of incremental cash flows will require coordinating with the organization's finance department to ensure that the streamlined approach incorporates assumptions (e.g., discount rates) consistent with those used elsewhere by the organization for financial analysis of investments. Other easy-to-obtain submodels for common business objectives include models for estimating R&D success, the impact of project characteristics on customer purchase decisions, the effectiveness of projects at reducing pollution, the productivity and reliability of business assets, and so on. Methods for estimating health and environmental performance are also well-established, though creating submodels in these areas typically requires risk assessment expertise. Happily, for those who are not experts at creating mathematical models, the most common approach to building simple consequence models involves finding, selecting, customizing, and piecing together submodels that have worked well in other applications.

Criteria for Selecting Models

As in other situations requiring a choice of a model type, selecting the type of model best suited to estimating project impacts on performance requires considering each model's strengths and weaknesses in turn. Criteria typically recommended for model selection include realism, capability, flexibility, ease of use, cost, and ease of computerization [14].

Large, Complex Consequence Models

Consequence models, like other kinds of mathematical models, can become quite large. Basically, the more capital that an organization spends on projects and the more complex the system determining project performance, the larger and more sophisticated the consequence models are likely to be.
In my experience, some of the largest and most complex consequence models are being used by transportation agencies, pharmaceutical companies, oil and gas companies, automotive manufacturers, and, especially, government agencies that fund environmental cleanup projects. In the case of transportation projects, for example, models are available for predicting the impacts of transportation system projects on traffic congestion, travel times, and pollutant emissions. Transportation models may simulate the paths of individual vehicles along a particular road, or operate at a more macroscopic level, representing, for example, the speed, flow, and density of traffic in the various sections of a transportation network.

Figure 40: Simulated contaminant transport paths for the NTS.

To provide an example, the most complex consequence model I've encountered for prioritizing projects was a three-dimensional contaminant transport model developed for the Nevada Test Site (NTS) [15]. The NTS is the location where the U.S. conducted more than 800 tests of nuclear weapons. The tests left massive underground deposits of radioactive material, including carbon-14, cesium, plutonium, and tritium. The purpose of the priority system was to select projects for a $212 million project portfolio for reducing contaminant transport uncertainties at the site, including uncertainty over the upper-bound (95% confidence) estimate of the "contaminant boundary," defined as the maximum distance radioactive contaminants will travel over the next 1,000 years [16]. To prioritize projects, each project's expected reduction in the upper-bound estimate of the contaminant boundary was computed [17]. This required a Bayesian analysis, with Monte Carlo simulations of contaminant migrations used to generate probability distributions over the location of the contaminant boundary. The plot in Figure 40 to the right shows some of the simulated contaminant pathways generated using the consequence model.
The team, including me, so underestimated the time required to run each simulation that it was necessary, on the weekend before the prioritization deadline, to commandeer all of the computers in the Nevada office of the prime contractor and run them simultaneously and continuously to produce the required consequence simulations.

Why Organizations Favor Using Consequence Models

Organizations would not be using sophisticated project consequence models if they did not perceive benefits from doing so. The main benefits cited for constructing and using consequence models are:
I apply a simple rule of thumb for answering the question of whether a model should be created to simulate the consequences of projects: if modeling makes specifying project consequences easier, more accurate, or produces greater confidence, then, by all means, consider constructing a model.

Bootstrapping

Although there are often good arguments for creating sophisticated consequence models, creating overly complex, difficult-to-understand models is a common failure mode for priority system design. An important consideration for the development of a consequence model is the need to ensure that the individuals within the organization who use the model, especially decision makers, fully understand the model and are comfortable with its level of complexity. One often-recommended approach, sometimes referred to as bootstrapping, is to first develop a highly simplified and easily understood analysis that omits all but the most significant considerations [18]. Once the decision maker has gained confidence with the simple model, additional detail and sophistication can be added. However, even when a decision maker is supportive of increasing model sophistication, the understanding gained from further modeling can sometimes decrease if models become unnecessarily complex. The above-referenced model for selecting information-gathering projects for the NTS provides an example of exactly this possibility. An earlier prioritization of information-gathering projects for a different segment of the NTS used a simpler version of the contaminant transport simulation model [19]. In that case it was fairly easy to understand the connection between the characteristics of a proposed project (e.g., drilling a groundwater monitoring well in a given location) and the resulting changes to the posterior probability distributions for the contaminant boundary.
The subsequent analysis, described above, used an expanded model for contaminant transport that included simulation of pathways within the vadose zone (above the groundwater table) [20]. With this more complex model, it was difficult for many members of the project team to understand the connection between specific projects and the corresponding posterior distributions for the location of the contaminant boundary. Using oversimplified models, on the other hand, especially those that omit important aspects of the problem needed to provide good predictions, can also limit and confound understanding. The goal for model design is to hit the "sweet spot" of model sophistication, regardless of whether complexity is added sequentially to an overly simple model or collapsed from an initial, more highly detailed model [18].

References
