Once you have a project-selection decision model, specifying metrics for computing project value is straightforward. The desired metrics are "observables" (discussed below) that influence the model's value drivers; that is, the project characteristics and impacts (i.e., model inputs and other parameters) with the greatest influence on value. Some authors call metrics obtained in this way "performance measures" to recognize the special characteristics that make them well-suited for measuring project performance in terms of generating value. Metrics for measuring project performance typically include forward-looking financial metrics, like NPV, but also factors and considerations on value paths that don't directly impact cash flows. Building a decision model leads to metrics that capture the variety of ways that projects contribute value.
In my experience, a well-designed decision model will identify metrics for some or all of the following types of value:
As indicated by the above examples, performance measures, generally speaking, are metrics that characterize the potential benefits available from projects. To fully characterize those benefits, it is typically necessary to include metrics that indicate timing, that is, when the project benefits are likely to occur and how long they will persist, and, oftentimes, risks (e.g., the likelihood that the project will actually produce its anticipated benefits). Thus, the top-down approach wherein metrics are derived from a project-selection model often leads to a large number of metrics. This outcome conflicts with standard advice for choosing criteria for project prioritization, where authors caution against using too many. Avoiding numerous criteria makes sense for the common scoring model, since the criteria defined through a bottom-up approach will tend to overlap and double count—having more criteria increases the assessment burden without improving accuracy. However, a well-designed decision model will ensure that the metrics so obtained represent distinct sources of value. If metrics essential for representing some sources of value are omitted, the value of projects will be underestimated. Furthermore, there will be a bias against doing those projects that provide the types of value that are not captured due to the omitted measures. When metrics are obtained from a decision model, the model defines the algorithm that allows the value of a project to be computed and expressed in dollar terms.
Obviously, there is a practical limit to how many project performance measures should be used. The 80/20 rule applies. The goal should be to include the minimum number of metrics necessary to roughly capture every significant source of project value, not numerous metrics that more completely capture just a subset of components of value. In other words, don't make the mistake of defining multiple measures for capturing things that are relatively easy to address (like financial value), while omitting measures for something that may be important but hard to address (like impact on learning and capability). Since few if any projects will provide significant contributions under each type of value, having lots of measures doesn't necessarily create a significant burden for evaluating proposed projects. Estimates need only be provided for the small subset of measures that are relevant to capturing the specific motivations for doing that project.
Metrics as "Observables" and the Clairvoyant Test
To the extent possible, metrics should be observables; that is, characteristics of projects or project outcomes that can be observed and measured in the real world. Because estimating project value requires forecasting the future, metrics need not all be things we can observe today. A metric can, for instance, be a projected future state of some observable, such as an improvement in a reliability-of-service statistic important to customer satisfaction.
A useful device for checking whether a metric is observable is the so-called "clairvoyant test" devised by my college mentor, Professor Ron Howard. Before accepting what appears to be a good metric, consider whether a clairvoyant could give an unequivocal value for that metric given that a project decision is made in a specific way. Oftentimes, the clairvoyant test reveals the inexactness of what initially appears to be a well-defined metric. For example, "customer satisfaction" doesn't pass the clairvoyant test. However, "percent reduction in recorded customer complaints" and "company ranking in the next industry customer satisfaction survey" are metrics that do pass the test.
Metrics that don't pass the clairvoyant test are vague. They create inconsistency and imprecision when used for estimating. More importantly, if the metrics are not observables, they cannot be monitored so that actual values can be compared against estimates.
The traditional financial metrics should be used to determine the direct financial components of project value. Project investment cost is, of course, an important financial metric for any project. Projects that impact operations (e.g., projects that create new revenues or that affect future operating costs) produce downstream financial impacts that must also be evaluated. Thus, any and all significant, incremental, period-by-period cash flows anticipated to result from a project should be estimated, either as a most-likely or average case or in the form of alternative scenarios. The organization's standard accounting model may then be used to determine the resulting after-tax (unlevered) free cash flows, which can be used to compute the project's financial NPV.
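As a minimal sketch of this calculation (in Python, with purely illustrative numbers), discounting the period-by-period incremental cash flows yields the project's NPV:

```python
def npv(cash_flows, discount_rate):
    """Net present value of period-by-period cash flows.

    cash_flows[0] is the time-zero flow (typically the negative
    investment cost); later entries are the incremental after-tax
    free cash flows attributed to the project in each period.
    """
    return sum(cf / (1 + discount_rate) ** t
               for t, cf in enumerate(cash_flows))

# Illustrative example: a $100k investment followed by five years
# of $30k incremental cash flows, discounted at 10% per year.
flows = [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000]
project_npv = npv(flows, 0.10)  # roughly $13,724
```

The alternative-scenario case mentioned above can be handled the same way: compute an NPV for each scenario and probability-weight the results.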
Some important principles for estimating financial value in support of project prioritization include:
Be suspicious of long-term, positive NPVs. Keep in mind the economic axiom that excess profits (the source of positive NPV) must be zero in a perfectly competitive market. A long-term, positive NPV requires some sustainable competitive edge—being first, being the best, or being the only. Retaining that edge indefinitely would require some barrier to the entry of competitors. Consider carefully how long it will take competitors to catch up and drive profits back down.
Metrics Provide Justification for Tough Choices
One of the most under-appreciated benefits of having good metrics linked to a defensible decision model is improved justification. Author Anthony O'Donnell quotes a portfolio manager at an insurance company that implemented a portfolio management tool: "People would come to me and ask me to do a particular project...I would tell them I couldn't fit it in, but had a hard time articulating why." Metrics now allow him to give concrete reasons for turning away projects. "Their satisfaction immediately went up, and I still didn't do their projects!"
Each Organization Needs Its Own Metrics
Different organizations conduct different types of projects. The metrics for evaluating new product investments by a software vendor, for example, will be different than the metrics needed to evaluate process improvements for a company operating an oil pipeline. Also, different organizations create value in different ways. An electric utility, for example, creates value differently than does a ballet school. Some organizations will seek to maximize shareholder value, while others will want to value impacts to other stakeholders as well. Thus, each organization will have a different model for how its projects create value and, therefore, will want to use different metrics. There is no one set of project metrics that works for every organization. However, in all cases, good metrics provide a means for computing the value added by projects. Good metrics are observables. And, they are sensitive to project decisions so that they may be used to differentiate the value of alternative project portfolios.
Smart Metrics Decouple Performance Assessment from Project Valuation
A common project prioritization problem is what to do if the magnitude of the benefit available from a project depends on the other projects that are included in the portfolio. Here's an example. The US Department of Energy (DOE) is funding a portfolio of projects designed to increase electricity generation from alternative fuels (fuels other than fossil fuels). With a simple rate-and-weight project prioritization tool, the DOE could "score" the benefit of, for example, a solar project, but the amount of benefit delivered by the solar project would be less if the project portfolio includes investments in nuclear (because nuclear would likewise increase generating capacity, making the additional incremental capacity from solar less valuable). It seemed that it would be necessary to score various combinations of solar and nuclear investments, but this would be very time consuming and make it difficult to maintain consistency.
The solution is to choose a performance metric that depends only on the project in question. The obvious such metric for this case is the amount of new generating capacity delivered by the project. The value of an increment of new capacity can then be computed based on the total available capacity, accounting for the incremental capacity delivered by all other projects contained in the candidate portfolios. This approach requires that the tool be able to apply non-linear value models and identify the optimal portfolio via optimization (capabilities that simpler tools often lack), but the result is an evaluation approach that requires far fewer and simpler inputs. As illustrated, smart metrics (that is, good modeling practice) often provide a way to address what might otherwise appear to be impossibly complex project interdependencies.
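The idea can be sketched in Python (all project names, costs, capacities, and the value curve below are hypothetical): each project reports only its own capacity metric, and a nonlinear, diminishing-returns value function is applied to the portfolio's total capacity, so a project's incremental value automatically depends on what else is funded:

```python
import itertools
import math

# Hypothetical candidates: name -> (cost in $M, new capacity in MW).
# The per-project metric (capacity added) does not depend on the
# rest of the portfolio.
projects = {"solar": (40, 100), "nuclear": (120, 400), "wind": (30, 80)}

def portfolio_value(capacity_mw):
    # Illustrative nonlinear value curve with diminishing returns:
    # each additional MW is worth less as total capacity grows.
    return 200 * math.log1p(capacity_mw / 100)  # value in $M

def best_portfolio(budget):
    """Exhaustively search feasible portfolios for the highest net value."""
    best_combo, best_net = (), float("-inf")
    for r in range(len(projects) + 1):
        for combo in itertools.combinations(projects, r):
            cost = sum(projects[p][0] for p in combo)
            if cost > budget:
                continue
            capacity = sum(projects[p][1] for p in combo)
            net = portfolio_value(capacity) - cost
            if net > best_net:
                best_combo, best_net = combo, net
    return best_combo, best_net
```

With these made-up numbers, the diminishing-returns curve makes solar worth less once nuclear is funded, which is exactly the interdependency described above; a real tool would use an optimization algorithm rather than exhaustive search.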
The Right Metrics Turn Project Proposals into Performance Contracts
Deriving metrics from a decision model ensures that the organization is seeking the right information about proposed projects; namely, the information necessary to estimate the value to be derived by the organization if the project is conducted. The value of the project can then be compared with its cost, and the resulting "bang for the buck" compared with similar estimates for other candidate projects. This provides a sound basis for making project-selection decisions.
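The "bang for the buck" comparison can be sketched as a simple ratio ranking (Python, with hypothetical project names and numbers); note that greedy ranking by value-to-cost ratio is a common heuristic, not an exact portfolio optimizer:

```python
# Hypothetical candidates: (name, estimated value in $k, cost in $k).
candidates = [("A", 900, 300), ("B", 500, 100), ("C", 400, 200)]

# Rank by "bang for the buck": estimated value per dollar of cost.
ranked = sorted(candidates, key=lambda p: p[1] / p[2], reverse=True)

def select(budget):
    """Greedily fund the highest-ratio projects that still fit the budget."""
    chosen, remaining = [], budget
    for name, value, cost in ranked:
        if cost <= remaining:
            chosen.append(name)
            remaining -= cost
    return chosen
```

Here project B (ratio 5.0) outranks A (3.0) and C (2.0) even though A has the largest absolute value, which is the point of comparing value against cost rather than value alone.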
If, in addition, the metrics are observables, the organization further benefits in that project proposals serve as performance contracts. In return for a chance at obtaining a share of the organization's limited resources, project proponents indicate in the clearest and most relevant terms what they expect the project will accomplish. Project results and impacts can then be tracked and compared with the original estimates. Performance contracts document the terms of the agreement, protecting both parties to the contract. Framing the project as a performance contract creates a healthy shift in perspective. Instead of choosing which projects to cut, the focus is on deciding what project opportunities to purchase.
Due to uncertainty, project outcomes may not exactly match forecasts. Thus, what the implicit contract requires is not that project managers invariably be held responsible for achieving all of the performance indicated by their estimates, but that any significant deviations between estimates and actuals be explained. Over time and on average, some projects should exceed expectations while others will fall short. In the meantime, the organization can learn to improve forecasts by tracking and better understanding the uncertainties that are involved.
In situations where the uncertainty is considerable, it may be useful to separate metrics for indicating the benefits that can be expected from benefits that are more speculative and uncertain. The "expected" benefits then become the basis for the performance contract and the speculative benefits can be appropriately discounted based on risk (see Part 5).