Lee Merkhofer Consulting Priority Systems
Tools for Project Portfolio Management

Project Portfolio Management Tools:

Which Approach is Best?

Part 6: Evaluating Tools

Part 1 identified available tools for project portfolio management (PPM), Part 2 described key differences, and Part 3 summarized costs and risks. Part 4 identified inability to optimize the project portfolio as the weak link for most tools. Part 5 described the components of tools, including the decision model. This part provides a framework for evaluating tools.

How should you evaluate candidate project portfolio management tools? Don't make the mistake of choosing a tool based solely on a comparison of tool features. Instead, seek a tool that will help your organization make better portfolio decisions. Six criteria are relevant for evaluating a tool's decision model: (1) accuracy, (2) logical soundness, (3) completeness, (4) practicality, (5) effectiveness, and (6) acceptability [1]. The subsections below clarify these considerations.

The Tool Must Be Accurate

The value of a PPM tool depends, most obviously, on its ability to produce accurate outputs, including recommendations. Important questions include: Can the tool be counted on to produce reliable estimates? Is it biased toward or against certain projects, interests, or considerations? Are results highly sensitive to untested or untestable assumptions? Does the tool produce outputs with an acceptable level of confidence and precision? Does the tool indicate the confidence level or precision associated with outputs?

Flaws and omissions in decision models can produce large errors in recommendations. As evidence, see the examples in the boxes below.

Unfortunately, inaccuracies in recommendations often only surface after many applications of a tool. It is difficult to collect empirical data for validating a tool's recommendations (few organizations are willing to fund recommended and not-recommended projects for the purpose of testing a tool!). Tool providers may have some evidence, but they will naturally emphasize the positive and minimize (or ignore) the negative. The data they have may not be representative of the particular tool configuration or types of projects relevant to your organization.

Conducting a pilot test before fully committing to a tool is essential. Choose a variety of projects with different characteristics and see what recommendations the tool makes. Be skeptical of any odd patterns, such as projects with certain characteristics consistently being ranked either high or low. Resist the natural temptation to rationalize the results ("garbage in, gospel out"). Drill down to fully understand why the results come out the way they do.

Since extensive tool testing is usually impossible, assessments of accuracy must be based, at least in part, on a detailed evaluation of how the tool's decision model works. In this regard, two other considerations are useful. As described below, the tool must be logically sound and it must be complete.

The Tool Must Be Logically Sound

From a practical standpoint, the logical defensibility of a tool is primarily a function of the degree to which it can be justified by accepted and proven theory. If a model can be shown to faithfully implement accepted theory, more confidence can be attributed to the predictions and recommendations made by that model. Conversely, if a model is not grounded in theory, it will almost certainly not consistently produce reliable predictions.

What exactly is a theory? A theory is a logical framework that predicts what outcomes will result from specified actions, and explains why. The "why" is important. A theory must explain cause and effect, and to do this it must contain an account of how things work. A theory is proven by showing that its predictions consistently match observations.

To illustrate the importance of theory, consider initial attempts at manned flight. Observing birds, early researchers thought that they could fly if they strapped feathered wings onto their arms and jumped off cliffs. Many animals fly by flapping feathered wings, but this is not the fundamental reason that they can fly. Manned flight only became practical after aerodynamic theory, building on Daniel Bernoulli's work, explained how airflow around a wing can produce lift.

Ranking Projects Based on Strategic Alignment is Not Logically Sound

As described in the paper Mathematical Theory, ranking projects based on the ratio of benefit to cost ("bang for the buck") can, in some circumstances, be a reasonable prioritization technique (provided that there are no project interdependencies, e.g., infrastructure projects and the projects that use that infrastructure). Choosing projects that implement organizational strategy is important, but strategic alignment is not a surrogate for bang for the buck. Ranking by strategic alignment, therefore, will not necessarily lead to value-maximizing project portfolios.
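The ratio rule referred to above can be sketched in a few lines. The project names, benefits, and costs below are invented for illustration, and the sketch assumes, as the rule requires, that the projects are independent:

```python
# Hypothetical example: rank candidate projects by benefit-to-cost ratio
# ("bang for the buck") and fund them greedily until the budget is exhausted.
# Project names and numbers are illustrative, not from any real portfolio.

def rank_by_ratio(projects, budget):
    """Select projects in descending benefit/cost order within a budget.

    projects: list of (name, benefit, cost) tuples.
    Returns the list of funded project names.
    Assumes no project interdependencies, as the ratio rule requires.
    """
    ranked = sorted(projects, key=lambda p: p[1] / p[2], reverse=True)
    funded, spent = [], 0.0
    for name, benefit, cost in ranked:
        if spent + cost <= budget:
            funded.append(name)
            spent += cost
    return funded

projects = [
    ("A", 90, 30),   # ratio 3.0
    ("B", 60, 30),   # ratio 2.0
    ("C", 100, 40),  # ratio 2.5
]
print(rank_by_ratio(projects, budget=70))  # → ['A', 'C']
```

Note that the greedy selection maximizes total benefit here only because the chosen projects happen to exhaust the budget exactly; with interdependencies or lumpy costs, ratio ranking can miss the optimal portfolio.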

Many models are like the feathered wings of early would-be aviators. Instead of being derived from theory, they are heuristics—rule-of-thumb relationships based on observations that certain characteristics tend to be associated with certain outcomes. No explanatory, cause-effect reasoning is provided.

For example, a popular heuristic for constructing project portfolios is balance. Project portfolios of successful companies often balance low-payoff "sure things" against high-payoff gambles. But, balance is not the fundamental characteristic that makes a project portfolio successful. Rather, balance is something that tends to result when the best choices are made based on needs and the nature of available options. Simply choosing a portfolio that happens to be balanced does not ensure success.

As noted previously, relevant theories for selecting and prioritizing projects include decision analysis [5], multi-attribute utility analysis (MUA) [6], modern portfolio theory [7], portfolio optimization theory [8], and real options [9]. These theories are well established within the technical and academic communities and have been proven in many real-world applications.

Decision Analysis

Decision analysis is a theory and collection of associated methods for making decisions under uncertainty. The approach involves constructing and analyzing a model of the decision problem to identify the choice, or sequence of choices, leading to outcomes most consistent with the preferences of the decision maker.
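As a minimal illustration of the core calculation, the hypothetical sketch below compares two alternatives by their probability-weighted (expected) values; the alternative names, probabilities, and payoffs are invented:

```python
# Illustrative decision-analysis calculation: each alternative leads to
# uncertain outcomes; choose the alternative whose probability-weighted
# (expected) value is highest. All numbers below are hypothetical.

def expected_value(outcomes):
    """outcomes: list of (probability, payoff) pairs; probabilities sum to 1."""
    return sum(p * v for p, v in outcomes)

alternatives = {
    "fund project":  [(0.6, 500), (0.4, -200)],  # success vs. failure
    "defer project": [(1.0, 0)],                 # certain status quo
}

best = max(alternatives, key=lambda a: expected_value(alternatives[a]))
print(best, expected_value(alternatives[best]))  # → fund project 220.0
```

A full decision-analysis model would replace raw payoffs with utilities reflecting the decision maker's risk attitude, but the probability-weighting logic is the same.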

Modern Portfolio Theory

Modern portfolio theory is a theory of decision making that seeks to construct a portfolio of investments offering maximum expected returns for a given level of risk. The theory quantifies the benefits of diversification as a means of reducing risk.
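The diversification result can be illustrated with a hypothetical two-investment example: when returns are less than perfectly correlated, the portfolio's standard deviation falls below the weighted average of the individual standard deviations, while the expected return does not:

```python
import math

# Hypothetical two-investment portfolio illustrating diversification under
# modern portfolio theory: expected return is the weighted average of the
# individual returns, but risk (standard deviation) is lower whenever the
# correlation between returns is less than 1. All numbers are invented.

def portfolio_stats(w1, mean1, sd1, mean2, sd2, corr):
    """Return (expected return, standard deviation) of a two-asset portfolio."""
    w2 = 1.0 - w1
    mean = w1 * mean1 + w2 * mean2
    var = (w1 * sd1) ** 2 + (w2 * sd2) ** 2 + 2 * w1 * w2 * sd1 * sd2 * corr
    return mean, math.sqrt(var)

mean, sd = portfolio_stats(0.5, 0.10, 0.20, 0.10, 0.20, corr=0.2)
weighted_avg_sd = 0.5 * 0.20 + 0.5 * 0.20
print(round(mean, 3), round(sd, 3), sd < weighted_avg_sd)  # → 0.1 0.155 True
```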

The above theories are sometimes termed "axiomatic theories." Each begins with a set of axioms (assumptions, or hypotheses) about how things work, for example, about how people or organizations ought to value and choose projects. These axioms can be accepted or rejected based on observations or other evidence. The theory then derives (through mathematical "proofs") conclusions that follow from its axioms. If you can demonstrate that the axioms are acceptable (and the math is correct), the rest of the theory follows. See the side box for an example.

By the way, the fact that there are multiple theories does not mean that you will get different answers depending on the theory that you choose to apply. The well-established theories for selecting projects typically give the same answers, provided that each theory is able to address all the relevant factors, the theories are properly applied, and the assumptions for the analyses are the same. The situation is analogous to that for theories in other fields, such as physics. For example, the mathematics for Newton's laws of motion and Einstein's theory of relativity look very different, but both theories predict the same trajectories for everyday objects under everyday conditions. The predictions only differ in situations (e.g., speeds close to the speed of light or extremely small objects) that invoke considerations that aren't addressed by Newton's much simpler laws of motion. Likewise, decision analysis and real options, for example, look very different, but they give exactly the same answers to any problems that both can fully address, provided that each theory is correctly applied [11]. Nevertheless, the choice of theory is critical. Choosing a solution approach based on ill-suited theory may make it extremely difficult, or impossible, to satisfy the necessary assumptions for the theory (e.g., it may not be possible to provide the required inputs). Thus, the wrong theory can make it impossible to obtain useful and meaningful answers.

Rather than cite the theories on which their tools are based, most providers merely reference the analytic techniques that their tools employ, such as balanced scorecards, strategic alignment, decision trees, linear programming, and Monte Carlo simulation. Such techniques, by themselves, provide no explanation for why recommended portfolios should be preferred. Instead, they merely describe or refer to mathematical calculations that are performed. For example, balanced scorecards typically use an equation that weights and adds scores assigned to projects. Projects are ranked based on total weighted score. There is no reason why this should lead to the identification of preferred projects.

Weighted scoring techniques can sometimes be used to effectively apply appropriate theories, but only if the scoring scales and weights are structured to match the requirements of the theory. For example, MUA theory for valuing projects can sometimes be implemented using scorecards and a weight-and-add equation. However, for a weight-and-add equation to work, the metrics that are scored must meet a condition known as additive independence, scaling functions must be assigned so that the computed performance measures are proportional to value, and the weights must quantify the value of specified improvements against objectives (weights are often assigned using an assessment technique known as the "swing weight method" [13]). Unless these conditions are met, the aggregated score will not measure value and will not serve as an indicator of decision-maker preference.
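The conditions above can be made concrete with a small, invented example, assuming additive independence holds and, for simplicity, linear value functions over each scale:

```python
# Sketch of a theory-consistent weight-and-add value model, assuming the
# additive-independence condition holds. Each raw metric is converted to a
# 0-to-1 value scale, and swing weights reflect the relative value of moving
# each metric from its worst to its best level. Metrics, scale endpoints,
# and weights below are hypothetical; linear value functions are assumed.

def additive_value(raw, scales, weights):
    """raw: metric -> raw score; scales: metric -> (worst, best) levels;
    weights: metric -> swing weight, normalized to sum to 1."""
    total = 0.0
    for m, x in raw.items():
        worst, best = scales[m]
        v = (x - worst) / (best - worst)   # normalized 0-to-1 value
        total += weights[m] * v
    return total

scales  = {"revenue": (0, 10), "risk_reduction": (0, 5)}
weights = {"revenue": 0.7, "risk_reduction": 0.3}   # swing weights, sum to 1
score = additive_value({"revenue": 8, "risk_reduction": 2}, scales, weights)
print(round(score, 3))  # → 0.68
```

The key point is that the weights are meaningful only relative to the worst-to-best swings on the scales; reusing the same weights with different scale endpoints would silently change what the aggregate score measures.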

Thus, the use of a weighted scorecard does not in any way ensure that the requirements of any accepted theory are being met. The defensibility of the recommendations made by any decision model depends on whether the techniques used to value and prioritize projects are consistent with some defensible theory and on how faithfully the model implements the requirements of that theory.

As an example of the dangers of using tools not based on sound theories, see the side box example describing the Department of Energy's initial attempt to rank potential sites for a nuclear waste repository. Logical defensibility is particularly important when using a tool to help make controversial decisions. Although most project decisions aren't as controversial as nuclear waste, some (such as an electric utility's decision to acquire right-of-way to construct a transmission line) can be.

Using a logically sound approach avoids errors associated with unsound methods and reduces the risk of successful challenges to the credibility of decisions. Although logical soundness does not guarantee accuracy (see below), it is safer and wiser to use a tool based on sound theory than one that merely "seems" reasonable.

The Tool Must Be Complete

Being complete means accounting for all significant and relevant considerations. A logically sound tool that is incomplete gives, at best, the right answer to the wrong question. A tool might leave out important considerations because those considerations are difficult to accommodate (e.g., it may be too hard or too costly to obtain the necessary input data) or because they are impossible to include given the nature of the selected decision model (e.g., dynamic considerations within a static decision model).

As noted earlier, project decisions tend to produce broad, enterprise-level impacts. This makes creating a complete model challenging. As illustrated by the example in the side box, if the decision problem is extremely complex, a complete model may need to be large, requiring many inputs and sophisticated mathematical algorithms.

One way to assess the completeness of a model is through sensitivity analysis. Vary the description of a complicated project in ways that should logically affect the attractiveness of that project. Do the inputs permitted by the model allow you to reflect such considerations? Do the priorities established by the model behave as they should? No model can capture everything, but a complete model should properly address those considerations most important to project selection decisions.
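The kind of sensitivity check described above can be automated. The scoring function and project inputs below are hypothetical stand-ins for a real tool's decision model:

```python
# Sensitivity check on a (hypothetical) scoring model: increase an input that
# should logically raise a project's priority, and verify that the model's
# ranking responds accordingly. Scoring weights and inputs are invented.

def score(project):
    return 0.6 * project["benefit"] + 0.4 * project["alignment"]

projects = {"X": {"benefit": 5, "alignment": 5},
            "Y": {"benefit": 6, "alignment": 4}}

def ranking(projects):
    return sorted(projects, key=lambda name: score(projects[name]), reverse=True)

before = ranking(projects)
projects["X"]["benefit"] = 9      # X's benefit improves sharply
after = ranking(projects)
print(before, after)  # → ['Y', 'X'] ['X', 'Y']
```

If a change that should clearly improve a project's attractiveness leaves the ranking unmoved, either the model omits the relevant consideration or its inputs cannot express it.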

The Tool Must Be Practical

Accuracy, logical soundness, and completeness are the reasons that prioritization tools must often include sophisticated decision models. But, a tool must also be practical. Building a quality tool that is still practical is a significant challenge facing expert tool designers.

For a tool to be practical, its users must have sufficient expertise to understand and apply it. The required inputs must be available. Computational resources must be adequate, and tool applications must be completed within the available time. All this must be accomplished without unacceptably degrading the accuracy, logical soundness, or completeness of the model.

How can such challenges be addressed? Obviously, skill is required, and experts should guide the development of the decision model used by the tool. At the same time, the organization can take steps that make more sophisticated tools practical to use. These steps can include investing in internal training, reassigning responsibilities, developing new sources of data, and adjusting budgeting schedules.

Although the above recommendations may seem daunting, it is worth noting that it is common for an organization to initially view a quality tool as "too complex," and then later, after gaining experience and comfort with its use, to want to expand the tool and make it even more sophisticated (see the above side box example). Thus, organizations should avoid simplistic tools and tools whose decision model cannot be improved as experience and understanding grows.

The Tool Must Be Effective

A PPM tool cannot be effective unless it fits the way that the organization actually makes decisions. The considerations to which the tool may need to be sensitive include:

  • Timing. When are decisions made? Annually, quarterly, monthly, or "just in time"? Once a decision is made, is the commitment over multiple years, or can it be revisited at any time?
  • Decision process. How are choices made? Through consensus? A dictatorship? Who makes decisions? The Board? A multi-functional steering committee? A project portfolio manager?
  • Project management. How are needs identified, candidate solutions conceptualized, and a preferred solution identified? How are projects planned, scheduled, and staffed? If a project stage gate process is used, how are decisions made about whether projects proceed through each gate?
  • Performance monitoring. What information is available to help the organization track its performance? What types of performance data are collected?

Such considerations must be understood, and the tool and its application process must be designed to ensure a good fit with the organization. See the side box for an example in which the approach needed to be modified to accommodate the organization's decision-making structure.

Being effective also means achieving the specific goals that motivate using a tool. For example, a tool intended for back-room use would be designed differently than one whose purpose is to demonstrate to regulators and local citizens that the organization's project decisions are in the best interests of the community. Not only would the latter have different user characteristics and features, but its definitions of project benefits would also be quite different.

Before building or purchasing a tool, think carefully about what the tool needs to do in order to be effective. See the second side box for another example of an application requiring a specialized approach. Although a tool may appear to offer lots of flexibility, it can only be adjusted within the limitations dictated by its underlying decision model. Few tools allow users to change the mathematical logic by which projects are valued or optimal portfolios are identified.

The Tool Must Be Acceptable to Stakeholders

A tool that is practical and effective is not always acceptable to decision makers and other stakeholders. An acceptable tool must be compatible with organizational processes and culture. It must be understandable and understood. A tool that impacts funding decisions will be perceived as a threat to some interests. All key stakeholders must have confidence that the tool will help them, as well as the organization, to succeed.

Gaining adequate acceptance is critical to the success of the tool. For a dramatic example of a tool that was widely regarded as technically defensible, complete, accurate, and practical and, yet, was rejected, read "The Rise and Fall of a Risk-Based Priority System" [14].

The most effective way of generating acceptance is to involve those who will use and be impacted by the tool in the design effort. A collaborative process helps ensure that the tool will have exactly those characteristics necessary to best suit the organization and its needs. Equally important, involving stakeholders in the design of the decision model creates buy-in by allowing skeptics to express their concerns and see firsthand how those concerns are addressed. See the side box for an example.


  1. These criteria are discussed at greater length in M. W. Merkhofer, Decision Science and Social Risk Management, Reidel, 1987, and V. Covello and M. W. Merkhofer, Risk Assessment Methods, Plenum, 1993.
  2. R. G. Anderson, A. Bendure, S. Strait, and A. Kann. "Supporting Documentation: Laboratory Integration and Prioritization System," Los Alamos National Laboratory, Los Alamos, New Mexico, 1994.
  3. See, for example, Ward Edwards and J. Robert Newman, Multiattribute Evaluation, Sage University Papers Series on Quantitative Applications in the Social Sciences, Beverly Hills, CA, 1982.
  4. The analysis and results are documented in H. Call and M. W. Merkhofer, "A Multi-Attribute Utility Analysis Model for Ranking Superfund Sites," published in Superfund '88: The Proceedings of the 9th National Conference, Washington, D.C., November 28-30, 1988.
  5. Robert T. Clemen, Making Hard Decisions: An Introduction to Decision Analysis, PWS-Kent Publishing Company, Boston, 1997.
  6. Ralph L. Keeney and Howard Raiffa, Decisions with Multiple Objectives, Wiley, New York, 1976.
  7. For example, E. J. Elton and M. J. Gruber, Modern Portfolio Theory and Investment Management, 4th ed., Wiley, New York, 1991.
  8. For example, R. K. Sundaram, A First Course in Optimization Theory, Cambridge, 1996. Also see Mathematical Theory for Prioritizing Projects and Optimally Allocating Capital, located on this website.
  9. For example, T. Copeland, V. Antikarov, and T. E. Copeland, Real Options: A Practitioner's Guide, Texere, 2001.
  10. J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, 1947.
  11. James E. Smith and Robert F. Nau, "Valuing Risky Projects: Options Pricing Theory and Decision Analysis," Management Science 41 (4) 1995.
  12. For a complete description, see M. W. Merkhofer and R. L. Keeney, "A Multiattribute Utility Analysis of Alternative Sites for the Disposal of Nuclear Waste," Risk Analysis, Vol. 7, No. 2, 1987, 173-194.
  13. D. von Winterfeldt and W. Edwards, Decision Analysis and Behavioral Research, New York: Cambridge University Press, 1986.
  14. K. E. Jenni, M. W. Merkhofer, and C. Williams, "The Rise and Fall of a Risk-Based Priority System: Lessons from DOE's Environmental Restoration Priority System," Risk Analysis, Vol. 15, No. 3, 1995, 397-409.
  15. E. Martin and M. W. Merkhofer, "Lessons Learned - Resource Allocation based on Multi-Objective Decision Analysis," Proceedings of the First Annual Power Delivery Asset Management Workshop, New York, June 3-5, 2003. Also see A Priority System for Allocating an O&M Budget, located on this website.