The most important step in project portfolio management (PPM) is making the right project choices, especially choosing which projects to fund and which not to fund. Managers and their organizations face increasing internal and external pressures to cut costs while being more effective at meeting changing demands within narrowing windows of opportunity. In many organizations, getting the most from the project portfolio requires regularly adding, changing, and removing projects in response to project progress (or lack of progress) and the evolving business environment. In some organizations, conducting just the projects labeled "must do's" would require more resources than the organization can provide. Given today's level of customer expectations and mounting business competition, making the wrong project choices and ineffectively using limited resources can threaten the very survival of the organization.
In order to make the best use of limited resources, the project portfolio management office must determine which projects to initiate, which ongoing projects to continue to fund, which projects to revamp, and which projects to kill. Since there's normally a delay before the benefits of projects are realized, needs must be anticipated and actions timed and sequenced to ensure sustained success. As illustrated in Figure 27, the goal of the PPM office is to manage the project pipeline by making optimal choices at each stage of the project lifecycle.
Figure 27: The Portfolio Management Office manages the project pipeline.
Since organizations rarely have sufficient resources to conduct all available projects, projects must be prioritized. In the absence of a formal project selection decision model, by default managers use forced ranking. Forced ranking, once again, simply means that managers get together and "force" each project into a strict priority ordering (or into a number of priority groups). Projects are then added to the portfolio in rank order until the organization runs out of resources. Projects that fall below the threshold are put on hold or killed outright. Considerations that apply at the portfolio level, such as project synergies and portfolio risk, may be used as modifiers to the project-by-project ranking. Needless to say, forced ranking, as well as the final choice of which projects to conduct, are difficult decisions. In the absence of a more formal approach, bias, mental errors, and politics often play a major role in project selection.
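The forced-ranking cutoff described above can be sketched in a few lines of Python. All project names, costs, and the budget figure below are hypothetical illustrations, not data from the text.

```python
# Projects already "forced" into a strict priority order (highest first).
# Names, costs, and the budget are invented for illustration.
projects = [  # (name, cost)
    ("Upgrade CRM", 300),
    ("Safety retrofit", 150),
    ("New reporting tool", 200),
    ("Office refresh", 250),
]
budget = 500

funded, on_hold = [], []
remaining = budget
for name, cost in projects:
    # Fund in rank order until resources run out; everything below the
    # cutoff is put on hold (or killed outright).
    if not on_hold and cost <= remaining:
        funded.append(name)
        remaining -= cost
    else:
        on_hold.append(name)

print(funded)   # -> ['Upgrade CRM', 'Safety retrofit']
print(on_hold)  # -> ['New reporting tool', 'Office refresh']
```

Portfolio-level modifiers such as synergies or risk balance would then adjust this mechanical ordering by hand, as the text notes.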
As I argued previously, individuals knowledgeable about the projects under consideration have the ability to separate high priority from low priority projects. I've seen ample evidence that a PPM team, using the right processes, can reliably produce a rough priority ranking of projects. Forced ranking becomes more difficult and time consuming, however, if there are:
Qualitative Project Prioritization Methods
In nearly all instances, but especially so under the above complications, a key challenge for the PPM team is reaching agreement over how projects should rank. Thus, many PPM teams use formal, mathematical approaches to aid the prioritization process. The best of the formal approaches, in my opinion, involve the estimation of project value based on quantitative analysis, the subject of Part 3 of this paper. However, qualitative approaches are also being used by PPM teams to prioritize projects. Popular qualitative methods include paired comparisons, the Q-sort method, virtual markets, and point scoring. Variations of these approaches can make it easier to involve larger groups in the prioritization process. Qualitative methods have retained a foothold because they are easy to use and easy to understand. Also, quite frankly, a certain fraction of managers remain uncomfortable using quantitative methods, especially methods involving mathematical operations beyond addition, subtraction, and multiplication.
The main advantage of the paired comparison approach is that it is relatively easy for people to express preferences for a choice involving just two alternatives. Paired comparison works well so long as there aren't too many projects.
Start by preparing a one-page data sheet summarizing each project. The summary should provide a short name for the project, a brief description of what it involves, estimated costs (in terms of dollars, hours required, etc.), and a listing of the project's presumed benefits. Make enough copies of the data sheets for distribution to each participant in the prioritization process. Also choose some means for directing the attention of the group to selected pairs of projects. For example, you could list the project names with corresponding costs in rows of a spreadsheet and use a projector to display pairs of rows on a screen. Alternatively, you could simply write the name of each project on a separate index card or Post-it® sticky, and hold up two of the cards at a time. Ask, "Which of these two projects is higher priority?"
Key to this step is encouraging participants to consider project costs as well as project benefits. As I illustrated back in Figure 13, projects should be prioritized based on the ratio of value-to-cost, assuming projects are independent of one another. Considering project value alone produces the wrong ordering. Thus, if the resource requirements of the projects differ significantly, you may need to remind participants that higher priority should be assigned to the project with the higher ratio of value-to-cost, which may not be the project with higher value. It may help to ask: "Which project provides the greater bang-for-the-buck?"
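To see why ranking on value alone misleads, here is a minimal Python illustration with two hypothetical projects (all numbers invented): sorting by value and sorting by the value-to-cost ratio produce opposite orderings.

```python
# Two hypothetical projects with very different resource requirements.
projects = {
    "Big Build": {"value": 900, "cost": 600},  # higher value
    "Quick Win": {"value": 400, "cost": 100},  # higher value-to-cost ratio
}

# Ranking by value alone puts the expensive project first...
by_value = sorted(projects, key=lambda p: projects[p]["value"], reverse=True)

# ...but ranking by "bang-for-the-buck" (value / cost) reverses the order.
by_ratio = sorted(
    projects,
    key=lambda p: projects[p]["value"] / projects[p]["cost"],
    reverse=True,
)

print(by_value)  # -> ['Big Build', 'Quick Win']
print(by_ratio)  # -> ['Quick Win', 'Big Build']
```

Here Quick Win returns 4 units of value per unit of cost versus 1.5 for Big Build, so it should rank higher despite its lower total value, assuming the projects are independent.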
Initially, some people will typically be reluctant to express their opinions. In the interest of time, you'll need to encourage quick responses, so don't waste time with discussion if people agree. If there are disagreements, encourage explanations for the different views. Relevant questions include: Will the project make a temporary or lasting difference? Is there much risk that the project will fail to achieve its goals? How urgent is the project? Would its effectiveness decline significantly if it were delayed? Again, though, try to encourage people to think about project value relative to cost. Make sure there is someone knowledgeable about each project available to explain it and answer questions in case others are unclear about exactly what the project will do. When people who disagree start repeating arguments, call for a vote to avoid getting stuck.
Once the first two projects have been compared, put the higher priority project at the top of the list and the other one below it. Now take a third project and compare it in turn to each of the two previously considered projects to determine where it should be placed. Continue until the group has compared enough of the projects to determine how they order, at which point you've got a prioritized list.
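The insertion procedure just described can be sketched as a simple algorithm. In practice, `prefer` would be the group's pairwise judgment; here a hypothetical value-to-cost table stands in for that vote.

```python
def prioritize(projects, prefer):
    """Build a priority list by inserting each project into place using
    pairwise comparisons. prefer(a, b) returns True if project a is judged
    higher priority than b (in practice, the group's call on each pair)."""
    ranked = []
    for project in projects:
        # Compare the new project, in turn, to each previously placed one
        # until we find its slot.
        pos = 0
        while pos < len(ranked) and not prefer(project, ranked[pos]):
            pos += 1
        ranked.insert(pos, project)
    return ranked

# Hypothetical stand-in for the group's judgment: higher value-to-cost wins.
ratios = {"A": 4.0, "B": 1.5, "C": 2.5}
result = prioritize(["A", "B", "C"], lambda a, b: ratios[a] > ratios[b])
print(result)  # -> ['A', 'C', 'B']
```

Note the cost of this procedure: placing n projects takes on the order of n² comparisons in the worst case, which is exactly why the text warns that paired comparison bogs down with more than a few projects.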
The main limitation of paired comparison is that it takes a lot of time if there are more than a few projects, especially if participants don't quickly agree on the pairwise comparisons. The Q-sort method is a popular, relatively quick way to involve each member of a small committee in the prioritization process.
With Q-sort, prioritization is conducted as a series of project sorts carried out individually by each participant. The simplest and most popular version of the Q-sort proceeds as follows:
The Q-sort works best with a small number of participants, becoming less efficient if more than about five people are involved. The individual priority assignments may be carried out anonymously, before the meeting, or in real time at the meeting.
Like paired comparison, the Q-sort depends on the participants having a complete and impartial understanding of each project and its effectiveness. If participants do not have an equally good understanding of all issues important to every project, basing priorities on the popular vote that underlies the Q-sort is not likely to produce an optimal project portfolio.
A limitation of paired comparison and the Q-sort method is that both fail to capture information on strength of preference; that is, the degree to which one project is preferred over another. A virtual market provides a mechanism for obtaining judgments that indicate strength of preference. In this approach, participants bid for projects, and the bids are used to establish priorities. The exercise can be structured as a game, which increases interest and involvement. The process works best when there are no more than about 20 projects, though playoffs can be held using multiple teams of participants, each bidding on a subset of projects (see below).
Virtual money for project bidding
As with paired comparison, each project has a name, description, cost, and outline of its benefits. The players are given "virtual money" for bidding. In the usual version of the game, the amount allocated to each player is set roughly equal to the total project budget, if known. If the budget is not known, the amount is set to some amount less than the total cost of doing all projects (e.g., 60%).
Each "player" bids on each project; that is, indicates how much he or she is willing to spend to obtain the anticipated benefits of conducting the project. Project costs provide information for better understanding the amount of effort involved and what, therefore, a project might accomplish, but cost should not otherwise be a factor for estimating value and, therefore, what one should bid. (You wouldn't, for example, value a prized antique less simply because you got it "free" as an inheritance.) To determine what to bid, the participant must judge the worth of the project's consequences and then allocate his or her limited virtual money accordingly. Because people get lots of practice making shopping decisions, framing project evaluation as bidding makes it familiar. By habit, players put themselves in the future perspective and ask, "How much are the project benefits worth?"
Assuming the players are charged with making decisions on behalf of their organization (e.g., the players are the PPM team), each should take the organization's perspective when determining what to bid. Discussions among the players can be useful and often lead to players increasing or decreasing bids for specific projects based on the arguments made by others.
A project ranking is created for each player. The ranking metric in each case is the ratio of the project bid to project cost. A collective project ranking is created based on averaging the bids across players and dividing by cost. The collective ranking is assumed to represent the group's project ranking.
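A minimal sketch of the collective ranking calculation, using hypothetical bids from three players on three invented projects; the metric for each project is the average bid divided by its cost.

```python
# Hypothetical virtual-market results: each project's cost and the bids
# placed on it by three players (all numbers invented).
bids = {
    "Portal":   {"cost": 200, "bids": [300, 260, 280]},
    "Refactor": {"cost": 100, "bids": [180, 150, 210]},
    "Upgrade":  {"cost": 400, "bids": [420, 380, 400]},
}

# Collective metric: average bid across players, divided by project cost.
metric = {
    name: (sum(d["bids"]) / len(d["bids"])) / d["cost"]
    for name, d in bids.items()
}

# The collective ranking orders projects by this metric, highest first.
ranking = sorted(metric, key=metric.get, reverse=True)
print(ranking)  # -> ['Refactor', 'Portal', 'Upgrade']
```

Note that the cheap Refactor project ranks first: its average bid is 1.8 times its cost, versus 1.4 for Portal and 1.0 for Upgrade, even though Upgrade attracted the largest bids in absolute terms.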
Variations in the game can be used to promote different behaviors and levels of collaboration among the participants. For example, if you want to be sure that sub-portfolios consisting of groupings of certain types of projects receive some level of funding, split projects into groups and conduct the prioritizations for each group separately. To encourage participants to value the projects from a group perspective, you can tell participants that the game "winner" will be the player whose individual ranking comes closest to the group ranking. There are a number of common statistics for comparing the "closeness" of rankings that can be easily calculated with Excel, including the Spearman rank correlation, which compares ranks, and the Pearson correlation coefficient, which compares the metrics used to compute the ranks.
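The same closeness statistics can be computed directly. The sketch below implements Pearson correlation from its definition and Spearman as Pearson applied to ranks (ties are not handled), using hypothetical bid-to-cost metrics for one player versus the group.

```python
def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman correlation: Pearson applied to ranks (no tie handling)."""
    def ranks(values):  # 1 = largest value
        order = sorted(values, reverse=True)
        return [order.index(v) + 1 for v in values]
    return pearson(ranks(x), ranks(y))

# Hypothetical bid/cost metrics, per project, for one player vs. the group.
player = [1.8, 1.4, 1.0, 0.9]
group  = [1.6, 1.5, 0.8, 1.1]

print(round(spearman(player, group), 2))  # rank agreement   -> 0.8
print(round(pearson(player, group), 2))   # metric agreement
```

A real competition with many players would more likely use a library routine (for example, `scipy.stats.spearmanr`, which also handles ties), but the hand-rolled version makes the rank-versus-metric distinction explicit.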
In the above, the collective ranking metric is the ratio of the average project bid to project cost because each player is presumed to be providing one estimate of the value of the project to the organization. Suppose, though, that it is desirable for the players to represent distinct interests. For example, an industry-sponsored organization may use dues paid by its member companies to conduct projects deemed useful to its members. In this case, the members are the players and each bid indicates the worth of the project to that specific member. The collective ranking metric should then be the sum of players' bids divided by project cost.
If you want to encourage participants to negotiate, share viewpoints, and take more ownership in the selected project set, allocate less virtual money to each player so that they must pool their money in order to buy projects (e.g., if there are N players, each player gets 1/N times the available budget). Projects make the accepted list only if they receive team bids at least equal to their costs. To handle large numbers of projects (and cases where not everyone has full understanding of all projects), tournaments may be held in which teams purchase projects from subsets of the full project portfolio. The purchased projects from each subset advance to compete against projects selected by other teams, ultimately producing a collection of "winning" projects. To reduce the dependence of results on the membership of each team, each project should compete in at least two separate competitions until the final stage of the tournament.
The most commonly used method for quickly capturing preferences for projects is point scoring. Each participant assigns points to each project, such that the number of points assigned indicates the individual's view of the project's priority. If there are a large number of projects, they can be partitioned into groups of similar projects, as described above for virtual market prioritization. Once again, to produce a priority ranking, the points/scores assigned to projects should represent judgments about the worth of a project for the resources required.
A scoring scale (e.g., 0-to-10 scale) can be used with definitions associated with each score (e.g., 10 means "exceptionally good use of resources, no weaknesses"). After a few projects have been scored, you can select some to serve as benchmarks to make it easier (via pairwise comparison) to score the others. To obtain the group ranking, simply add or average the scores assigned to each project by the participants.
Be Careful Using Scoring Models
It's much easier for people to assign points to projects based on judgments about project value rather than the ratio of value-to-cost. Consequently, many organizations are using improperly designed scoring approaches. Across the internet you can find many recommendations for multi-criteria scoring systems that involve defining various criteria relevant to judging projects and specifying simple scoring scales for each criterion. For example, with regard to a given criterion, 1 = poor, 2 = OK, and 3 = good. The scores are tabulated and used to obtain a total score for each project. A total score above a certain level is judged a "must do." Alternatively, projects might be ranked or grouped into priority categories based on their total scores.
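For concreteness, here is a minimal sketch of the multi-criteria tabulation just described. The criteria names, scores, and "must do" threshold are all invented, and this is precisely the style of model the surrounding discussion warns against.

```python
# A simple multi-criteria scoring model of the kind found across the
# internet: each project gets 1 (poor), 2 (OK), or 3 (good) per criterion.
# Criteria, scores, and the threshold are hypothetical.
criteria = ["strategic fit", "risk", "urgency"]
scores = {
    "Project X": [3, 2, 3],
    "Project Y": [2, 2, 2],
}
MUST_DO_THRESHOLD = 7  # total score at or above this is labeled "must do"

for name, s in scores.items():
    total = sum(s)  # tabulate a total score per project
    label = "must do" if total >= MUST_DO_THRESHOLD else "rank by score"
    print(f"{name}: total={total} -> {label}")
```

Notice that cost appears nowhere in the calculation, which is the logical flaw discussed next: a cheap project and one ten times its cost can earn identical totals.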
In addition to the logical flaw associated with failure to prioritize based on the ratio of value-to-cost, scoring methods of this sort typically produce practical problems. Frequently, too many projects get high scores and/or are labeled must do's. If some types of projects, such as safety projects, are designated as critical, project proponents may place their projects in the critical categories even though the connection is small or indirect at best. For example, a project might be labeled a safety project because there is some influence on safety, even though it is clear that the very small decrease in the likelihood or severity of accidents that would result from the project could not justify the cost.
Assigning a number to something doesn't necessarily make for a more accurate method of measurement. If scores are subjectively assigned to measures without clear criteria, different people will assign wildly different scores for the same project, and the same person may assign different scores on different occasions. Regardless, middle scores are common for most projects, especially when numerous scoring criteria are used. High scores on some criteria cancel out low scores on others. Furthermore, most scoring models aren't sufficiently precise to trust small differences in total scores.
In any case, ranking projects by project scores will be incorrect unless those scores measure the ratio of project value to project cost. Most scoring systems don't claim to measure value. Even when they do, they often fail to properly scale results to project cost. Recognizing this problem, some have proposed dividing the points assigned to projects by project costs. However, experience shows project scores (unlike bids) don't vary as widely as project costs (a bias known as insensitivity to project scope). Consequently, this approach tends to result in more expensive projects being ranked low. Another idea is to apply a scaling function to scores, so that higher scores result in a proportionally higher number of points being assigned. When one looks carefully at scoring methods, it becomes clear that more fundamental approaches are needed to capture project value for the purpose of prioritizing projects.
Another problem with quantitative methods is that they can be easy to misapply. Some also lack any grounding in decision theory, which can cause their project rankings to exhibit strong biases. For example, although nearly all quantitative methods rank projects using some computed measure of project attractiveness, some don't divide the measure by project cost (as they must to obtain the correct ranking metric). (Even if they did, the ranking would be in error because their ranking metric is at best an ordinal utility, not a cardinal one.) The result of failing to consider each project's claim on a limited capital budget is that the most expensive projects typically rank high while inexpensive projects rank low. When users see a clear bias in a quantitative method, they quickly stop using it.
Improve Project Data
Even the best project prioritization process will be worthless without adequate project data. "A micrometer won't help you measure a cloud." Thus, one way to improve the prioritization process is to improve the quality of available project information.
The first step to getting better data is to make sure that information requirements are well-defined. If project proponents are clear about what information is needed to enable their proposals to be considered, they are much more likely to supply that information. Thus, the templates for collecting data on project proposals must be complete and precise.
Second, there should be a culture and expectation that rigor is required to generate project proposals. Estimates and forecasts should be backed by reason and analysis. Project proponents need to do their homework before the project gets proposed up the management chain.
Third, the organization should be prepared to allocate increased resources to project planning. Skill, experience, and true cross-functional collaboration are often needed to generate solid project proposals. Inevitably, increasing the effort devoted to preparing project proposals detracts from the resources available for actually doing projects. However, as previously asserted, the tradeoff in improved decision making will likely be worth it.
Note that a lack of adequate systems for collecting detailed, quantitative project data should not rule out attempts to implement PPM. In my experience, project evaluation systems based mostly on data generated subjectively through "best professional judgment" can perform surprisingly well, so long as the proper judgments are being generated, those providing the judgments are knowledgeable and unbiased, and the correct logic is used to translate the judgments into project priorities. Invariably, PPM acts as a "forcing function" that causes the organization to improve its ability to collect and document essential business data.
Estimate Cost, Value and Risk
Using the right criterion to prioritize projects, the ratio of project value to project cost, is critical, as it provides the "true north" for guiding project selection decisions. That ratio can serve as the basis for establishing a common understanding across the organization of what is important. Thus, you can best support the prioritization process by providing value and cost estimates for each project, and by estimating risk, as risk impacts project value.
Admittedly, coming up with the required estimates can be difficult. In the case of costs, be sure to use full cost accounting. When evaluating proposed expenditures, some organizations make the error of detailing only non-routine costs, such as one-time "build" costs, external contractor and consulting costs, and technology costs. I've seen corporate project costing directives that advise project proposers to "exclude internal resources, licensing and ongoing maintenance fees, and marketing, customer service and support." Such policies invite "cherry picking" project costs during evaluation and put the organization at risk of selecting projects that do not produce the expected returns; at worst, the company could be funding initiatives that actually reduce overall profitability and value.
As mentioned earlier, project costs include not just the funding request, but also any funding provided from other sources plus the opportunity cost of using equipment, personnel, raw material and any other "non-costed" resources that will be employed by the project. Also, all future costs necessary to obtain project benefits, including future operating and maintenance costs, should be identified, estimated, and included in the calculation. A project to install a $100,000 building security system, for example, will likely produce future costs associated with the necessary labor to operate and maintain the system.
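Continuing the security-system example, here is a sketch of full-lifecycle costing. The annual operating cost, time horizon, and discount rate are assumed for illustration; only the $100,000 installation figure comes from the text.

```python
# Full-cost accounting for the hypothetical building security system:
# the funding request plus the discounted stream of future O&M costs.
install_cost = 100_000   # upfront "build" cost (from the example)
annual_om    = 15_000    # assumed annual operating & maintenance labor
years        = 10        # assumed service life
rate         = 0.08      # assumed discount rate

# Present value of each future year's O&M cost, discounted back to today.
pv_om = sum(annual_om / (1 + rate) ** t for t in range(1, years + 1))

total_cost = install_cost + pv_om
print(f"Full project cost: ${total_cost:,.0f}")
```

Under these assumptions the system's true cost is roughly double the funding request, which is exactly why evaluating only the $100,000 installation invites bad ranking decisions.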
Some companies still do not track costs at the project level, relying instead on the general ledger system to impute approximate project costs. Tracking project costs is essential to encourage accurate estimating and provide budget data needed to make, monitor, and update project decisions. The foundation for effective PPM includes a finance system that tracks labor costs using fully burdened labor cost rates for roles and individual resources.
Estimating value can be even more difficult than estimating costs. Establishing a PPM office, creating a database of available projects, and instituting forced ranking of projects helps, but the project portfolio won't be optimized without the ability to estimate project value. Thus, the key to reaping the full benefit of PPM is implementing a formal, organized, and logical method for measuring project value.
Establishing logic for quantifying project value, in my opinion, is not only critical to obtaining accurate and consistent project priorities, it is critical to justifying the role of the PPM office within the governance system for the enterprise. PPM is fundamentally different from project management with regard to the governance structure. Because project management, primarily concerned with achieving project deliverables, is largely a tactical function, it can be delegated without formal systems to ensure compliance with executive preferences. PPM, however, is focused on making project decisions intended to achieve the fundamental and strategic objectives of the organization. To justify the delegation of the portfolio management function to a team other than the organization's most senior executives requires a formal prioritization process that makes explicit what would otherwise be the implicit preferences of senior management. Thus, PPM demands that systems be in place to help managers measure the value of projects consistent with the organization's fundamental objectives and strategy as established by senior executives. This leads to the most interesting part of the discussion—developing the metrics and models for measuring project and portfolio value. The next part of this paper explains how this may be accomplished.
References for Part 2