A measure of an individual's or organization's willingness to accept risk when making choices. Risk tolerance may be assessed and
quantified as a parameter in a utility function. See the section of the paper on risk for how this may be done.
Stands for Software as a Service (typically pronounced "sass"). A means for making software applications available to customers, typically over the internet. The
software is not sold for local installation but is made available as a service on a subscription basis. Many project portfolio management (PPM) tools,
especially those with less sophisticated analytics, are provided as SaaS. More information on SaaS is provided in the paper chapters on PPM tool differences
and on PPM tool costs and risks.
A term used to describe a safe testing environment with controlled or limited access within which a user or application program can "play" without risking damage to
the larger system. Project portfolio management (PPM) tools are often advertised as providing a "sandbox" for users to enter and analyze project data
without committing that data to the project database available to other users.
A graduated range of numbers used as a means for measuring something. In the context of project prioritization, scales are
used as a basis for assigning numbers to projects or to specified attributes of the projects. The numbers are
combined using an aggregation equation, and the results of the computations are used to select or prioritize the projects.
Because the goal for project selection is to choose projects that collectively create the most value, the scales and associated
aggregation equation should produce estimates of project and portfolio value, although that is not always the case. For example, in the most common scoring model used by many project portfolio management (PPM) tools, the assigned numbers (scores) are simply
weighted and added, without regard to whether the specified scales allow such operations (or whether the results have anything to do with project value). One requirement for producing
estimates of project value is the use of the proper kinds of scales.
The most familiar type of scale for measuring project attributes is a natural scale. A natural scale is a commonly used scale for the attribute being measured.
For example, project cost is commonly measured in dollars (or other currency), and time to complete may be measured in months.
Although it might seem that it would always be best to use natural scales for quantifying project attributes, that is not the case. A natural scale for the attribute
in question may not exist, or there may be so many considerations relevant to prioritization that defining a natural scale for each would make project assessment too complex. When
natural scales do not exist or are inconvenient to use, context-specific constructed scales may be defined. For example, a four-level constructed scale for "impact on community jobs"
might be defined as:
Instructions: Select the score that, on balance, best reflects your judgment of the impact on jobs.
Note: You may choose a score between the integer values (e.g., 2.5).
A sample constructed scale.
In this example, the scores on the scale for impact on jobs are defined in terms of two attributes, the number of jobs created and salary, both expressed in natural units.
A critical consideration for defining scales is whether those scales will allow the computations needed to derive project priorities. In this regard, there are three
major categories of scales: ordinal scales, interval scales, and ratio scales.
An ordinal scale is one that merely indicates how something is ranked. For example, 10 projects could be ranked in terms of preference. The project ranked 2 is
preferred over the project ranked 3, but the scale doesn't indicate by how much. Ordinal scales tend to be easy for people to apply, but, because the differences between scale numbers
are not specified, no mathematical operations can be meaningfully applied to the scores from such scales. Thus, for example, if a scoring model uses an ordinal scale to rank projects
based on two criteria, say "impact on corporate image" and "financial attractiveness," adding or averaging those rankings would lead to meaningless results.
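To see why, consider a small hypothetical illustration: projects that differ greatly in underlying value can receive identical averaged ranks, so arithmetic on ordinal scores hides real differences:

```python
# Hypothetical illustration: why arithmetic on ordinal ranks misleads.
# Three projects scored on two criteria. The underlying (unobserved) values
# show Project A far ahead on image and only barely behind on financials.
underlying = {
    "A": {"image": 95, "financial": 79},
    "B": {"image": 50, "financial": 80},
    "C": {"image": 40, "financial": 81},
}

# Ordinal ranks (1 = best) discard the magnitudes of those differences.
ranks = {
    "A": {"image": 1, "financial": 3},
    "B": {"image": 2, "financial": 2},
    "C": {"image": 3, "financial": 1},
}

# Averaging the ranks makes all three projects look identical (2.0 each),
# even though the underlying values clearly favor Project A.
for project, r in ranks.items():
    print(project, sum(r.values()) / len(r))
```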
A scale is interval scaled if the numbers assigned are in units of equal magnitude. The Fahrenheit and Celsius scales for temperature are examples of interval
scales—in each case the scale units (degrees) don't change at different locations of the scales. The difference between 100 degrees and 90 degrees is the same difference as
between 90 degrees and 80 degrees. Because distances between numbers in an interval scale have meaning, the numbers assigned with such scales can be added or subtracted. Thus, for
example, if you're using interval scales you can use aggregation equations that weight, add, or subtract the assigned numbers. Weighting and adding scores from scales that are not
interval scaled is one of the most common errors made in the design of priority systems.
Although you can meaningfully weight and add interval scaled numbers, you cannot generally multiply them. This is because interval scales may have arbitrary zero
points. The Celsius temperature scale, for example, arbitrarily defines zero as the temperature at which water freezes. This makes ratios expressed using scale numbers meaningless—a temperature
of 50 degrees Fahrenheit is not twice as hot as a temperature of 25 degrees Fahrenheit, as demonstrated by the fact that the corresponding temperatures on the Celsius scale, 10 and -3.9 degrees, are not in
the ratio 2 to 1.
A scale is ratio scaled if, in addition to being interval scaled, it has a zero level that corresponds to "none of the attribute" (also referred to as an
absolute scale). With ratio scales, attributes are assigned numbers such that (1) the differences between the numbers reflect differences in the amount of the attribute and (2) ratios
between the numbers reflect ratios of the attribute. This ensures that the numbers assigned to various degrees or amounts of the attribute bear a direct relationship to the absolute
amount of the attribute. With such a scale, we can say not only that one project has so many units more of an attribute than a second project, but also that the first project has so many
times as much of the attribute as the second project. Time measured in months, for example, is a ratio scale. A project that requires 4 months takes twice as long as a project that
requires 2 months. Costs and probabilities are defined on ratio scales. Likewise, the weights in an additive value function are defined along a ratio scale. All arithmetic
operations are permitted on numbers that fall along a ratio scale, and you can multiply ratio-scaled values by interval-scaled values.
To help avoid errors in project selection decision models, you should strive to define scales that are ratio scaled, or be careful not to apply computations that
aren't permitted for the types of scales you've defined. The example scale for job creation shown above is neither interval scaled nor ratio scaled. However, it is anchored in natural
units that give meaning to differences in scale values and has an absolute zero. In such cases it is often possible to use a scaling
function to translate such scales into alternative units such that the result is ratio scaled. For example, if total income from new jobs is viewed as a reasonable measure of the
impact on jobs, a scaling function for making the measure ratio scaled would be the product of the number of jobs created and the average salary for those jobs.
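A minimal sketch of that scaling function (the figures are hypothetical):

```python
def job_impact_value(jobs_created: int, average_salary: float) -> float:
    """Ratio-scaled measure of job impact: total income from new jobs.

    Zero jobs yields zero value (a true zero point), and doubling the
    number of jobs at the same salary doubles the measure.
    """
    return jobs_created * average_salary

# Hypothetical example: 50 new jobs at an average salary of $40,000.
print(job_impact_value(50, 40_000))  # 2000000
```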
A functional relationship for translating scales. For example, if the units on a scale aren't of equal magnitude (e.g., as in a
logarithmic scale), a scaling function might be applied to convert the scale into one with equal-sized units. This approach is often used to obtain interval scales that allow
differences in scale values to reliably indicate differences in the amount of the measure.
A common use of scaling functions for project prioritization is to convert a measure of project performance relative to some objective into a measure
indicating the value of that level of performance. In this instance, the scaling function may be referred to as a value curve or value scaling function: a functional relationship,
used in a decision model, that translates a level of performance, as expressed by a performance measure, into a number that indicates the value or desirability of that level of performance. In
the example below, which might apply to an electric utility concerned with quickly restoring service to customers without power, the x-axis denotes the amount of time the customer is
without power. The y-axis is a relative measure of value, defined such that 100 indicates maximum value and 0 represents the value associated with the worst level of performance that
the utility expects could occur (in this case, an outage lasting 24 hours). In the example, the scaling function is non-linear to reflect the fact that residential customers will often
suffer greater losses when the duration of an electric outage approaches 4 to 8 hours, because, for example, an outage of such duration may cause refrigerated food to spoil.
A sample scaling function
A decision model may require a scaling function for each of its performance measures. However, if the incremental value of obtaining a unit of improvement as expressed
by the performance measure does not depend on the current level of performance, the scaling function will be linear (a straight line). Performance measures are often defined in such a
way that the linear assumption holds.
Mathematically, a scaling function has the form V = S(p), where "p" is the performance measure, "S" is the scaling function, and "V" is the measure of value. The
differences in the values of V produced under various levels of performance p indicate by how much the higher levels of performance are preferred. As in the above example, by
convention, a scaling function often expresses value on a zero-to-100 scale. In technical terms, a scaling function is a single-attribute utility
function—a utility function with only a single independent variable for measuring performance.
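As an illustration of the form V = S(p), the outage example above might be sketched as follows (the breakpoints and values are assumptions made for the sketch, not figures from the paper):

```python
import numpy as np

# Illustrative scaling function V = S(p) for outage duration.
# Value falls fastest between 4 and 8 hours (e.g., refrigerated food
# spoils), and 24 hours is the worst outage expected (value 0).
hours = np.array([0.0, 4.0, 8.0, 24.0])
value = np.array([100.0, 85.0, 35.0, 0.0])

def outage_value(duration_hours: float) -> float:
    """Piecewise-linear scaling function translating performance to value."""
    return float(np.interp(duration_hours, hours, value))

print(outage_value(2))   # 92.5
print(outage_value(6))   # 60.0
print(outage_value(24))  # 0.0
```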
An internally consistent description of a possible sequence of events, or situation, based on assumptions and factors chosen by the scenario creator. Scenarios are
commonly used as a basis for estimating the implications of taking some action. For example, a project might be evaluated assuming one or more
scenarios, or visions of what the future might bring.
Scenario analysis involves using multiple scenarios and considering the implications of those possible futures. Scenario analysis is a form of risk analysis in
that it helps to create understanding of the implications of uncertainty.
Scenarios have been likened to "mental movies," and the term is the same as that used in the film and television industry to describe the script that ties a story's
events together. Creating scenarios is a common technique for forecasting the possible consequences of situations or actions, especially in support of long-range planning.
A table, displayed on a single page or screen, that summarizes the results of applying a scoring model. The purpose is to
quickly convey judged performance (e.g., project performance) relative to various dimensions or objectives of interest.
An assessment of performance that involves assigning a score based on one or more predefined scales.
A type of decision model often used for project selection that involves scoring projects against multiple criteria. Various criteria (considerations) for choosing projects are identified.
These typically include financial criteria (e.g., net present value), plus criteria related to customer service, safety, contribution to strategy,
risk, etc. Each project is evaluated (scored) against each criterion, and the scores are combined in some way to obtain an overall measure of attractiveness, or figure of merit, for
each project. Scoring models differ mainly in how the criteria are defined and measured, and how the individual
assessments are aggregated to obtain an overall project figure of merit. Such differences significantly affect the complexity, information requirements, reliability, and defensibility
of the model.
Although there are a number of sophisticated, multi-criteria decision models that involve scoring, including AHP, ELECTRE, goal programming, and PROMETHEE, the term scoring model typically refers
to the least complicated type of multi-criteria model wherein the project figure of merit is obtained by simply adding, or, more commonly, weighting and adding, the scores assigned to
the individual criteria. This results in a method of evaluation that is very simple to implement and understand, but one that, typically, is not very reliable or defensible. Many, if
not most, project portfolio management tools are limited to using this type of simple scoring model for evaluating or ranking projects.
There are three types of scoring models: checklist models, un-weighted scoring models, and weighted scoring models:
- With a checklist model, the criteria are expressed as yes/no statements (e.g., "Payback period less than 5 years", "Project
involves no safety risk") listed in a table. Individuals then evaluate each project by indicating (checking) those criteria from the list that the project satisfies. The checks are
counted for each project, and the totals are used as the measure for ranking the projects. Since a check counts as a score of "1" when totaling scores, the checklist model is
sometimes referred to as an un-weighted, 0/1 factor model.
- An un-weighted scoring model is similar to a checklist model, but allows for gradations in project scores. Instead of expressing the criteria as yes/no statements, a scale
is used. Often, a 5-point scale is selected, where 5 means the project is very good with respect to the criterion, 4 means good, 3 means fair, 2 means poor, and 1 means very poor. The
scores are summed, and the totals are used as the measure of project attractiveness.
- A scoring model is a weighted scoring model if it allows weights to be assigned to the criteria. A weighted scoring model has the mathematical form:

  $$S_j = \sum_{i=1}^{N} w_i s_{ij}$$

  where $S_j$ is the total score for the $j$th project, $N$ is the number of criteria, $w_i$ is the weight assigned to the $i$th criterion, and $s_{ij}$ is the score of the $j$th project on the $i$th criterion. The weights are typically assumed to represent some concept of the relative importance of each criterion. Methods used for assigning weights include paired comparison, AHP, and the swing weight method. Although it is not necessary and has no effect on relative rankings, weights are often scaled to sum to one, expressed mathematically as:

  $$\sum_{i=1}^{N} w_i = 1$$
This allows the weight on each criterion to be interpreted as the percent of the total weight assigned to that particular criterion.
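As a concrete illustration, here is a minimal sketch of a weighted scoring model in Python (the projects, criteria, scores, and weights are hypothetical):

```python
# Hypothetical weighted scoring model: S_j = sum_i w_i * s_ij.
criteria = ["financial", "strategy", "risk"]
raw_weights = {"financial": 40, "strategy": 20, "risk": 10}

# Normalize the weights to sum to one (does not affect relative rankings).
total = sum(raw_weights.values())
weights = {c: w / total for c, w in raw_weights.items()}

# Scores s_ij for each project on each criterion (e.g., 1-to-5 scales).
scores = {
    "Project A": {"financial": 4, "strategy": 3, "risk": 5},
    "Project B": {"financial": 5, "strategy": 2, "risk": 3},
}

# Total score S_j for each project, ranked from highest to lowest.
totals = {p: sum(weights[c] * s[c] for c in criteria) for p, s in scores.items()}
for project, s_j in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{project}: {s_j:.2f}")
```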
The main advantage of scoring models is that they provide a way to capture the multiple considerations that are relevant when deciding whether or not to conduct a
project. Scoring models are very easy to create and simple to understand. A scoring model can easily be implemented in Excel or one of the other standard computer spreadsheet tools. The
model is flexible and can be easily altered or changed to accommodate changes in organizational preferences or managerial policy. Another advantage of a scoring model is that although
the model is developed to support project selection, that same model can be used as a guide for project improvement. A project's scores on each criterion can be compared with the best
possible score. The differences, when multiplied by the weights, indicate the types of improvements that would most improve the project's attractiveness as measured by the scoring model.
The main disadvantage of scoring models is that the model output is typically not a reasonable measure of the value of doing the
project. Without a sound measure of project value, it is impossible to know whether the project is worth its costs or to identify the portfolio of projects that produces the most value
given the resources available. Under the standard scoring model, the mathematical equation for computing total scores is linear, implicitly assuming that a unit improvement on any
criterion always contributes the same amount to project attractiveness regardless of how well the project performs against that criterion or on any other criterion. Many relevant
criteria, such as risk, can't be reasonably captured using linear equations.
Also, because it is so easy to define criteria, it is common for scoring models to contain many criteria. The criteria often overlap or represent similar or related
objectives, and this overlap can produce significant biases. Such errors can be reduced by placing restrictions on how criteria are defined and measured, but this complication
effectively means applying a different approach (see multi-attribute utility analysis). Such complications, though needed for accuracy, eliminate the
simplicity of design that is the main attraction of scoring models.
A method for determining how variations in the outputs of a model depend on variations in the model's inputs and other assumptions. In the simplest form of
sensitivity analysis, each input variable is varied over a range representing its uncertainty, and the impact on model outputs is observed. Those variables that produce the biggest
changes to model outputs are identified as the variables whose uncertainties are most critical to model predictions. Other forms of sensitivity analysis involve varying the structure of
the model, or its underlying assumptions, and observing the effect on outputs.
Simulation is a form of sensitivity analysis which can be used to explore how simultaneous variations in the values of input
variables affect model outputs. Other forms of sensitivity analysis show how variations in the outputs of a model can be apportioned to different sources of variation in inputs.
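A minimal sketch of the simplest, vary-one-input-at-a-time form of sensitivity analysis (the model and uncertainty ranges below are hypothetical):

```python
# Hypothetical model: product value as a function of three uncertain inputs.
def product_value(market_growth: float, price: float, unit_cost: float) -> float:
    units = 1_000 * (1 + market_growth) ** 5       # units sold after 5 years
    return units * (price - unit_cost)             # simple value proxy

# Base-case values and low/high ranges expressing each input's uncertainty.
base = {"market_growth": 0.05, "price": 20.0, "unit_cost": 12.0}
ranges = {
    "market_growth": (0.00, 0.12),
    "price": (18.0, 22.0),
    "unit_cost": (10.0, 14.0),
}

# Vary one input at a time, holding the others at base case, and record the
# swing in model output; the largest swings flag the most critical inputs.
for name, (lo, hi) in ranges.items():
    out_lo = product_value(**{**base, name: lo})
    out_hi = product_value(**{**base, name: hi})
    print(f"{name}: swing = {abs(out_hi - out_lo):,.0f}")
```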
Sensitivity analysis is useful for many purposes. For example, it can indicate where additional effort might be most useful for improving confidence in model
predictions. Suppose a sensitivity analysis showed that a small change in the assumed growth rate for the market served by a new product results in a very large change in the computed
value to be derived from that product. The result would suggest that it might be useful to use a probability distribution to
describe uncertainty in market growth rate and to use a probabilistic analysis to characterize the resulting uncertainty in the value of the new product. Additionally, the result would
suggest that it may be worthwhile to devote additional effort to estimating market growth rate before committing to produce the new product. Furthermore, it would suggest that, after
introducing the new product, the growth in market size should be measured and tracked closely to support future decisions regarding the product.
Sensitivity analysis can be used to test a model and explore how closely it corresponds to the real world processes that it is meant to represent. Depending on the
results of such tests, sensitivity analysis will identify errors that need to be corrected or build confidence in the model and its predictions. In this way, sensitivity analysis
promotes model improvement via application of the Scientific Method.
A quantity that may be computed by some project portfolio management tools. Suppose a tool uses an optimization engine to identify the project portfolio that produces the greatest portfolio value subject to meeting some
constraint, such as a maximum allowable budget year cost. A natural question would be, "By how much would the value of the optimal portfolio increase if the constraint were relaxed?"
The shadow price for the budget constraint is the amount by which portfolio value could be increased if the constraint were relaxed by one dollar.
More generally, the shadow price on a constraint defined for an optimization is the amount by which the objective
function would increase if the constraint were relaxed by one unit. Capability to compute shadow prices for resource constraints can help organizations identify resources that they
may want to increase.
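The concept can be illustrated with a brute-force portfolio optimization (the project data are hypothetical; a PPM tool would obtain shadow prices directly from its optimization engine):

```python
from itertools import combinations

# Hypothetical projects: (name, cost in $M, value in $M).
projects = [("A", 4, 10), ("B", 3, 8), ("C", 2, 5), ("D", 5, 11)]

def best_portfolio_value(budget: float) -> float:
    """Value of the best project subset whose total cost fits the budget."""
    best = 0.0
    for r in range(len(projects) + 1):
        for subset in combinations(projects, r):
            if sum(p[1] for p in subset) <= budget:
                best = max(best, sum(p[2] for p in subset))
    return best

# Approximate the shadow price on the budget constraint: the increase in
# optimal portfolio value per unit ($M here) of budget relaxation.
budget, delta = 9.0, 1.0
shadow = (best_portfolio_value(budget + delta) - best_portfolio_value(budget)) / delta
print(shadow)  # 1.0: relaxing the budget by $1M adds $1M of portfolio value
```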
A technique for predicting or analyzing the outcomes of a real world situation using an analytic model represented within a computer program. In the context of
project portfolio management, simulation typically involves predicting the consequences of individual projects or
portfolios of projects. The simulation model takes as input assumptions regarding the project and produces as output project consequences relevant to the achievement of the
organization's objectives (these outcomes are project or portfolio performance measures). The
simulation process involves generating scenarios consisting of assumptions for the project or project portfolio and using the model to determine
(simulate) what the corresponding business consequences might be. Monte Carlo simulation is a form of simulation that involves using a built-in
random process to select assumptions for the scenarios. The distribution of model outputs is then used to assign probability
distributions representing uncertainty over project or portfolio consequences.
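A minimal Monte Carlo sketch (the model and input distributions are assumptions made for illustration): uncertain inputs are sampled repeatedly, and the resulting spread of outputs characterizes uncertainty in project value.

```python
import random

random.seed(1)  # reproducible illustration

def project_npv(market_size: float, share: float, margin: float) -> float:
    """Toy model translating scenario assumptions into a project consequence."""
    return market_size * share * margin - 2.0  # less a fixed $2M project cost

# Monte Carlo simulation: sample assumptions, simulate, collect outputs.
outcomes = []
for _ in range(10_000):
    market_size = random.gauss(100.0, 20.0)    # market size ($M), normal
    share = random.uniform(0.02, 0.08)         # market share captured
    margin = random.triangular(0.2, 0.5, 0.3)  # profit margin
    outcomes.append(project_npv(market_size, share, margin))

# Use the output distribution to characterize uncertainty in project value.
outcomes.sort()
mean = sum(outcomes) / len(outcomes)
p10, p90 = outcomes[1000], outcomes[9000]
prob_loss = sum(o < 0 for o in outcomes) / len(outcomes)
print(f"mean={mean:.2f}  P10={p10:.2f}  P90={p90:.2f}  P(loss)={prob_loss:.1%}")
```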
A dynamic simulation is one wherein the model represents the time sequence by which the various relevant changes and impacts occur. For example, a model for
simulating a new product development project might first represent the attributes of the product likely to result from the project, then represent the sales likely to occur based on
those product attributes, and, finally, translate those sales into a corresponding revenue stream for the organization.
In theory, any project outcomes that can be anticipated and represented as mathematical cause-effect or influencing relationships can be simulated. In practice,
however, simulation is often difficult because there are so many factors that influence outcomes and those influences are complex and only partially understood. An efficient simulation
captures only those factors and influences that are most important.
A popular business methodology, developed originally by Motorola in the 1980s, for improving the quality of business process outputs. Some project portfolio management tools incorporate templates and aids to support Six Sigma as applied to projects.
The Six Sigma methodology aims to identify and remove the causes of defects (errors or variations in process outputs) that lead to customer dissatisfaction. There are
five steps in the methodology (abbreviated DMAIC): (1) define the customer and business goals for the process, (2) measure defects in the performance of the current process, (3) analyze
the data to identify root causes of defects, (4) improve the process to reduce defects, and (5) control the variables that cause defects. Six Sigma defines metrics for measuring process
quality, employs statistical analysis, and establishes an infrastructure of people within the organization to advance the methodology ("Green Belts," "Black Belts," etc.). The term "six
sigma" refers to a statistical measure of how far a given process deviates from perfection, and implies reducing defects to no more than 3.4 per million opportunities.
The steepness of a curve at some designated point. The slope of a curve or line indicates how much change in the dependent y-variable occurs when the
independent x-variable changes one unit. A horizontal line has a slope of zero. A line that makes a 45 degree angle with the x-axis has a slope of one.
A multi-criteria analysis method originally developed in the 1970s by behavioral psychologist and decision analyst Ward Edwards.
SMART stands for Simple Multi-Attribute Rating Technique. SMART has been incorporated into many decision-aiding tools, and several project portfolio management tools rank
projects using the technique. Compared to multi-attribute utility analysis, SMART makes simplifying assumptions
for the purpose of enabling quick assessment techniques.
SMART recommends a multi-step ranking method that begins with identifying the criteria, or value dimensions to be used for evaluating alternatives. The value
dimensions are then ranked based on judged importance, and the least important dimension is assigned a value weight of 10. The next-to-least-important dimension is assigned an
importance weight representing the ratio of its relative importance to that of the least-important dimension. For example, if this dimension was viewed as twice as important as the
least important dimension, it would be assigned a weight of 20. Weights are assigned to the other dimensions in the same way, preserving importance ratios. The weights are then
normalized to sum to one by dividing each weight by the sum of all of the weights. Each alternative is then rated on each dimension using a zero-to-100 scale. The ratings are weighted
and summed, and the results used to rank the alternatives.
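A minimal sketch of the SMART procedure just described (the dimensions, weights, and ratings are hypothetical):

```python
# Hypothetical SMART ranking. Dimensions are listed from least to most
# important; each raw weight expresses importance relative to the least
# important dimension, which is assigned a weight of 10.
raw_weights = {"community impact": 10, "strategic fit": 20, "financial": 40}

# Normalize the weights to sum to one.
total = sum(raw_weights.values())
weights = {d: w / total for d, w in raw_weights.items()}

# Rate each alternative on each dimension using a zero-to-100 scale.
ratings = {
    "Alternative 1": {"community impact": 80, "strategic fit": 40, "financial": 70},
    "Alternative 2": {"community impact": 30, "strategic fit": 90, "financial": 60},
}

# Weight and sum the ratings, then rank alternatives from best to worst.
scores = {a: sum(weights[d] * r[d] for d in weights) for a, r in ratings.items()}
for alt, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{alt}: {score:.1f}")
```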
The simplifications inherent in SMART can lead to errors. In particular, if the value dimensions are not preferentially independent (e.g., if the importance of a dimension depends on the performance of the alternatives with respect to some
other dimension) or if value is not directly proportional to rating (e.g., if a rating of 50 is not half as valuable as a rating of 100, in which case a scaling function is needed), then there may be significant errors in the rankings produced by SMART. Edwards and colleagues also developed
"improved" versions of SMART, called SMARTS and SMARTER. SMARTS is simply SMART using the more defensible swing weight method for