Lee Merkhofer Consulting Priority Systems
Implementing project portfolio management

"The best method for assessing a single attribute value function depends on its qualitative characteristics"

Assessing Single-Attribute Value Functions

As explained on previous pages, von Neumann-Morgenstern utility theory provides a basis for creating a model for estimating the value of projects [1]. The key is defining project objectives and performance measures that are mutually preferentially independent. If this requirement is met, project value may be computed using an additive equation composed of single-attribute value functions (also called scaling functions) and weights:


V(x1,x2...xN) = w1V1(x1) + w2V2(x2) + ... + wNVN(xN)


On this page I describe how to determine the single-attribute value functions, the Vi in the above equation. Figure 32 identifies this step and shows how it relates to my 12-step process for creating a project selection model.
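The additive equation can be sketched in a few lines of code. The two performance measures, their value functions, and the weights below are purely illustrative assumptions, not part of any real model:

```python
# A minimal sketch of the additive value model. All attribute names,
# value-function shapes, and weights are hypothetical.

def v_npv(x):
    """Single-attribute value function for net present value ($M);
    assumed linear up to an illustrative $10M cap."""
    return min(x / 10.0, 1.0)

def v_safety(x):
    """Value function for incidents avoided per year;
    assumed to show diminishing marginal value."""
    return 1.0 - 0.5 ** x

weights = {"npv": 0.7, "safety": 0.3}          # assumed swing weights, sum to 1
value_functions = {"npv": v_npv, "safety": v_safety}

def project_value(performance):
    """V(x1,...,xN) = w1*V1(x1) + ... + wN*VN(xN)."""
    return sum(weights[a] * value_functions[a](x) for a, x in performance.items())

print(round(project_value({"npv": 6.0, "safety": 2.0}), 3))
```

Each attribute contributes its weighted single-attribute value, and the contributions simply add, which is exactly what mutual preferential independence licenses.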



Steps for creating a project selection model

Figure 32:   Steps for creating a project selection decision model.


Characteristics of Single-Attribute Value Functions

A single attribute value function, Vi, translates the level of performance achieved for an attribute, xi, expressed in whatever unit of measurement was chosen for it, into a number indicating how useful or desirable that level of performance is perceived to be by the decision maker. With the additive value function, every performance measure has its own separate value function. If the decision maker accepts the utility theory axioms, and performance measures have been defined to be mutually preferentially independent, the theory guarantees that a value function, Vi, will exist for each of the attributes defined for measuring project performance [2].

To Normalize or Not To Normalize

As noted previously, many authors recommend that single attribute value functions be normalized such that zero is assigned to the worst performance level obtained by any alternative and one (or one hundred) is assigned to the best performance level obtained by any alternative [3]. By following this recommendation, each single-attribute function is standardized so that its value falls in the same 0-to-1 (or zero-to-100) interval. A benefit of such normalization is that there is a common interpretation for every weight; namely, each weight indicates the relative value obtained if performance for the corresponding attribute swings from the worst performance achieved for any alternative to the best performance achieved for any alternative.

A problem for using this form of normalization for a project priority system is that the worst and best performance levels are likely to change each time the system is applied. Suppose, for example, that a firm wishes to prioritize its sales representatives, and one performance measure is the sales person's "opportunity win rate," defined as the percentage of leads that are converted into sales. The range defined by the current year's worst and best win rates may not encompass the worst and best rates obtained next year. You could renormalize each year, but that won't let you compare prioritization metrics across years. You could select a larger range, for example, one based on the worst and best performance levels from a longer time span, but that still won't guarantee that performance levels in all future year applications will fall within the defined range. For this reason, if single attribute value functions are normalized for use in priority systems, the functions are typically defined and normalized over a range that encompasses all theoretically possible levels of performance.


Choosing a range for normalizing a scaling function

Figure 33:   Alternative performance ranges.


For project prioritization, chances are good that for at least one objective and its corresponding performance measure, maximum and/or minimum (negative) performance levels will be unbounded. For example, the net revenue produced by the project is not necessarily bounded by any value. In some such cases, it may be possible to define limits to performance based on practical considerations, in which case the associated single attribute value function can be normalized between zero and one. Alternatively, you can decide not to normalize and rely on the weights to make the different units of value comparable.
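The renormalization problem described above is easy to demonstrate. The sketch below normalizes the hypothetical "opportunity win rate" measure over its theoretically possible range (a percentage is bounded by 0 and 100), so scores remain comparable across years; the win-rate numbers are made up for illustration:

```python
# Sketch: normalize a single-attribute value function over the theoretically
# possible range rather than the range observed in any one application.

def normalize(x, lo, hi):
    """Map performance x in [lo, hi] onto the 0-to-1 value interval."""
    return (x - lo) / (hi - lo)

# Win rate is a percentage, so 0 and 100 bound all possible performance.
theoretical = (0.0, 100.0)

this_year = [22.0, 35.0, 41.0]   # illustrative observed win rates
next_year = [18.0, 55.0]         # falls outside this year's observed range

# Normalizing over (min(this_year), max(this_year)) would put next year's
# 18 below 0 and 55 above 1; the theoretical range handles both years
# without renormalizing, so scores stay comparable.
for rate in this_year + next_year:
    print(rate, "->", normalize(rate, *theoretical))
```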

Linear versus Nonlinear Single-Attribute Value Functions

Regardless of whether or not single attribute value functions are normalized, an important role for the functions is to account for non-linearities in the relationship between units of performance and the value of that performance. Figure 34 illustrates some possible shapes for (normalized) single attribute value functions intended to serve as scaling functions [4].

Scaling function shapes

Figure 34:   Alternative shapes for scaling functions.


Figure 35 below shows a frequently seen shape for single-attribute value functions, one that is intuitive given the kind of performance being measured [4]. Although the performance level is theoretically unbounded, each additional unit of performance produces decreasing marginal value, so the value function asymptotically approaches its maximum.

Example scaling functions

Figure 35:   Intuitively shaped scaling functions.


Assessing Single-Attribute Value Functions

The best method for assessing a single attribute value function depends on its qualitative characteristics. For this reason, as Keeney and Raiffa advise, begin by determining the qualitative characteristics of the function [6].

Establish Qualitative Characteristics First

Qualitative characteristics include such basic considerations as whether the performance measure is continuous or discrete, whether it is bounded at the low end or high end, and whether preference increases or decreases as the number expressing the level of the performance measure increases. You should also be able to determine whether the function is likely linear in units of the performance measure and, if not, whether changes in performance matter more for low or high performance levels.

To provide an example, suppose you are prioritizing projects that impact environmental objectives and that one of the performance measures is visibility. One measure of visibility used by meteorologists is prevailing visibility, defined as the maximum horizontal distance that can be seen throughout at least half the horizon circle. In ideal weather conditions, it is possible to see moderately sized objects up to about 12 miles away, although at sea level the curvature of the earth gets in the way at about 3 miles. So a performance measure for visibility relevant to a ship's captain is bounded at the low end at zero miles and at the high end at about 3 miles. Whether the single-attribute value function is concave or convex (as indicated in Figure 34 by the red or green curves, respectively) depends on whether an amount of visibility improvement, say one-quarter mile, is more desirable near the low or high end of the scale. Most people, I assume, would find an additional quarter mile of visibility more desirable when visibility is very poor than when visibility is already pretty good. Thus, a single-attribute value function for visibility is likely to be concave, similar to the red curve shown in Figure 34.
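A concave value function with these properties can be sketched directly. The exponential form and its rate constant below are illustrative assumptions; any shape that is normalized over the 0-to-3-mile range and shows diminishing marginal value would serve:

```python
import math

# Hedged sketch of a concave value function for prevailing visibility:
# an extra quarter mile matters most when visibility is poor.

MAX_VISIBILITY = 3.0  # miles; sea-level horizon limit per the text

def v_visibility(x, rho=1.0):
    """Concave exponential form, normalized so value is 0 at zero
    visibility and 1 at 3 miles. rho (miles) is an assumed rate constant."""
    return (1 - math.exp(-x / rho)) / (1 - math.exp(-MAX_VISIBILITY / rho))

# Diminishing marginal value: a quarter mile gained near zero visibility
# is worth far more than the same quarter mile gained near 3 miles.
low_gain = v_visibility(0.25) - v_visibility(0.0)
high_gain = v_visibility(3.0) - v_visibility(2.75)
print(low_gain > high_gain)
```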

The answers to these basic qualitative questions can usually be determined by the analyst without needing to question a decision maker. If there is doubt, you can, of course, pose some preference questions to the decision maker. In my experience, so long as you have some understanding of the organization's basic interests and means of measurement, you can answer these sorts of questions on your own.

Methods for Determining Single-Attribute Value Functions

I use variations of four methods, depending on the characteristics of the function:

  1. Assess enough points to sketch the function. Several methods are available for assessing single-attribute value functions [6, 7, 8, 9, 10]. The methods are fairly efficient, though tedious if many points on the curve must be assessed. The major downside is the requirement for time and attention from a decision maker. To reduce the burden on decision makers, you could, as suggested above, apply the assessment process with yourself as the subject, and then check the results with a decision maker.
    • Bisection. As described earlier, the bisection method requires the subject to repeatedly identify performance levels with a perceived value halfway between the values of previously valued performance levels. Bisection requires that the value function be defined over a continuous range of performance, and it is the method most often used in this case [11].
    • Direct rating. Values of zero and 100 are assigned to the least and most desired performance levels as anchor points. The values of other performance levels are simply rated, from zero to 100 in terms of desirability. Direct rating is most often used for constructed scales involving purely subjective measures, such as levels of aesthetic appeal.
    • Equivalent cost. A variation of the direct rating approach, similar in spirit to cost-benefit analysis, wherein the subject estimates a dollar-equivalent value for each level of performance. It is useful when the performance measures are discrete and relate to things that may be purchased in the marketplace.
    • Difference standard sequence. After identifying and assigning zero to the least preferred performance level, the subject identifies a sequence of increments in performance from the least preferred level that are of equal incremental value. The approach requires the function to be continuous.
    • Ratio estimation. A relative value of 100 is assigned to the least desired performance level, and the subject provides values for other performance levels by expressing, as a percentage, how much more desirable each is than the least desirable level. As a final step, the value numbers can be rescaled between zero and one (or zero and 100). Ratio estimation is used mostly for discrete functions where all performance levels have some desirability.
    • AHP. Though AHP is most often used to assess weights, it can also be used to assess points for a value function. One approach is to divide the range into segments and then use AHP to estimate the relative value of each segment [3]. The advantage of AHP is the opportunity to use its eigenvector approach to deal with response inconsistencies, which are especially likely if multiple subjects participate in the assessment process.
  2. Curve fitting. Another obvious approach is to select a mathematical form for the utility function, obtain a few points on the curve through the assessment process, and then fit the assumed mathematical form to the data [12]. The major advantage of curve fitting is being able to reduce the number of points that need to be assessed. Some of the common forms assumed for single-attribute functions are listed in the appendix.
  3. Linear in fundamental units. If the attribute is expressed in units of something that is of value in itself, as opposed to valued for its uses, then a linear single-attribute value function is typically justified [10]. This situation occurs frequently in practice. For example, if protecting public or worker safety is an objective, and number of lives saved is the associated performance measure, value would likely be proportional to the number of lives saved (the value of saving the second, third, or fourth life ought to be no different from the value of saving the first life). If the performance measure simply counts the number of units of something with fundamental value, then the scaling function will most likely be linear. In the case where the relationship between value and units of performance is linear, I typically leave the single-attribute value function out and capture the transformation between units of performance and units of value via the weight.

    A related case is the situation where value is linear in the units of performance, but a scoring scale is used that is nonlinear in those units. In such cases, a single-attribute value function is needed to "undo" the non-linearity imposed by the scoring scale. As an example, the figure below shows a scoring scale created for a system for prioritizing projects conducted by an electric utility. The scale indicates the amount of electric energy demand satisfied by a project, for example, a project to construct a new distribution line that will provide power to new customers. Such a project creates value because it enables new customers to purchase electricity at a price below what it is worth to them (consumer surplus). Because projects providing service to new customers can vary in size over many orders of magnitude, the scale is logarithmic: each level of the scale satisfies roughly ten times as much customer demand as the level below. Assuming value to customers is linear in the amount of new energy delivered, the associated exponential scaling function translates the scale level into a linear measure of value.
    Example scaling function

    Figure 36:   Example constructed scale and associated scaling function (single-attribute value function).


  4. Capture a known preference non-linearity. Sometimes, the specification of a scaling function is an opportunity to capture some known or suspected characteristic of preferences. The glossary provides an example, repeated here, also relevant to prioritizing projects for an electric utility. In this case the single-attribute function deals with the relative loss of value customers experience from power outages of various durations. Based on company data showing the distribution of the durations of outages within various service areas, the utility can use the distributions and the value function to compute the total value lost by customers due to extended-duration outages.

    An example scaling function

    Figure 37:   A scaling function designed to capture non-linearity in preference.


    In this example, the shape of the value function was chosen to account for the fact that outages lasting longer than 6 to 8 hours are disproportionately costly to customers. The reasoning is that a long-lasting outage may cause customers' refrigerated food to spoil. Given the chosen shape of the curve, a project that would reduce outages that normally take 12 hours to correct down to 2 hours would increase value by 90 - 10, or 80 utiles. In contrast, a project that would eliminate an outage that normally lasts 2 hours would provide only 100 - 90, or 10 utiles. Thus, projects that address the longest-lasting outages are valued more highly.
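The logarithmic-scale case in method 3 can be made concrete with a short sketch. The scale levels, the base demand figure, and the five-level range below are illustrative assumptions, not the utility's actual scale:

```python
# Sketch of method 3's special case: a constructed scoring scale that is
# logarithmic in energy delivered, and the exponential scaling function
# that "undoes" it so value is linear in energy.

BASE_MWH = 100.0  # assumed demand (MWh/yr) satisfied at scale level 0

def demand_from_score(level):
    """Each scale level serves ~10x the demand of the level below."""
    return BASE_MWH * 10 ** level

def scaling_function(level, max_level=4):
    """Exponential in the score, so that value is linear in energy
    delivered; normalized to 1 at the top of the assumed 0-4 scale."""
    return demand_from_score(level) / demand_from_score(max_level)

for level in range(5):
    print(level, demand_from_score(level), round(scaling_function(level), 4))
```

Because each step up the scale multiplies demand by ten, a linear treatment of the raw scores would badly understate the value of the largest projects; the exponential scaling function restores proportionality.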

The next page provides advice for creating a consequence model for simulating the consequences of conducting a project.

References

  1. J. von Neumann and O. Morgenstern, Theory of Games and Economic Behavior, Princeton University Press, 1947.
  2. R. L. Keeney and H. Raiffa, Decisions with Multiple Objectives, Wiley, New York, 1976.
  3. V. Belton and T. J. Stewart, Multiple Criteria Decision Analysis: An Integrated Approach, Kluwer Academic Publishers, Dordrecht, 2002.
  4. T. L. Ramani, L. Quadrifoglio, and J. Zietsman, "Accounting for Nonlinearity in the MCDM Approach for a Transportation Planning Application," IEEE Transactions on Engineering Management, 57(4) 702-710, 2010; J. Wallenius, J. S. Dyer, P. C. Fishburn, R. E. Steuer, S. Zionts, and K. Deb, "Multiple Criteria Decision Making, Multiattribute Utility Theory: Recent Accomplishments and What Lies Ahead," Management Science, 54(7) 1336-1349, 2008.
  5. R. P. Hamalainen, "Introduction to Value Tree Analysis," eLearning Resources, Helsinki University of Technology Systems Analysis Laboratory, http://www.eLearning.sal.hut.fi, p. 4, 2002.
  6. R. Keeney and H. Raiffa, Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York, NY, USA: Wiley, 1976.
  7. J. S. Dyer and R. K. Sarin, "Measurable Multiattribute Value Functions," Operations Research 27(4) 810-822, 1979.
  8. P. H. Farquhar, "State of the Art—Utility Assessment Methods," Management Science, 30(11) 1283-1300, 1984.
  9. R. L. Keeney, "The Art of Assessing Multiattribute Utility Functions," Organizational Behavior and Human Performance 19(2), 267-310, 1977.
  10. R. L. Keeney and D. von Winterfeldt, "Practical Value Models," in Advances in Decision Analysis From Foundations to Applications eds. W. Edwards, R. E. Miles, Jr. and D. von Winterfeldt, Cambridge University Press, New York, 104-128, 2007.
  11. J. C. Fast and L. T. Looper, Multiattribute Decision Modeling Techniques: A Comparative Analysis, Metrica Inc., San Antonio, TX, 1988.
  12. R. L. Keeney, "The Art of Assessing Multiattribute Utility Functions," Organizational Behavior and Human Performance 19(2), 267-310, 1977.