The Decision Protocol is intended to be a tool to help US Forest Service decision teams work through complex business and environmental decisions. It is an administrative aid that introduces the professional to the principles of decision science, outlines useful steps, and provides sources of information and techniques for improving decision quality. The Protocol is not, and should not be viewed as, formal Forest Service guidance or policy. Forest Service teams are not required to use the Protocol; its recommendations are not legally binding. Members of the public or other agencies are welcome to participate in Protocol-based projects or use the Protocol or any of its concepts or parts, but their use is strictly voluntary. The Forest Service is not responsible for the consequences of application or misuse of the Protocol outside the agency.



* Describe and quantify the expected consequences of the alternatives.

* Characterize the validity, reliability, and uncertainty in the consequence predictions.


* Description, by resource attribute or effect, of the information base, major elements of uncertainty, prediction comfort levels, and new information to be collected.

* Matrix of expected consequences for each alternative.


* Agree on measures for each attribute of the objectives or side effects.

* Evaluate the information with which to predict the consequences.

* Describe uncertainties that could influence actual outcomes.

* Describe and evaluate information that would improve predictions.

* Describe important cause-effect relationships between activities and measures.

* Set acceptable levels on measure values.

* Predict consequences for each alternative.

* Describe rationale for predictions.

* Identify consequences that are not acceptable or otherwise significant.

* Cycle back to DESIGN and refine activities.


Put a check beside each statement below that is true about any analysis of consequences you have already begun. For each unchecked statement, work through the suggested CORE QUESTION and/or describe what should be done to bring that part of the consequences analysis "up to grade". If you check fewer than half of the statements, work completely through the CONSEQUENCE cycle questions.

____ The consequences are defined in measures that reliably indicate change in the situation. If not, go to CONSEQUENCES question 6.

____ The levels of acceptability placed on potential consequences are clear and defensible. If not, go to CONSEQUENCES Question 7.

____ The range of consequences analyzed is thorough. If not, go to CONSEQUENCES Questions 1-3, 6, and 8.

____ The consequences of taking no action are clearly described. If not, go to CONSEQUENCES Questions 8 and 9.

____ The analysis fully considers the needs of special environmental, human, or other values. If not, go to CONSEQUENCES Question 6.

____ The predictions of environmental and other consequences are consistent with the facts and information documented in the analysis. If not, go to CONSEQUENCES Questions 1-5, 8, and 9.

____ The analysis considers interactions with activities and consequences at larger and smaller geographic scales. If not, go to CONSEQUENCES Questions 4 and 8.

_____ The analysis anticipates the influence of large-scale disturbances and natural events. If not, go to CONSEQUENCES Question 4.

_____ The analysis anticipates the influence of uncertain events of social, political, managerial, and economic origin. If not, go to CONSEQUENCES Question 4.

_____ Public and stakeholder issues and concerns about consequences are addressed in the analysis. If not, go to CONSEQUENCES Questions 6 and 7.

_____ The analysis clearly distinguishes between facts and values that influence the consequence predictions. If not, go to CONSEQUENCES Questions 1-5, and 8.

_____ The analysis adequately integrates published scientific information with the knowledge of professional resource managers and specialists. If not, go to CONSEQUENCES Questions 1-3.

_____ The analysis process interprets and uses experiences from similar projects. If not, go to CONSEQUENCES Questions 1-3.

_____ New information that might influence the predictions of possible consequences is clearly described. If not, go to CONSEQUENCES Question 5.

____ The analysis confronts and explains sources of uncertainty. It clearly describes what is not known and the implications of this lack of knowledge. If not, go to CONSEQUENCES Questions 3 and 4.

_____ Analysis team members are candid about their biases and their limitations that could influence the accuracy of the consequence predictions. If not, go to CONSEQUENCES Question 9.

_____ Dollars spent on information collection and analysis are efficiently allocated. If not, go to CONSEQUENCES Question 5.

_____ Reasons for not conducting detailed reviews of some alternatives are clearly explained. If not, go to CONSEQUENCES Questions 10 and 11.

_____ Predictions and judgments about acceptable consequences are documented so that future managers can learn from them. If not, go to CONSEQUENCES Questions 6-9.



CONSEQUENCES Question 1. What information is available to help characterize and predict consequences?

For each measure on which you choose to predict consequences, describe the information and data that are available. Consider:

Databases: Geographic information systems, monitoring programs, cooperative studies with researchers or other agencies.

Scientific Literature: Computerized literature searches, journal and government publications, ongoing research at agencies and universities, and other sources. Include material that presents opposing theories and scientific disagreement. These divergences may be important in informing and refining the final decision.

Expert Judgment: Identify experts and describe:


* The basis of their expertise -- applied research, basic research, managerial consulting, other.

* How well-regarded they are, and by whom.

* How relevant their specialties are to the attributes and the cause-effect relationships.

* How experienced they are in the kinds of systems or decisions that will be made.

Similar Projects: Locate projects or programs with activities similar to those in your alternatives. Determine:


* Whether the consequences of these projects match what you suspect might happen in your case.

* What the greatest similarities are.

* What the major differences are.

* What activities of these projects have been the most difficult and/or expensive to evaluate.

* What consequences the predictions have been the most and least accurate about.

Public Perspectives:


* What evidence, theories, or expert judgments the public can provide.

* The credibility and objectivity of the sources.

CONSEQUENCES Question 2. How certain (confident) are you that this information is a good basis for accurately predicting consequences?

For each measure, rate (0 = none to 10 = highest) your confidence level (degree of certainty) in the ability of the information and expertise to predict the value of the measure at the time and geographic scale you have chosen. Do not confuse the ability to predict with your ability to measure, monitor, or control. Be specific about whether your rating reflects confidence in predicting the direction of change in the measure or its end value. You may need to work through CONSEQUENCES Question 6 (below) to reach final agreement on the full set of measures that will cover not only the objectives but also any side effects. In making your ratings, consider:


* Reliability of the information sources

* How clearly the measure has been specified

* The natural variability of the measure

* Your personal experience with the measure

* Your biases in making such predictions

Record your confidence ratings (0-10) in CONSEQUENCES Summary Table 1 (after Question 5).

CONSEQUENCES Question 3. What are important gaps in knowledge for predicting consequences?

Describe the kinds of knowledge or expertise that would be required to increase your confidence to an acceptable level.

Consider improvements in the following:

* Ability to measure the attributes.

* Ability to extrapolate data in time and space.

* Availability of baseline data.

* Adequacy of models or experts to explain relationships and variability.

* Disagreements among specialists about cause-effect relationships.

For each measure, list the knowledge gaps in CONSEQUENCES Summary Table 1 (after Question 5).

CONSEQUENCES Question 4. What uncertain events could confound your predictions?

Describe events or possible conditions that could cause outcomes to differ from your predictions.


* Natural disturbances: fire, windstorms, floods, insect or disease outbreaks, earthquakes, and others.

* Scientific discoveries

* Political events

* Economic conditions

* Social conditions

* Administrative changes

* Other categories

For each measure, list uncertain events in CONSEQUENCES Summary Table 1, Information Needs (after question 5).

CONSEQUENCES Question 5. What information is worth acquiring to improve your predictions? What would it cost?

List the information (if any) that should be collected to improve your degree of confidence to an acceptable level. Consider:


* Knowledge gaps and uncertain events (from above)

* Whether this new information would greatly change the predictions.

* Whether your predictions would be more solidly confirmed.

* The deciding officer's confidence in using the predictions to select an alternative. This also applies to stakeholder groups, agencies, or organizational units that will legitimize the decision.

* Whether it is possible to achieve the information confidence level you desire.

Estimate the cost in time, money, and effort to obtain this new information.

Explain why this information should be obtained. List the benefits in prediction improvement or other areas and explain why the benefits are worth the cost of the new information.

Display key information needs, including cost and rationale for acquisition in CONSEQUENCES Summary Table 1.

CONSEQUENCES SUMMARY TABLE 1. Information needs (CONSEQUENCES Questions 1-5)

Columns (one per measure): Objectives measure 1 | Side effects measure 1

Rows:

* Information source(s) (C-1)

* Confidence rating (0-10) (C-2)

* Knowledge gaps (for prediction) (C-3)

* Uncertain events (C-4)

* Key information needs (C-5)

* Cost of information collection (C-5)



CONSEQUENCES Question 6. What measures will you use to predict consequences of the alternative actions?

For every situation component or attribute that might be affected by the action, specify a measure that will be useful in judging differences among actions.

In the PROBLEM cycle you defined objectives in terms of situation components, their attributes, and measures by which you could gauge success. The CONSEQUENCE cycle asks you to confirm and clarify those measures as well as define measures for the unintended consequences (side effects) of implementing the action.

To be useful in predicting and characterizing consequences, measures should be:

Understandable - Decision makers and stakeholders must be able to understand what the measure is and how it characterizes the attribute.

Quantifiable - Capable of being quantified or classified into categories, with levels, values, or other units of measure. A scale of measure can be natural (e.g., temperature), proxy (e.g., status of an ecological indicator species) or constructed (e.g., fire danger index). A constructed scale may combine many measures into a single series of classes or levels. A common example is the "heart risk factor index" that uses genetic and lifestyle factors to evaluate the relative healthiness of the human cardiovascular system.

Sensitive - Responsive enough to environmental influences and management activities to show changes in the attribute.

Measure values exist in several dimensions:

Magnitude - value per unit of time or space.

Extent - span of influence in terms of geographic area, structural characteristic, functional process, or some other scale.

Duration - length of time the value (magnitude, extent, etc.) will continue.

Likelihood - the probability of a value becoming a reality.

Speed - the time to reach a value.

Defining a problem or predicting consequences involves choosing measures whose values are changing or will change because of the action. Usually, one or two dimensions change while the other dimensions stay constant. You either know or will have to make assumptions about the values of the constant dimensions. The variable dimension is the focus for problem framing and consequence analysis, but later your analysis should be tested against the assumptions you make here.

For example, one measure may be the acreage of a lake (extent) over which a chemical will achieve some minimum concentration (magnitude) for a certain period (duration) with 95% likelihood (likelihood) in 3 years (speed). In this example, the acreage (extent) is the measure to be predicted while the values of the other dimensions are assumed constant. Another example would be the years (speed) for streams in a certain watershed (extent) to achieve a distribution of channel stability classes (magnitude) for 100 years (duration) with 90% likelihood. The prediction would be limited to the speed of attainment.

A measure can be two-dimensional, presented as a two-axis system with one dimension, say "condition AAA," on the y-axis and "years to reach" a stage on the x-axis. The two-way measure can be assessed as current (pair of values) and proposed (pair of values), representing a predicted shift because of the proposed action.

Some attributes seem impossible to quantify. Those can be addressed with a "normalized" subjective value scale, assigning 0 (or 1) to the worst condition for that attribute and 100 to the best condition.
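As a minimal sketch of such a normalized scale (the function name and the endpoint values are illustrative, not part of the Protocol):

```python
def normalize(value, worst, best):
    """Map a raw attribute value onto a 0-100 subjective scale:
    `worst` maps to 0 and `best` maps to 100 (hypothetical endpoints)."""
    if best == worst:
        raise ValueError("worst and best must differ")
    score = 100.0 * (value - worst) / (best - worst)
    # Clamp so raw values outside the stated endpoints stay on the scale.
    return max(0.0, min(100.0, score))
```

The same sketch works when lower raw values are better (e.g., a sediment load): pass the high raw value as `worst` and the low one as `best`, and the scale reverses automatically.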

For each objective and anticipated side effect, record a measure in CONSEQUENCES Summary Table 2 (after Question 7).

CONSEQUENCES Question 7. What are acceptable consequences on these measures?

For each measure, describe acceptable limits (threshold values) beyond which the consequences of the action are unacceptable. Measure values that exceed acceptable limits can signal devastating and irreversible consequences.

You will use these thresholds to evaluate each action, prioritize refinements, and compare the alternative actions. Activities whose consequences stay within acceptable limits may not need to be modified or mitigated. Uncertainty in predicting consequences may also be justification for refinement or elimination of an alternative, if violating a threshold is a possibility.

For each measure, describe acceptable limits in CONSEQUENCES Summary Table 2. If appropriate, describe lower (minimum) and/or upper (maximum) values of acceptability.

For measures defined in the objectives of the action, the acceptable limit may be set at the desired value of the measure. A consequence not reaching this desired value would be unacceptable. For other measures, you may wish to set a safe minimum above which you believe losses will remain acceptable. You can also establish a range of values from safe minimum to desired.

Describe the source and rationale for these thresholds. Why are the limits set at these values?

Limits of acceptability are derived from legal statutes, published standards and guides, the scientific literature, tradition, operating rules-of-thumb, safety programs, and professional judgment. Collaboration with stakeholders can help to establish, clarify or confirm limits of acceptability.

CONSEQUENCES SUMMARY TABLE 2. Acceptable limits (thresholds) (CONSEQUENCES Questions 6 & 7)

Columns: Measure A - Project objectives | Measure B - Side effect

Rows:

* Lower (minimum) value

* Upper (maximum) value

* Target (objectives) value

* Sources and rationale for limits

CONSEQUENCES Question 8. What are the cause-effect relationships between the proposed activities and the consequence measures?

Describe the important connections between the activity and the measures selected.

Use a diagram (See Tools) to show how specific measures are affected by activities of the proposed action and other influences such as other projects and the natural and human influences you described as uncertainties in the PROBLEM cycle. Recognize that the proposed activity may be repeated or interact with other activities to produce cumulative consequences that are different than the sum of the simple effects.

Describe the relative strength and nature of these influences, especially the cumulation of several activities or events on a measure.

Describe cumulative effects of these influences in CONSEQUENCES Summary Table 3, Causes and Cumulative Effects.

Consider for each measure and alternative the following interactions and influences:

* Current activities and influences in the absence of the proposed action (from the cause/effect diagram).

* Simple (direct and indirect) influences between the activity and the measure

* Activities within the alternative. Groups of activities that are parts of the same proposed action may interact to change the value of the measure. Consider whether the combination of effects on the measure is additive, multiplicative (greater than the sum of the simple effects), or compensatory (less than the sum or even less than the simple effect).

* Other actions and projects. List activities from other projects that might interact with those in your alternative(s) to influence the measure.

These activities can be periodic, ongoing, or planned. Interactions can occur within or outside the analysis area.

* Disturbances. Describe likely disturbances or uncertain events and their interaction with activities.

* Pending or foreseeable decisions. Describe proposed policies or actions that could interact with activities in your alternative(s).

CONSEQUENCES SUMMARY TABLE 3. Causes and cumulative effect descriptions (CONSEQUENCES Question 8)

Columns: Measure A | Measure B | Measure C | Measure D

Rows:

* Current and historical influences

* Activity that affects the measure and direction (+ or -)

* Activity that affects the measure and direction (+ or -)

* Interaction of activities (additive, multiplicative, compensatory)

* Activity/influence and direction of adjustment (+ or -)

* Other projects and activities

* Planned and pending actions

* Foreseeable actions and activities

* Disturbances and uncontrollable events

* Net cumulative influence or direction

* Cause-effect comments and description

CONSEQUENCES Question 9. What will the values of the measures be if the proposed activities in the alternative are implemented?

Predict the future value of each measure under the influence of the activities in each alternative. Consider the interactions and the cause-effect relationships you described in CONSEQUENCES Question 8. For each measure:

Estimate the current value.

Predict the future value without any of the activities in the alternative. This is the No-Action alternative. Be sure to account for changes that occur anyway.

Predict the future value under the influence of the activities proposed in the alternative. Consider:


* How consequences from recurring activities may accumulate, perhaps dramatically, into unacceptable changes.

* Consequences that may be manifest across different geographic or organizational spans.

* Consequences that may be felt outside the analysis area.

Calculate the difference between the future value of the measure without the alternative and its future value with the proposed activities. This difference is the effect.
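The calculation itself is a simple difference; a short sketch, with a hypothetical measure and values:

```python
def effect(future_with_action, future_without_action):
    """Effect of an alternative on a measure: its predicted future value
    with the proposed activities minus its No-Action future value."""
    return future_with_action - future_without_action

# Hypothetical example: percent stream shade predicted at 60 under the
# alternative versus 75 under No Action.
shade_effect = effect(60.0, 75.0)  # -15.0 percentage points
```

Keeping the No-Action value explicit, rather than comparing against the current value, credits or charges the alternative only for changes it actually causes.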

For each measure, record the current value, the future value (consequence) without the proposed action, and the future value (consequence) under the alternative in CONSEQUENCES Summary Table 4 (after Question 11).

CONSEQUENCES Question 10. Which predicted values exceed acceptable levels?

Identify those measures whose future values under the alternative would be unacceptable.

Trace these unacceptable consequences back to activities. These activities will be high priority for DESIGN refinement.

CONSEQUENCES Question 11. Should the alternative be refined or eliminated from further consideration because of unacceptable consequences? How should it be refined?

For each activity that causes unacceptable consequences, go back to the DESIGN cycle, Questions 7-10. There may also be some key uncertainties to resolve so also recycle through the Information and Uncertainty Evaluation part of the CONSEQUENCE Cycle (Questions 1-5) to see if you need to collect new information.

Refine the alternative using these questions and then cycle forward through the CONSEQUENCE cycle to see if the refined alternative still produces unacceptable consequences. If there are still unacceptable consequences, determine whether these consequences are important enough to warrant eliminating the alternative from further consideration. This may require discussion with the deciding officer about tradeoffs and/or a reevaluation of the severity of the acceptable limits themselves.

List any alternatives that will be eliminated because of unacceptable consequences and your rationale for their elimination.

For each alternative that produces unacceptable consequences, record activities to be refined and suggested refinements in CONSEQUENCES Summary Table 4 (following).

CONSEQUENCES SUMMARY TABLE 4. Consequence predictions (CONSEQUENCES Questions 9-11)

Columns: Measure A | Measure B | Measure C | Measure D

Rows:

* Current value

* Future value without proposed action

* Future value with the proposed action (cumulative effect of all activities and interactions)

* Change resulting from the proposed action

* Exceed acceptable limits? (Yes/Maybe/No)

* Activities and interactions most influential

* Activities to be refined

* Refinements suggested





Following are some tips and aids for working through selected CORE QUESTIONS in each of the cycles. Not every CORE QUESTION has a tip or tool, but as the Decision Protocol matures and your experience with it grows, we hope to add to the repertoire.


Note: There are no tips or tools for CONSEQUENCE Questions 3-6, 10-11.

CONSEQUENCES Question 1. What information is available to help characterize and predict consequences?

Your team should consider how much and what kinds of information it really needs. Information is always imperfect. Not all information is of equal value to the decision process, but all information has a cost. Question your sources of information. Be sure the information is of the highest quality for the cost and makes a difference. Refer to the causal diagrams. Home in on pivotal missing information.

Ask the team, including the deciding officer, to describe what they would need to be 100% confident. Expect differences among individuals on the team. For example, there may not be enough information about the alternatives for the deciding officer, who may want more and different information as the actual choice approaches and the tradeoffs become more visible and acute. The analysts on the team may want to protect their reputations by displaying large amounts of information in the supporting documentation, even information that is not key in showing differences among the alternatives. New members may be more unsure than older hands and require more information.

Where would new information give different values for explanatory variables?

How would it change the values they predicted for the consequences of the proposed actions?

Which of the pieces of information would create the largest changes in the predicted values for the cost invested in them?

If the new information would not make a difference, why is it needed?

Ask for evidence of over- or underestimation of information needs in past experiences. How have team members handled similar questions in the past, and what were the outcomes? What lessons did they learn?

Ask the team whether there are any better ways to get the same information. Are there possible sources that would be less perfect but cheaper? Could they let some things happen and then adjust the activities? This would be the beginning of an adaptive response strategy.

CONSEQUENCES Question 2. How certain (confident) are you that this information is a good basis for accurately predicting consequences?

Beware of overconfidence bias. Many people will say they are completely confident in their predictions, yet those predictions are in fact far off the mark. The team should be able to predict the measure and explain how they made the prediction.

Ask members to describe what their level of confidence or certainty means. Why are they comfortable or uncomfortable with the information?

As a check on judgment, generate information that tries to disconfirm the predictions. If the expert "falls apart" or becomes defensive, he/she may not be as comfortable with the prediction as he/she thought.

CONSEQUENCES Question 7. What are acceptable consequences on these measures?

This part of the Decision Protocol is meant to help prioritize refinement efforts. The results do not imply that one attribute or consequence (change in a measure) is more important than another.

Remind the team that acceptable limits are placed on the values of measures, not on levels of activities. Setting limits on measures before they predict the effects of the alternatives helps them avoid biasing the prediction with preconceived notions about the activities.

Acceptability is an expression of "how safe is safe enough" (Derby and Keeney 1981, Fischhoff et al. 1981). It is influenced by social, political, and personal values as well as biological and physical realities. Acceptable levels do not necessarily make all stakeholders happy. Some team members and stakeholders will opine that for some measures only zero or near-zero levels are acceptable. This very risk-averse, or conservative, posture is based on the belief that large uncertainties and possibly irreversible consequences need to be minimized.

Acceptable limits apply to minimum allowable as well as desired levels. Ask the team to think through why this level (or these levels) was chosen.

What would be the consequences of falling on either side of this level?

How conservative is this level? Were the team members or the policy-makers too risk averse in setting acceptability at this value?

If the team wants to revise the level of acceptability, document the reasons.

Disagreement about acceptable levels may be high. Determine whether the extremes held by opposing experts or stakeholders intersect with where the predicted values might fall. The team may not have to worry about the disagreements if the predicted values satisfy all competing levels.


Ask the team to build a consequence "instrument panel," analogous to an automobile dashboard that informs you whether vital processes in your auto are within acceptable ranges. Under this analogy, you imagine a display of instruments, each describing a global range, an acceptable range, and a range created by the activity. Follow these instructions:

(1) For each measure, establish a global range from the lowest to the highest (worst to best) possible values.

(2) For each global range, establish an acceptable range. This range will reflect the team's judgment about the thresholds. The width of the acceptable range indicates the range of tolerance for the measure.

(3) Estimate the range of values (consequences) that will be expected to result from the alternative. Establish this range as a confidence interval within which you would be accurate, e.g., 95% of the time.

(4) Estimate the extreme (worst and best) consequences of the range first, followed by the most likely outcome. Think about the state of information about the measure and the alternative. Extend your range if the natural variability is great or you are highly uncertain about the consequences.
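Reading a gauge on this instrument panel amounts to comparing two intervals. A minimal sketch, assuming each range is expressed as a (low, high) pair; the function name and status labels are illustrative:

```python
def gauge_status(acceptable, predicted):
    """Compare the predicted consequence interval for one measure
    against the team's acceptable range (both are (low, high) pairs)."""
    acc_lo, acc_hi = acceptable
    pred_lo, pred_hi = predicted
    if acc_lo <= pred_lo and pred_hi <= acc_hi:
        return "within acceptable range"
    if pred_hi < acc_lo or acc_hi < pred_lo:
        return "entirely outside acceptable range"
    # The intervals overlap only partly.
    return "partly outside acceptable range"
```

A "partly outside" reading flags the uncertainty case noted in CONSEQUENCES Question 7: the threshold could be violated even when the most likely outcome is acceptable, which may itself justify refining the activity.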

CONSEQUENCES Question 8. What are the cause-effect relationships between the proposed activities and the consequence measures?

Many if not most of the causal descriptions will have been started in PROBLEM Question 8. This question expands the range of measures and asks for a more detailed description of the linkages, not only to the objectives measures but also to any measures of consequences not accounted for in the objectives (so-called side effects).

Get people thinking. Draw out relationships on the blackboard or easel to create a lively interchange. Some may be reluctant to commit to a display if they feel they know too little. Emphasize that the inexactness they feel should be identified in these diagrams.

Start by drawing familiar "cycles" such as fire, soil, water, insect/disease, human demographic, and other systems. Challenge the team to bring these cycles together in an integrated scheme, showing where they intersect to cause consequences. It may be difficult to capture everything that is being discussed, so have the team review each of the components and influence "arrows" and write out how xxx influences yyy.

Encourage debate. Ask one person or subgroup to trace through the causal relationships with one explanation; another to offer an alternative trace. Let them debate and provide scenarios in which both explanations could be true and important.

Post the causal diagram at each of the team's sessions. Refer to it frequently. Use it to direct questions and as a map to help sort out confusing situations.


One approach to specifying interactions is to build an interactions chart:

(1) List activities across the top of a flip chart page.

(2) List measures or attributes across the bottom.

(3) Connect the activities to the measures for which there are possible consequences.

(4) Designate for each activity-measure link the direction of the effect - positive (+) or negative (-).

(5) Designate the portion of the total effect on a single measure that would come from each activity-to-measure arrow. For example, the negative effect of a set of activities on fish habitat may come from salvage (-40%), roading (-40%), and prescribed burning (-20%) for an estimated consequence of a reduction of 10 units of habitat, but there may be a compensating influence from reforestation (+90%) for a net interactive effect of about -1 unit.
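The arithmetic in step (5) can be checked with a short sketch; the activity names and shares are the hypothetical ones from the fish-habitat example:

```python
# Gross loss of 10 habitat units, apportioned among the activities.
gross_loss = -10.0
shares = {"salvage": 0.40, "roading": 0.40, "prescribed burning": 0.20}
per_activity = {name: gross_loss * share for name, share in shares.items()}

# Reforestation compensates for 90% of the gross loss.
reforestation = -0.90 * gross_loss       # +9.0 units
net_effect = gross_loss + reforestation  # about -1.0 unit
```

Note that a 90-percent compensation on a 10-unit loss leaves a net loss of about 1 unit; tracing each arrow's share this way makes such compensating interactions visible.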

Alternatively, try one of the more structured approaches for describing causal patterns. These include "if-then" rulebases, influence diagramming and Bayesian belief networks (Shachter 1986 and 1987), soft systems diagrams (Checkland 1981), system cycles diagrams (Senge 1990), fault and event trees (Sundararajan 1991), probabilistic scenario modeling (Russo and Schoemaker 1989), and scenario planning techniques (Wack 1985, Geus 1986).

CONSEQUENCES Question 9. What will the values of the measures be if the proposed activities in the alternative are implemented?

This is where the specialists or outside experts may have to rely on their judgment, models, extrapolations, or other sources of information to make their predictions.

Ask the team, or the appropriate specialists, to express predictions as ranges of values, discrete outcomes, or even scenarios. Assign likelihoods to different values to capture their level of uncertainty.


Capturing the expert's thinking while he/she is making the predictions is important. Refer the expert to the cause-effect relationships and have him/her "think aloud," tracing a path through the diagram. Help the expert(s) break the judgment down into component events that are more frequent and more easily predicted.

A popular expert judgment approach is the expert panel. Group interaction provides feedback, learning, and the opportunity to average out compensating biases. The downside of panel predictions is "groupthink," the tendency for group members to agree because they want to be cooperative or fear reprisals (Janis 1972). Panels drawn from a single discipline can be especially prone to groupthink.

Facilitation that encourages candid questioning can help avoid group biases. Individual predictions, especially those organized to elicit the degree of uncertainty in an expert's knowledge, can be aggregated to obtain averages that can be weighted by factors such as years of experience and performance in similar judgment tasks.
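Such a weighted aggregation might look like the following sketch; the expert labels, estimates, and weights are invented for illustration:

```python
def weighted_average(estimates, weights):
    """Combine individual expert point estimates into one prediction,
    weighting each expert, e.g., by years of experience or by past
    performance on similar judgment tasks."""
    total = sum(weights[name] for name in estimates)
    if total <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(estimates[name] * weights[name] for name in estimates) / total

# Three hypothetical experts predicting trees per acre lost:
estimates = {"expert_a": 120.0, "expert_b": 150.0, "expert_c": 90.0}
weights = {"expert_a": 10, "expert_b": 5, "expert_c": 5}  # years of experience
prediction = weighted_average(estimates, weights)
```

The weights here are a stand-in for whatever credibility factors the team agrees on; equal weights reduce this to a simple average of the individual predictions.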

Manage panel assessments to take advantage of the wide range of knowledge and experience. Assemble panels so that the biases of some experts are compensated by the contrasting biases of others. Direct intrapanel communication toward an exchange of insights rather than a push for consensus.

Guidelines for organizing and eliciting expert opinion include:

* Be patient and impartial; understand the decision thoroughly but do not advocate any alternatives.

* Create an environment where the experts can focus on the task. Remove distractions and influences that could bias the experts' thinking. Relieve fears. Less experienced team members may be reluctant to exercise their professional judgment. Emphasize that there are no wrong answers. It is acceptable to be completely uncertain about any prediction.

* Define the measure or scale to be predicted as specifically as possible. Involve experts in the definition process. Experts should agree on the definition and be able to describe the element in their own words and examples. Check for completeness and clarity in outcome definition.

* You may require each expert to provide his/her degree of certainty for each estimate. This can be expressed as a range of values or some numerical or verbal expression of his/her confidence in the estimate. For example, an expert could predict trees per acre lost in a severe epidemic, plus or minus 10 trees, at 90 percent confidence.

* Don't discourage experts from expressing high levels of uncertainty. A wide range in their predictions is a useful piece of knowledge that should be considered in the decision process. Low levels of confidence (a wide range) are not necessarily bad, in that they might reflect a true lack of predictability. Conversely, a high level of confidence (a narrow range) is not necessarily good, since many people can be quite confident without being accurate.

* Don't waste the experts' time and efforts. Assess only the measures that are most important to the decision. Isolate these critical elements by sensitivity testing the causal models before beginning the assessment process.

* Encourage experts to explain their rationale and to consider extremely likely and unlikely outcomes. Ask for estimates of extremely unlikely outcomes first, and then obtain estimates for more likely outcomes. People tend to "anchor" on initial estimates unless they are assisted in recognizing the true variability in outcomes.

* Check experts against any data that might be available about similar measures or events. Ask experts to reconcile differences between their assessments and historical records.

* Present extreme outcomes, asking experts to imagine that they have already occurred and to explain why. Pay attention to how outcomes are sequenced. People often overestimate or underestimate values following extreme outcomes because they forget that the normal pattern is for the next observation to be closer to the average (regression toward the mean).

* Watch for the tendency to overestimate the likelihood of several values occurring together.

For example, the conjunction of a bad fire season, an insect outbreak, personnel shortages, and a budget shortfall may appear to be a vivid and plausible scenario, but the likelihood of all these events occurring together is quite small. Scientists who have been trained to look for connections among elements in ecological or economic systems can sometimes mistake largely random coincidences for patterns.

* Listen for words such as "inevitable," "ultimately," "should have (known)," and "looking back." These are clues of hindsight judgment. Be ready to produce contrary evidence, scenarios, and data that disagree with past experiences and challenge preliminary judgments.

* Rare events are hard to assess because there are little to no data about their occurrence, and human experts have little experience with the event. Judgment is difficult because it calls for distinctions among very small probabilities. There are several techniques for decomposing rare event scenarios into sets of sub-events. The probabilities of the sub-events and sequences are multiplied to obtain an overall likelihood of the event occurring.
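The decomposition arithmetic, assuming the sub-events are independent, is simple multiplication; the event chain and probabilities below are hypothetical. The same arithmetic also shows why a vivid conjunction of events, like the scenario described earlier, can still be quite unlikely:

```python
# Hypothetical rare-event decomposition: the event occurs only if every
# sub-event in the chain occurs. Assuming independence, the overall
# likelihood is the product of the sub-event probabilities.
sub_events = {
    "ignition near the stand": 0.10,
    "initial suppression fails": 0.20,
    "high winds during the burn period": 0.15,
}

p_event = 1.0
for name, p in sub_events.items():
    p_event *= p

print(f"P(rare event) = {p_event:.4f}")  # 0.10 * 0.20 * 0.15 = 0.0030
```

Experts find each sub-event easier to judge than the rare event itself, which is the practical appeal of fault- and event-tree decomposition.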

More information on assessment processes for subjective judgments is provided in Merkhofer (1987), Meyer and _____ (1991), Spetzler and Stael von Holstein (1975), and Cleaves (1994).



deBono (1994): 71-83. Information and thinking.


Bazerman (1986): 192-203. Improving information, judgment, and feedback.

Finkel (1990). Methods for estimating and displaying uncertainty as a decision criterion.

Morgan and Henrion (1990): 47-99. Nature and sources of uncertainty, probability and statistical estimation; 307-323, value of information.


Carpenter (1995). Questions to be answered in communicating uncertainty.


Dawes (1988): 128-129. Scenario thinking.

Geus (1988). Scenario planning for organizational learning.

Wack (1985). Scenario development in strategic planning.


Clemen (1996): 243-458. Value of information in decision analysis.

Dawes (1988): 256-273. Valuation of dealing with uncertainty.

Russo and Schoemaker (1989): 65-116. Pitfalls and improvements in information gathering.


Keeney and Raiffa (1993): 31-65. Structuring decision problems, objectives, and attributes; 354-435, application examples.

Keeney (1992): 1-152. Values-focused decision structures.

von Winterfeldt and Edwards (1986): 163-200; 205-257. Value measurement.


Derby and Keeney (1981). Acceptability and the role of standards in making tradeoffs.

Fischhoff et al. (1981). Review of risk acceptability concept.

Glickman and Gough (1990). Compilation of papers on risk analysis, management, and communication.


deBono (1994): 35-51. Using perceptions and patterns in modeling causal relationships.

Jones (1995): 83-94. Causal flow diagramming; 207-277, probability trees.

Shachter (1986, 1987). Influence diagramming as a way to portray causal patterns.

Senge (1990): 68-126. Using system diagrams, reinforcing and balancing cycles to understand causal processes; 373-390 system archetypes.


Adams and Hairston (1994). Using scientists to provide information to decision makers.

Cleaves (1994). Methods for assessing uncertainty in expert judgments.

Clemen (1996): 219-333. Probability basics and subjective probability estimation.

Dawes (1988): 200-227. Forecasting and prediction difficulties and corrections.

Dunn (1981): 140-218. Forecasting policy alternatives.

Kleindorfer et al. (1993): 67-114. Review of research findings on prediction and inference.

Merkhofer (1987). Methods for quantifying uncertainty.

Morgan (1993). Overview of risk analysis and state of the practice.

Morgan and Henrion (1990): 102-168. Assessing uncertain quantities, eliciting expert judgments; 220-252, communicating results.

Meyer and _____ (1991). Practical approaches to eliciting and analyzing opinions and predictions from experts.

Spetzler and Stael Von Holstein (1975). Protocol for eliciting probability estimates from experts.

Sundararajan (1991). Fault trees, event trees, and other reliability engineering techniques.

von Winterfeldt and Edwards (1986): 90-113. Probability assessment and biases.


Adams, P.W. and A.B. Hairston. 1994. Using Scientific Input in Policy and Decision Making. Oregon State University Extension Service. EC 1441.

Bazerman, Max H. 1986. Judgment in Managerial Decision Making. John Wiley & Sons, New York. 195pp.

Carpenter, R.A. 1995. Communicating environmental science uncertainties. The Environmental Professional 17:127-136.

Checkland, P. 1981. Systems Thinking, Systems Practice. John Wiley and Sons, Chichester. 300p.

Cleaves, D.A. 1994. Assessing uncertainty in expert judgments about natural resources. Gen. Tech. Report SO-110. U.S. Department of Agriculture, Forest Service, Southern Forest Experiment Station. New Orleans, LA.

Clemen, Robert T. 1996. Making Hard Decisions: An Introduction to Decision Analysis. 2nd Edition. Duxbury Press. Belmont, CA. 664 p.

Covello, V.T., D. von Winterfeldt, and P. Slovic. 1986. Risk communication: a review of the literature. Risk Abstracts. 3:171-182.

Cross, F.B. 1994. The public role in risk control. Environmental Law 24: 821-969.

Dawes, Robyn M. 1988. Rational Choice in an Uncertain World. Harcourt Brace Jovanovich, Orlando, Florida. 346 pp.

deBono, Edward. 1994. deBono's Thinking Course, Revised Edition. Facts-on-File, Inc. New York. 196 p.

Derby, S.L. and Keeney, R.L. 1981. Risk analysis: understanding 'How safe is safe enough?'. Risk Analysis 1(3): 217-224.

Dunn, William N. 1981. An Introduction to Public Policy Analysis. Prentice Hall, Inc. Englewood Cliffs, NJ. 388 p.

Evans, James R. 1991. Creative Thinking in the Decision and Management Sciences. South-Western Publishing Company. Cincinnati, OH. 167 p.

Finkel, Adam M. 1990. Confronting Uncertainty in Risk Management: A Guide for Decision-Makers. Resources for the Future. Center for Risk Management. Wash. D.C. 68pp.

Fischhoff, B., S. Lichtenstein, P. Slovic, S. Derby, and R. Keeney. 1981. Acceptable Risk. Cambridge University Press.

Fisher, Roger, William Ury, and Bruce Patton. 1991. Getting to Yes: Negotiating Agreement Without Giving In. 2nd Edition. Penguin Books. New York. 200 p.

Geus, A.P. 1988. Planning as learning. Harvard Business Review. March-April:70-74.

Glickman, T.S., and M. Gough, eds. 1990. Readings in Risk. Resources for the Future, Washington, DC.

Howard, Ronald A. and James E. Matheson, editors. 1983. Readings on the Principles and Applications of Decision Analysis. Vols. I & II. Strategic Decisions Group, Menlo Park, CA. 951 pp.

Janis, I.L. 1972. Victims of Groupthink. Houghton Mifflin. Boston.

Jones, Morgan D. 1995. The Thinker's Toolkit: Fourteen Skills for Making Smarter Decisions in Business and in Life. Random House. 348 p.

Kahneman, Daniel, Paul Slovic, and Amos Tversky, editors. 1982. Judgment under Uncertainty: Heuristics and Biases. Cambridge Univ. Press. 555 pp.

Keeney, R.L. 1983. Issues in evaluating standards. Interfaces 13:12-22.

Keeney, R.L. 1992. Value-focused Thinking. Harvard University Press. Cambridge, MA.

Keeney, Ralph L. and Howard Raiffa. 1993. Decisions with Multiple Objectives: Preferences and Value Tradeoffs. Cambridge University Press. G.B. 569 p.

Kleindorfer, P.R., H.C. Kunreuther, P.J.H. Schoemaker. 1993. Decision Sciences: An Integrative Perspective. Cambridge University Press, New York.

Merkhofer, M.W. 1987. Quantifying Judgmental Uncertainty: Methodologies, Experiences, and Insights. IEEE Transactions on Systems, Man, and Cybernetics. 17(5): 741-7

Meyer, M. and _______. 1991. Eliciting and Analyzing Expert Judgment: A Practical Guide. Academic Press.

Morgan, M.G. 1993. Risk analysis and management. Scientific American. July: 32-41.

Morgan, M.G. and M. Henrion. 1990. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk & Policy Analysis. Cambridge University Press, Cambridge.

Russo, J. Edward and Paul J.H. Schoemaker. 1989. Decision Traps: The Ten Barriers to Brilliant Decision-Making and How to Overcome Them. Simon and Schuster, New York. 280 pp.

Sandman, P.M. 1985. Getting to maybe: some communication aspects of siting hazardous waste facilities. Seton Hall Legislative Journal 9:442-465.

Schein, Edgar H. 1987. Process Consultation Volume II: Lessons for Managers and Consultants. Addison Wesley Publishing Co. Reading, MA. 208 pp.

Schwenk, C. and H. Thomas. 1983. Formulating the mess: the role of decision aids in problem formulation. Omega: The International Journal of Management Science 11(3):239-252.

Senge, Peter M. 1990. The Fifth Discipline: The Art and Practice of the Learning Organization. Doubleday/Currency, New York. 424 p.

Shachter, R. 1986. Evaluating influence diagrams. Operations Research 34:871-882.

Shachter, R. 1987. Probabilistic inference and influence diagrams. Operations Research. 36: 589-604.

Sinden, John A., and Albert C. Worrell. 1979. Unpriced Values: Decisions Without Market Prices. John Wiley and Sons. New York. 511 p.

Slovic, P. 1986. Informing and educating the public about risk. Risk Analysis 6(4): 403-415.

Spetzler, Carl S., and Carl-Axel S. Stael Von Holstein. 1975. Probability encoding in decision analysis. Management Science. 22(3):340-358.

Sundararajan, C.R. 1991. Guide to Reliability Engineering: Data, Analysis, Applications, Implementation, and Management. Van Nostrand Reinhold, New York.

von Winterfeldt, Detlof and Ward Edwards. 1986. Decision Analysis and Behavioral Research. Cambridge Univ. Press. Cambridge G.B. 604 pp.

Wack, P. 1985. Scenarios: uncharted waters ahead. Harvard Business Review. 85(5): 73-89.