An approximate answer to the right problem is worth a good deal more than an exact answer to an approximate problem.
Monitoring will ensure that the
Northwest Forest Plan is implemented as intended, determine whether
the plan is achieving its intended objectives, and provide information
needed to improve the plan. The credibility of the Forest Plan
rests on the credibility of its monitoring program. If the monitoring
program is to succeed, it must focus on well-defined objectives
and use monitoring strategies that are appropriate for the specific
questions being asked. Monitoring must be planned using a fundamental
understanding of the system to be monitored, and the specific
applications for the monitoring results must be identified early
so that appropriate levels of precision and consistency can be
selected. Because resources for monitoring are limited and results
are to serve multiple purposes, objectives for monitoring must
be prioritized according to interagency needs, using information
from multiple scales of analysis, including regional assessments,
basin assessments, watershed analysis, and project analysis.
The call for monitoring pervades the Record of Decision and The Standards and Guidelines for Management of Habitat for Late-Successional and Old-Growth Forest Related Species Within the Range of the Northern Spotted Owl (USDA and USDI 1994a; the combined Record of Decision and the Standards and Guidelines are here referred to as the "ROD", and the strategy is known as the Northwest Forest Plan). Monitoring is the foundation for adaptive management and the means of ensuring that the Northwest Forest Plan will be responsive enough to succeed. The ROD promises that each aspect of the plan will be monitored to determine whether it is carried out and whether it is effective. Unfortunately, monitoring is harder to carry out than to promise. The ROD has raised the expectations for what federal agencies will accomplish, and unless a well-thought-out plan for implementing the ROD's promises is developed soon, it is likely that the failure to meet monitoring objectives will figure prominently in future confrontations. To understand how these promises might be met, we must examine the attributes of successful monitoring programs and explore ways that the analysis structure set out by the ROD can be used to develop such programs.
Meanwhile, occasionally heard above
the call for increased monitoring is another voice with a very
potent message: we already know a lot about how ecosystems work,
so let's dispense with the monitoring and go ahead and do what
we know is right. We know that sediment hurts fish, and we know
how to decrease sediment inputs. Why not spend the monitoring
dollars to solve the problems instead of just studying them? Isn't
monitoring just an excuse to postpone changing the way land is
managed? Are we documenting the extinction of species instead
of preventing it? This argument is such a powerful one because
it is based on experience. A variety of damaging practices have
been maintained in the past using the argument that "Mother
Nature is so variable that we cannot detect the small changes
we might be responsible for; we are still well within the range
of natural variability. We'll continue to monitor, and if we do
eventually detect a statistically significant change, we'll consider
changing our practices then." Coupled with the fact that it is
always possible to design a monitoring program that will avoid
statistically significant results, this argument inevitably
perpetuates the status quo. Why, then, is monitoring such
an important component of the Northwest Forest Plan?
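The statistical escape hatch described above is easy to demonstrate with a quick power calculation. The sketch below uses only hypothetical numbers (a true 20% decline, a natural year-to-year standard deviation of 60% of the mean, five years of data per period), but it shows how a real change can be nearly guaranteed to remain statistically undetectable when sampling effort is low and natural variability is high.

```python
from math import sqrt
from statistics import NormalDist

# All numbers are hypothetical: a true 20% decline in a population
# index (mean 100) with natural year-to-year standard deviation 60,
# monitored for 5 years before and 5 years after a management change.
true_decline, sigma, n, alpha = 20.0, 60.0, 5, 0.05
z = NormalDist()

# Standard error of the difference between the two 5-year means, and
# the power of a two-sided z-test to detect the decline.
se = sigma * sqrt(2.0 / n)
power = z.cdf(true_decline / se - z.inv_cdf(1 - alpha / 2))
print(f"chance of detecting the decline: {power:.0%}")

# Observations per period needed to reach the conventional 80% power:
n_needed = 2 * ((z.inv_cdf(1 - alpha / 2) + z.inv_cdf(0.80)) * sigma / true_decline) ** 2
print(f"observations needed per period: {n_needed:.0f}")
```

With these assumptions there is less than a one-in-ten chance of detecting the decline, and reaching conventional 80% power would require on the order of 140 observations per period--which is precisely why a monitoring program can be designed, deliberately or not, to avoid significant results.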
The ROD explicitly states that "Monitoring is an essential component of the selected alternative" (ROD p.57), and that it will be used to "provide information to determine if the standards and guidelines are being followed...verify if they are achieving the desired results...and determine if underlying assumptions are sound" (ROD p.57). Monitoring results are to provide the basis for altering the provisions of the ROD. Monitoring "will be conducted at multiple levels and scales" (ROD p. E-1), and the program "will be coordinated among agencies and organizations to enhance the effectiveness and usefulness of monitoring results" (ROD p.58). The Research and Monitoring Committee is assigned the task of developing and implementing an interagency monitoring network (ROD p. E-12).
Different components of the Northwest Forest Plan have different monitoring needs. First, the overall plan is based on a hypothesis that species viability can be protected adequately by setting aside blocks of land for special environmental protection. The validity of this hypothesis will become apparent over ensuing decades as populations of species increase, remain static, or decline. The plan's overall success thus must be gauged by a long-term monitoring program. Determining whether the hypothesis is valid also requires information about how conscientiously the strategy was implemented; a plan may fail not because it was invalid, but because it was never carried out. Early observations suggest that some aspects of the Northwest Forest Plan are being ignored: few basin assessments have been done, for example, even though these are a prerequisite for effective watershed analysis. Any assessment of the success of the Forest Plan must include an evaluation of how much of it has actually been implemented, and this requires implementation monitoring at every level.
Adaptive management is a fundamental component of the Northwest Forest Plan, and monitoring is a fundamental component of adaptive management. Adaptive management is a strategy for designing new management protocols whereby a system is managed under the guidance of a testable hypothesis, and management takes place in such a way that the validity of the hypothesis can be disproved. Testing is generally done by monitoring the effects of the management and comparing them to the results predicted by the hypothesis. Monitoring is also important for understanding why an adaptive management experiment didn't work. On the Klamath River, for example, careful monitoring after an experimental increase in escapement for fall and spring chinook disclosed that the reason that populations did not increase markedly was not that the enhanced escapement had no effect, but that 25% of the redds were desiccated when the Bureau of Reclamation decreased dam releases after the eggs were in the gravel (Robert Franklin, Hoopa Tribe, personal communication). In the Adaptive Management Areas designated by the ROD, there is an expectation that the required monitoring will provide jobs for local residents (ROD p. D-9) and thus contribute to maintaining the economic stability of rural communities.
Central to the plan for maintaining ecosystems are the standards and guidelines for the various land-use allocations (e.g. Riparian Reserves, Late-Successional Reserves, and so on). These, too, must be tested both to determine the extent to which they are being implemented and to evaluate their effectiveness in achieving the desired goals. For example, fire suppression efforts have so far been notable in their disregard for the ROD's standards and guidelines, but it will be necessary to determine whether the disregarded guidelines would have been effective in meeting their intended objectives, had they been implemented as required.
The ROD also identifies specific needs for monitoring of environmental disturbances such as fires and management-induced habitat fragmentation (ROD p. E-10), and for monitoring "the type, number, size, and condition of special habitats over time" as surrogate indicators of the status of rare and endangered species (ROD p. E-11).
Another goal for monitoring in support of the Northwest Forest Plan is to detect problems while they can still be effectively redressed. This goal is not explicitly outlined by the ROD, but it is implicit in the general desire to make the Northwest Forest Plan work. To this end, simply monitoring populations is not as useful as monitoring the factors that influence populations. If populations alone are monitored, it may be too late to address a problem by the time a significant downward trend is identified.
In short, monitoring is an important
component of the Northwest Forest Plan because it is an important
tool for making the plan work. To work, the plan must be flexible
enough to address local problems, and monitoring provides the
information needed to flex. To work, the plan itself must be credible,
and monitoring provides the means for testing the credibility
of the plan and allowing the plan to be modified to improve its
credibility. But if monitoring is going to support the credibility
of the Northwest Forest Plan, the monitoring itself must be credible.
The call for monitoring is a controversial
topic in part because there are so many excellent examples of
bad monitoring projects: the world is rich with monitoring projects
that failed to achieve their intended goals. Each of these holds
a lesson for how to do it right, as do the projects that have
actually succeeded. Examination of some of the successes and failures
shows some distinct patterns.
Successes and failures
Graduate students have a penchant for monitoring channel cross sections to detect environmental changes. Although some of these projects produce useful results, those using cross sections established in bedrock-lined channels are categorically doomed to fail. The moral: one must select a monitoring parameter that is an appropriate indicator of the type of change one is interested in. To do this, it is necessary to have a fundamental understanding of how the particular system works. One needs to know what parameters are likely to change, in what ways, over what period of time, and at what sites.
In contrast, the channel cross sections monitored by Redwood National Park have contributed to the understanding of how that system is likely to respond to future large storms. Aggradation of Redwood Creek since the 1964 storm was a major reason for the park expansion. In 1973, 60 cross sections were established along 100 km of channel, and these are resurveyed annually. These data are being used to track recovery of the channel and to determine where future problems might arise. The data were gathered at great expense over a long period, and early termination of the program would have resulted in uninterpretable information. The moral: a successful monitoring program needs a high level of commitment. One needs to know the length of record necessary to provide interpretable results and to establish at the outset the long-term plan for achieving that record.
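The arithmetic behind such a record is straightforward: each resurvey is reduced to a cross-sectional area, and changes in that area measure net fill or scour. The sketch below is illustrative only; the survey points, datum, and function name are invented, not Redwood Creek data.

```python
# Sketch of how repeated cross-section surveys are reduced to a record
# of aggradation or degradation. Survey points are (station, bed
# elevation) pairs in meters; all values are invented for illustration.

def area_below_datum(section, datum):
    """Cross-sectional area (m^2) between the bed and a fixed datum,
    computed with the trapezoid rule; stations must be sorted."""
    area = 0.0
    for (x1, z1), (x2, z2) in zip(section, section[1:]):
        area += (x2 - x1) * ((datum - z1) + (datum - z2)) / 2.0
    return area

survey_1973 = [(0, 10.0), (5, 8.2), (10, 7.9), (15, 8.4), (20, 10.1)]
survey_1974 = [(0, 10.0), (5, 8.9), (10, 8.6), (15, 8.8), (20, 10.1)]
datum = 12.0  # any elevation above both surveyed beds will do

# A decrease in area below the datum means the bed has risen (aggraded).
change = area_below_datum(survey_1973, datum) - area_below_datum(survey_1974, datum)
print(f"net fill at this section: {change:.1f} m^2")
```

Repeated along 100 km of channel, the same reduction yields a spatial picture of where sediment is accumulating and where recovery is under way.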
Monitoring of spotted owls provides a partial success story: by the time the program ended, the Forest Service had a very good idea of how many owls were where. Unfortunately, population trends are difficult to interpret because monitoring protocols were altered as the program proceeded. In addition, realization grew as the project proceeded that owls live long enough that current population, by itself, is not a particularly sensitive indicator of future population. Information about the age structure of the population would have provided a more reliable indicator. The moral: pilot studies for testing both monitoring methods and data analysis strategies are essential.
Some "monitoring" successes are serendipitous. Revisiting the locations at which historical snapshots or aerial photographs were taken can provide valuable information about long-term vegetation changes or changes in channel conditions. This was the approach used by Gruell (1983) to track changes in vegetation patterns between 1871 and 1982 in the northern Rocky Mountains. In essence, a brief photo-interpretation exercise with associated field verification took the place of a 110-year monitoring program. The moral: if one has a well-defined question, then one may be able to find easy ways of answering it that do not involve monitoring as it is commonly understood. Monitoring is simply a tool for answering a question, it is not an objective in its own right.
Looked at as a tool, monitoring has been successful for many narrowly defined applications. In these cases, the particular issue of concern is used to identify the type of monitoring, the location, the attributes, and the timing required. Monitoring of air quality in northwest California is triggered by citizens' complaints. Air quality is thus monitored only when it seems to matter, and samples are taken in such a way as to maximize their likelihood of detecting the particular type of problem of concern at the time. Full-time monitoring for a variety of likely problems over a wide area would provide less useful results at a much higher cost. However, this approach cannot be used to prevent the complaints in the first place, except by the indirect route that if those out of compliance are consistently apprehended, they will make a greater effort to remain in compliance. The moral: the most useful monitoring projects are tailored to answer specific questions, but it must be assured that those questions are relevant to the long-term goals.
Monitoring of neotropical birds presents a variety of challenges, some of which have been successfully met and others not. The most widespread monitoring has been carried out on the basis of sightings, and the annual Christmas bird counts provide a growing length of record for over 2000 locales in North America. The success of this program lies not only in the data provided, but in the awareness it creates among the communities that participate. Such monitoring can answer questions about long-term trends in the distribution and abundance of various species at the end of December. As long as the data are not used to answer inappropriate questions, all is well. However, observations must be interpreted in the context of what is likely to be seen. Observational censuses have been shown to misrepresent the distribution and abundance of species during the breeding season, because sub-adults are much less likely to be seen than their mature, singing counterparts. In this instance, mist-net data may provide more reliable information for specific habitats. The moral: methods must be selected that are capable of answering the question of interest.
Some of the best examples of successful monitoring programs are the long-term stream gauging carried out by the US Geological Survey and the climatic monitoring carried out by the National Weather Service. In both cases, data are of demonstrable utility to economic interests and to public safety, and there are extensive sub-institutions designed specifically to collect data, ensure data quality and consistency, and distribute results in a useful form. The value of the records increases disproportionately with the length of the record, and this provides an incentive to continue collecting data. Recently, however, budget cuts have resulted in closing many river monitoring stations. Advances in modeling methods were considered to have made some of the gauges redundant, and records were already long enough to describe flow-duration curves at many stations. Because the specific questions the gauging data had been used to answer were now being answered in other ways, there was not as much incentive to maintain the record. The morals: collection of information of direct relevance to public health and wealth will be supported by a powerful constituency; it is becoming increasingly difficult to justify gathering data for data's sake; and monitoring may not be the most efficient way of answering the question.
At the other extreme are several
agency inventory projects that are intended to provide a baseline
for future monitoring projects. Inventories were first implemented
for strictly utilitarian reasons: agencies needed to know how
much marketable timber they managed and where it was; and they
needed to know where the erodible soils were. Through time, however,
inventories were implemented with less well-defined goals, and
the result is a series of ecosystem unit maps and channel habitat
maps that are intended to provide background information for all
kinds of different applications, as yet to be defined. Because
they are not targeted to answer specific questions, these inventories
gather cursory information about many attributes that may prove
useful in the future. In many cases, the results are useless.
For example, one stream survey designed for monitoring requires
descriptions of more than 50 parameters, allowing only 100 meters
of channel to be surveyed in a day. The resulting sample size
is so small as to be useless for monitoring. The moral: if a useful
monitoring project is desired, it should be designed to answer
a useful question.
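A back-of-envelope calculation shows why such a survey cannot support monitoring. Only the 100-meters-per-day figure comes from the example above; the season length and network size are hypothetical round numbers.

```python
# Only the 100 m/day figure comes from the example above; the season
# length and the extent of the stream network are hypothetical round
# numbers chosen for illustration.
metres_per_day = 100
field_days_per_season = 100   # a generous snow-free field season
stream_network_km = 2000      # streams managed by a hypothetical forest

km_surveyed = metres_per_day * field_days_per_season / 1000
fraction = km_surveyed / stream_network_km
print(f"{km_surveyed:.0f} km surveyed per crew-year = {fraction:.1%} of the network")
```

A fraction of a percent of the network per crew-year, spread thinly across more than 50 parameters, cannot resolve trends in any of them.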
Patterns of success and failure
The studies described above show patterns in the attributes of successful and unsuccessful programs (Table 6-1). These patterns are further supported by some general observations of agency monitoring efforts and strategies.
Federal land management agencies have a less-than-perfect track record for designing and implementing monitoring programs. Past efforts at monitoring have made several precepts quite clear. First, it is far more useful to monitor a few things well than to spend an equal amount of effort to monitor many things haphazardly. Monitoring should be focused on a few key parameters so that the necessary field work is accomplishable and results are interpretable. No variable should be included in a monitoring program unless it is quite clear from the start how that information will be used: each variable must be carefully justified. Otherwise, the number of variables that "may be important" or "seem interesting" or "would just take a few more minutes to collect, since you're already there" can easily divert the focus from the few variables that are critical to the results. In addition, the major data analysis problem faced by federal land-management agencies is not their lack of environmental data but their abundance of unanalyzed data. The more effort is put into data collection, the less goes into analysis and interpretation.
Related to this problem are those arising from the institutional infatuation with broad-scale consistency. Quite often, a monitoring protocol has been developed and applied everywhere, irrespective of its relevance to the questions important at different sites. A regional monitoring program that requires uniform inventory of the same six channel parameters may be useful for demonstrating that the Skagit River has more woody debris than the South Fork of the Trinity, but fails to produce any useful information about the status of amphibian habitat at either site. Such consistency serves bureaucratic needs rather than resource needs.
Table 6-1. Attributes of a successful monitoring program
Statisticians are involved in planning the program from the earliest stages
There is an institutional commitment to completing the program
The overall program has a well-defined objective
The monitoring strategy is designed to achieve the objectives of the program
The study is designed to answer a narrowly defined question
The study is designed using prior knowledge of:
  what will change
  where it will change
  how much it will change
  when it will change
Monitoring parameters are appropriately sensitive to expected change
Methods other than monitoring may be used if they are more efficient for answering the question
Monitoring methods for each study are designed specifically to answer the question posed
Monitoring protocols are consistent through the duration of a study
A detailed plan for analyzing the data is developed and tested before monitoring begins
All aspects of the monitoring plan are tested during an initial pilot study
There is a clear tie between results and user needs; results will provide useful information
A mechanism is included for communicating and applying the results
Another problem is the focus on variables that cannot be interpreted. Fish populations are important, for example, but population data are not interpretable in isolation. If a decrease in population is noted, it is too late to serve as an early warning to help prevent damage to fish stocks, and it tells nothing about why that change has occurred. Projected changes in ocean conditions suggest that ocean survival may increase over the next few years, and resulting population increases may well be taken to "demonstrate" that management plans are successful. Whenever a variable is selected for monitoring, it is essential that those collecting the information know from the start exactly how the data are to be analyzed, what additional information is needed to interpret the results, and what type of interpretations are possible.
Similarly, anadromous fish may not be the best species to select even as an indicator of past impacts. These species are subjected to such a variety of influences in the ocean that their populations will not give a clear signal of the influence of altered freshwater habitat. Resident "trash" fish, in contrast, may be far more sensitive indicators, because the primary influences on their populations arise from the freshwater habitat.
Interpretation of results is also difficult because different parts of physical and biological systems respond differently to a single type of environmental change. A particular land-use practice may cause aggradation in some parts of a channel system and incision elsewhere, and parts of the aquatic community will be aided by aggradation while other parts suffer. It thus is extremely important to look at trends of change throughout the affected system. A changing parameter at a single site cannot be interpreted without understanding the broader context for that change.
A recent trend in agency monitoring programs has been to design programs to determine whether sites are "healthy" or not. This approach probably arose in an attempt to copy successful EPA-style compliance monitoring programs. With this approach, agency specialists describe "objectives" or "desired conditions" for entire systems, and these are adopted as the standards against which compliance can be judged. Thus, if a 50-ft-wide stream west of the Cascade crest has 26 pools and more than 80 pieces of wood per channel-mile, a temperature of less than 68°F, more than 80% stable banks, and a width-depth ratio of less than 10, it is deemed "healthy". Reality, unfortunately, is more complicated. If a natural channel system is redesigned--as it must be, since natural channel systems do not adhere to these arbitrary criteria--to fit this designer's concept of what a channel should look like, it is likely to be far less habitable to the target species (anadromous fish) than the original channel was. In a natural channel, unconstrained reaches are hot spots for deposition. They have warm water and high biological productivity, and are choice locations for spawning and rearing of anadromous fish. Upstream reaches may be colder than optimum for these uses, but they temper the temperature in the downstream reaches. A temperature of 65°F everywhere in these systems is impossible and undesirable, yet the human conceit that it is possible to reduce complexity by edict has persisted, and this set of objectives has actually been adopted in management prescriptions (USDA and USDI 1994b). The monitoring task then becomes a simple one of EPA-style compliance monitoring: is there enough wood in the stream?
More appropriate would have been
a directive to ensure that different parts of the system are performing
adequately to maintain the integrity of the whole, but this approach
would require a more sophisticated treatment and so was not adopted.
To manage--and monitor--appropriately, we must think like ecologists;
we must recognize the functions and processes that make up an
ecoscape. Why should we expect all channels to look alike all
the time, when that is quite obviously not how the natural world
looks? Many of the most productive sites in a channel system exist
because they were devastated by a catastrophe at some time in
the past. We must start looking at the potential of systems rather
than being blinded by their current conditions; we must recognize
that these catastrophes are necessary to set the conditions for
future prime habitats. A site that looks terrible now may be in
the necessary first phase for setting up ideal conditions in the
future. A monitoring program designed to assess whether an ecoscape
is functioning "normally" is very different than one
designed to count logs in streams.
The dictionary defines "to
monitor" as "to watch or check on", though in common
technical usage "monitoring" usually implies a series
of observations over time (MacDonald et al. 1991). Most considerations
of monitoring leap straight from "we need to monitor"
to "what technique should we use?" without ever
determining what the goal of the monitoring program is, what strategy
is appropriate, or what level of precision is necessary to attain
the goal. It may be helpful to consider the steps that several of
the successful programs described above went through as those programs were developed.
Objectives for monitoring
Whether a monitoring program is
successful or not depends strongly on the nature of the question
it is intended to answer, so great care must be taken to ask an
appropriate question. A statistician can help to fine-tune the
study objectives and identify monitoring strategies that are appropriate
for meeting those objectives. In the words of Ken Roby (Plumas
National Forest, personal communication), "Feeling inadequate
and stupid with a statistician prior to sampling is preferable
to feeling inadequate and stupid with a statistician after sampling."
Table 6-2. Commonly cited objectives
for monitoring. Types of monitoring listed under "comments"
are defined according to MacDonald et al. (1991)
| Objective | Comments | Example |
| --- | --- | --- |
| Early warning of large events | Long-term; accuracy more important than consistency, so improved methods are incorporated as developed | National Weather Service rainfall records used in flood forecasting |
| Early warning of detrimental trends | Long-term; consistency as important as accuracy | Atmospheric CO2; Christmas bird counts |
| Evaluate effectiveness of a practice or method | Timing and attributes keyed to knowledge of response mechanism; may be short-term; usually effectiveness or validation monitoring | USFS BMPEP |
| Test hypotheses of associative or causal relations | Timing and attributes keyed to hypothesis and knowledge of response mechanism; may be short-term | Many research experiments |
| Was action carried out? | Implementation monitoring; timing keyed to timing of activity, attributes to wording of regulations; long-term. If standards are defined by implementation, may be the same as compliance monitoring | County building permit inspections |
| Was goal attained? | Compliance monitoring; attributes keyed to wording of regulations, timing to knowledge of response mechanism and timing of activity; long-term | EPA water quality |
| Define resource through time to facilitate planning | Baseline monitoring, usually short-term | Stream gauging for reservoir planning |
| Define resource through space to facilitate planning | Inventory, usually carried out once | Forest stand inventory |
| Describe something | Not a valid objective; for what purpose is it to be described? | Many inventories |
| Compare areas | Not a valid objective; for what purpose are they to be compared? | Many inventories |
Types of monitoring have been classified in a variety of ways (e.g. MacDonald et al. 1991 p.6-7; ROD p. E-1) with the intent of allowing generalizations about appropriate strategies for given objectives. Table 6-2 presents such a classification from the point of view of the question to be answered. Objectives include getting early warning of impacts, evaluating the effectiveness of a practice or method, testing hypotheses of association, policing regulations, and defining the temporal or spatial distribution of a resource to facilitate planning.
Definition of objectives is usually an iterative process. First, the people calling for the monitoring work must be quizzed to find out explicitly how the monitoring results are to be used. If results are intended to provide early warning of a change so that measures can be taken to avert it, a completely different monitoring strategy and level of precision must be adopted than if the results are intended to provide evidence to support fining a user who is out of compliance with legislated standards. Monitoring for either early warning or regulatory oversight requires a long-term commitment to the monitoring program, since both become part of the way an agency does business.
In most of these applications, accuracy is more important than consistency either through time or in different areas. Thus, if a more effective method for predicting hurricane paths is developed, it is adopted and old methods discarded without regret. Only in the case where the recognition of a long-term trend is important is consistency of primary importance. In this case, new methods generally are adopted only if their correlation to the old method can be carefully defined. Consistency is also important in compliance monitoring, and maintenance of consistency in monitoring is thus a major concern of the Environmental Protection Agency.
Other monitoring objectives imply shorter-term commitments because they are targeted toward answering a specific question. Monitoring can be discontinued once enough data are available to answer the question. This is usually the role of monitoring in research studies and in testing the effectiveness of specific land-management practices. Both of these applications involve hypothesis testing, and the monitoring strategy used is that which can most efficiently disprove the hypothesis. In this case, consistency is important within a study so that results can be interpreted, but each application must develop the monitoring protocol that is best suited to answering the question posed.
"Describing something" and "comparing areas" are often given as the reason for a monitoring program, but these are not useful objectives and will almost always result in an unsatisfactory program. If either of these is put forward as an objective it will be necessary to determine what the purpose of the description or comparison is. Specifically, how will the information be used? Who will use it? Such questioning will usually disclose an underlying reason that falls into one of the categories described in Table 6-2. If the intent turns out to be to test a hypothesis, then it is critical that the hypothesis be stated very clearly at this point.
Similarly, if the intent is to facilitate future management, then the specific management options that may be applied must be identified. In many cases it will then become apparent that a sampling approach may be more efficient and provide more useful information than either monitoring or inventory. Sampling is a strategy for providing a statistical description of distribution or attributes, but which requires much less time and effort than inventory. Sampling, though more efficient, requires a more sophisticated level of planning than does inventory. Sampling must be designed to be statistically valid, and preliminary studies are often used to disclose broad patterns that allow stratification to increase the efficiency of the sampling program. Ideally, such groundwork should also be done for full inventories, but is rarely carried out.
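The efficiency gain from stratification can be illustrated with a small simulation. Everything below--the strata, their values, and the sample sizes--is invented; the point is only that, for the same total effort, a stratified sample estimates a population mean far more precisely than a simple random sample when the strata differ strongly.

```python
import random
from statistics import mean, pstdev

rng = random.Random(42)

# An invented population: 800 "riffle" reaches with little wood and
# 200 "pool" reaches with much more. The strata, their values, and the
# sample sizes are all hypothetical.
riffles = [rng.gauss(10, 3) for _ in range(800)]
pools = [rng.gauss(50, 8) for _ in range(200)]
population = riffles + pools
true_mean = mean(population)

def srs_estimate(n=50):
    """Mean of a simple random sample drawn from the whole population."""
    return mean(rng.sample(population, n))

def stratified_estimate(n=50):
    """Proportional allocation: 80% of the effort in riffles, 20% in pools."""
    return (0.8 * mean(rng.sample(riffles, int(n * 0.8)))
            + 0.2 * mean(rng.sample(pools, int(n * 0.2))))

srs = [srs_estimate() for _ in range(2000)]
strat = [stratified_estimate() for _ in range(2000)]
print(f"spread of simple random sample estimates: {pstdev(srs):.2f}")
print(f"spread of stratified estimates:           {pstdev(strat):.2f}")
```

The stratified estimates cluster much more tightly around the true mean, which is why the preliminary studies mentioned above--the ones that reveal the strata--repay their cost.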
Arguments are often made that inventories have intrinsic value: that they provide information that may be needed to answer questions that have not yet been asked. This point of view tends to lead to inventories that describe multitudes of attributes poorly. Only if there is a particular application planned for inventory results is there any assurance that the results will be useful for something. Agency and academic cultures ensure that undedicated data will not be preserved. Photos have been taken of Sacramento River levees for 50 years now, and they provide a useful monitoring record of riparian vegetation changes over that period. The photos were taken for other, very specific reasons, however, and it was only because they satisfied those reasons very well that they were preserved. Granted, we do not know what questions will be important in the future, but there are so many things that could be measured that we are unlikely to hit on the one that will prove important, or to use a method that provides the information needed. We cannot plan serendipity; we can only see to it that we do the best possible job of measuring the things that we have specific cause to measure. Planning for the analysis and use of monitoring results is an integral part of the monitoring plan, since the intended use determines the form the results must take.
As an aside, we must also make an
effort to uncover the serendipities of the past before they are
lost. An effort put in now to document past data while people
are still alive will be more useful than wildly collecting new
data, and any suggestion that this effort is less important than
the data gathering is a good indication that non-dedicated data
gathered now will encounter a similar lack of respect in the future.
If we don't preserve past data, why should future agency personnel
be interested in ours? As data become more frequently stored on
computers, the likelihood that non-dedicated data will be used
in the future decreases. A dust-covered rolled-up map or file
folder is difficult to overlook forever, while even the most carefully
documented computer file is essentially invisible. A lesson might
be taken from the experience of the USFWS in their successful
attempt to document changes in pool frequencies in the Columbia
Basin. Data had been collected in the 1930s, but were uninterpretable.
By chance, researchers found the last possible person who could
decode the annotations 6 months before he died.
Strategies for monitoring
Appropriate strategies are determined to some extent by the goals of the monitoring program. If the program is intended as an early warning system to allow intervention before an impact is incurred, then attributes would be selected for monitoring which show changes before the impact of concern occurs. Thus, a staff gauge in one's living room is not an effective information source for preventing damage from flooding. If the intent is to reveal the relation between a cause and an effect, then both the cause and the effect must be monitored, and the living-room staff gauge may then prove useful.
The most important background for
planning an effective monitoring strategy is a strong understanding
of how the various components of the system to be monitored interact
with one another; it is important to understand how that system
"works". Even for the same types of disturbances in
the same system, responses will differ at different times and
at different locations. Each setting will have a different sensitivity
to change and a different lag-time for the expression of the change.
It is necessary to have some idea of what will change by what
amount in what parts of the system before appropriate methods
can be selected.
A variety of approaches can be taken to the problem of how to distribute one's monitoring efforts through time. Often it is assumed that monitoring must take place at regular intervals, such as occurs when channel cross sections are remeasured yearly during the low-flow season. More useful, however, is to select a time frame that is based on the expected trend and pattern of variation of the attribute being monitored and on the type of information needed to meet the monitoring objectives.
A regular sampling interval introduces the risk that monitoring results may be biased if a response is not randomly distributed through time. Sampling should thus be random in time if information about temporal response patterns is lacking. More often, however, a lot is known about the temporal pattern of responses, and this information can be used to "stratify" the sampling to reduce the variance. Annual cross sections work because they are made after the major disturbances have occurred each year; cross sections remeasured during the high-flow season would show a higher year-to-year variation because their timing relative to large events would differ each year.
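The variance payoff from stratifying in time can be sketched with made-up numbers. Everything here is hypothetical (the seasonal means, standard deviations, and sample sizes are invented); the point is only that when the seasons behave differently, allocating samples by season tightens the estimate relative to ignoring the pattern:

```python
import random
import statistics

random.seed(0)

# Hypothetical population of cross-section measurements: wet-season
# values are higher and far more variable than dry-season values.
wet = [random.gauss(140, 30) for _ in range(600)]
dry = [random.gauss(60, 5) for _ in range(600)]
population = wet + dry

def srs_mean(n=20):
    """Simple random sampling: ignore the seasonal pattern."""
    return statistics.mean(random.sample(population, n))

def stratified_mean(n=20):
    """Stratified sampling: split the effort between the two seasons."""
    return (statistics.mean(random.sample(wet, n // 2)) +
            statistics.mean(random.sample(dry, n // 2))) / 2

# Replicate each design and compare the spread of the estimates.
srs_spread = statistics.stdev(srs_mean() for _ in range(2000))
strat_spread = statistics.stdev(stratified_mean() for _ in range(2000))
print(srs_spread, strat_spread)   # stratified estimate is much tighter
```

Both designs use the same 20 measurements per year; the stratified design gains simply by using what is already known about when the variation occurs.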
Knowledge of the driving variables usually allows the efficiency of monitoring efforts to be increased. In the case of suspended sediment, for example, most of the action occurs at high discharges. If the goal is to use monitoring to estimate the total sediment load of a stream, then it is most important to measure concentrations during the highest discharges, and sampling frequency might be made proportional to discharge. On the other hand, if the intent of the program is to detect non-compliance to water-quality standards during engineering work, sampling intensity would be keyed to the timing of the engineering work.
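The suspended-sediment case can be sketched in the same spirit. The discharge record and the rating curve below are invented for illustration; sampling days are drawn with probability proportional to discharge, and each observation is reweighted (a Hansen-Hurwitz-type estimator) so the load estimate stays unbiased while the effort concentrates on the high flows that carry most of the load:

```python
import random
import statistics

random.seed(1)

# Hypothetical year of daily discharges (heavy-tailed: a few storm days).
discharge = [random.lognormvariate(1.5, 1.0) for _ in range(365)]

def concentration(q):
    # Assumed rating curve: concentration rises steeply with discharge,
    # so the highest flows carry most of the annual load.
    return 0.5 * q ** 1.8

daily_load = [q * concentration(q) for q in discharge]
true_load = sum(daily_load)

def uniform_estimate(n=30):
    """Sample n days at random and scale up."""
    days = random.sample(range(365), n)
    return 365 / n * sum(daily_load[d] for d in days)

def flow_proportional_estimate(n=30):
    """Sample n days with probability proportional to discharge,
    reweighting each day by 1/(n * p_d) to keep the estimate unbiased."""
    total_q = sum(discharge)
    days = random.choices(range(365), weights=discharge, k=n)
    return sum(daily_load[d] * total_q / (n * discharge[d]) for d in days)

# Replicate both designs: the flow-proportional estimates scatter far
# less around the true load than the uniform-in-time estimates do.
uni = [uniform_estimate() for _ in range(500)]
pps = [flow_proportional_estimate() for _ in range(500)]
print(statistics.stdev(uni) / true_load, statistics.stdev(pps) / true_load)
```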
The length of time required to provide an interpretable monitoring record depends in part on the variability of the attribute being monitored. If rare events are important influences on the attribute of interest, then the sampling period must be long enough to represent those rare events. In this case, year-to-year trends measured over short periods may represent random variation, or reflect a well-defined but atypical rate, or even show a trend opposite to the long-term trend. Either a lengthy monitoring period must be adopted, or surrogate information must be used to extend the record backward through time, or a different attribute must be selected for monitoring.
Lengthy monitoring periods are also
necessary if the response is expected to be slow. Thus, ecological
effects of a silvicultural treatment will not be fully understood
until multiple cutting cycles have taken place, and this may require
over a hundred years. In this case, three strategies may prove
useful. First, some information about some aspects of the response
may be gleaned by retrospective studies of "experiments"
that occurred in the past. Thus, the recovery rate of soil bulk
density after a road is abandoned may be evaluated by measuring
bulk densities on roads abandoned at different times in the past;
monitoring is not even necessary. Second, experiments might be
done to reveal the cause of an aspect of the response. If the
timing of the cause is known, then the response can be predicted
through time. In this case, monitoring may be necessary only for
the period needed to evaluate the causal relation. Modelling can
then take the place of long-term monitoring. Third, sampling for
a short time over very wide areas can sometimes take the place
of sampling for a very long time over a small area. This substitution
of space for time can be useful if large events occur frequently
somewhere in a region but recur infrequently at any particular location.
The variety of possible strategies for arraying monitoring locations is similar to that for the temporal distribution of monitoring. In this case, too, in the absence of other information, sampling must be randomly distributed if the assumptions on which statistical analysis is based are to be satisfied. However, in this case, too, other information is almost always available, and sampling can be arrayed in such a way as to greatly increase its efficiency. Thus, if something is known about how an attribute is likely to vary by location, then this information can be used to stratify the area. Random sampling then occurs within each stratum to decrease the variance. With decreased variance, fewer observations are necessary to attain a specified level of confidence. Sampling should not be carried out at uniform spatial intervals, since the results will be very misleading if the feature being evaluated has a characteristic spacing.
The overall distribution and the particular sites that are to be monitored depend on the nature of the question being asked. Once again, implementation monitoring will use a sampling procedure that is keyed to the distribution of the activities being policed, and compliance monitoring will often use a distribution set by the regulations that established the standards of compliance. Effectiveness monitoring and monitoring for hypothesis testing will again use a strategy that is designed to most efficiently provide the specific information that is needed. These will not follow the same protocol everywhere, since the relevant questions differ slightly everywhere.
Often there is a trade-off between
monitoring a few sites in depth or monitoring many at a more cursory
level. Again, the appropriate approach depends on the nature of
the question. In some cases, the two strategies can be combined,
with a subset of sites monitored in more detail. Information from
the broader range of sites can then be used to define the context
for or scale up the detailed measurements.
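One standard way to combine the two strategies is double (two-phase) sampling: record a cheap index at many sites, make the costly detailed survey at a subset, and use a regression fitted on the subset to scale the cheap index up to a regional estimate. The numbers, the "quick index", and the site counts below are all invented for illustration:

```python
import random
import statistics

random.seed(2)

# Hypothetical region of 200 reaches.  A detailed habitat survey gives
# the true score; a quick visual index is cheaper but noisier.
true_score = [random.gauss(50, 10) for _ in range(200)]
quick_index = [s + random.gauss(0, 5) for s in true_score]

# Detailed surveys are affordable at only 20 reaches.
subset = random.sample(range(200), 20)
xs = [quick_index[i] for i in subset]
ys = [true_score[i] for i in subset]

# Least-squares line relating the detailed survey to the quick index.
mx, my = statistics.mean(xs), statistics.mean(ys)
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
         sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

# Scale up: predict the regional mean from the quick index everywhere.
regional_estimate = statistics.mean(intercept + slope * q for q in quick_index)
print(round(regional_estimate, 1))
```

The 180 reaches without detailed surveys still contribute information, through the fitted relation, that 20 detailed surveys alone could not provide.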
Most monitoring involves observations of something that is expected to change through time. There are usually three parts to this problem: something triggers a change, several things propagate the change, and the thing of greatest interest responds to the change. For example, 1) a road crossing fails, 2) the stream transports the sediment downstream, pools fill in, and predation increases, and 3) trout populations decline. Any of these steps in the chain of causality can be selected for monitoring, but which is most useful depends on the question being asked and on what other influences affect each of the steps. For convenience, we will refer to the attributes describing the first step as "driving variables", those describing the second as "intermediate variables", and those describing the third as "target variables". These terms, of course, are defined only relative to the interest of the investigator: if the investigator were interested in channel morphology instead of fish, then pool filling would be the target variable. "Index variables" will be used to mean attributes that are measured as surrogates for the target variables.
In the case above, if the objective of the monitoring program is to prevent impacts to trout populations by allowing early intervention, then the occurrence of crossing failures must be monitored. Information about subsequent steps would be available too late to allow effective intervention. On the other hand, if the objective of the monitoring is to document variations in trout populations, the trout population would be the useful variable for monitoring. As a third possibility, if the intent is to provide information needed to correlate the extent of pool filling with trout populations, then both pool filling and trout populations would be measured. Note that this last option does not allow the cause of the correlation to be understood: the effect may be due to increased predation, to altered temperature, to altered food resources, or to a variety of other influences acting in combination. To untangle the cause-and-effect relations would require monitoring of many different attributes in a series of controlled experiments. Similarly, it is not possible to infer a cause-and-effect relation between trout populations and crossing failures simply by monitoring these two attributes; information about a variety of other influences would also need to be known.
Many attributes of particular interest are difficult to measure (e.g. evapotranspiration), or have high variance due to the variety of factors that influence them (e.g. salmon populations), or are integrative concepts that do not necessarily have direct physical manifestations (e.g. ecosystem health). In each case, people try to define index variables that allow the response of the attribute of interest--the target variable, by our definition--to be estimated. In particular, there is a widespread drive to find a suite of variables to indicate whether the system is OK, and if not, how broken it is. This effort has resulted in adoption of ERAs for indexing cumulative watershed effects, woody debris loading and pool spacing (among others) for indexing salmonid habitat quality, and percent change in maximum turbidity for indexing water quality. Further, the perceived value of regional score-cards has created a drive for consistency in the use of index variables; agencies want to be able to objectively compare the health of the Tuolumne River with that of the Snoqualmie.
Index variables can be useful, but only if their relation to the driving variables and target variables is known. Because these relations vary from site to site, region to region, and through time, no index will be suitable everywhere. Any index variable will indicate only a few types of impacts, and its responses to those impacts will not uniformly be the same as those of the target variable. One of the best-known indicator variables is the health of canaries in coal mines. In this case, the driving variable is toxic fumes, and the target variable is human health. The system only worked because canaries are sensitive to the driving variable of interest, and the system worked only for that particular variable. Canaries would not have been an appropriate indicator for infectious diseases, for example, or for food poisoning from mess-hall meals.
Monitoring objectives also must be considered in the selection of index variables. In the coal-mine case, the monitoring objective is to provide early warning to avert disaster. The system only worked because there was a lag between canary death and human death. Canary health would not have been an appropriate index variable as an early warning for cave-ins even though both canary health and human health respond in the same way to this driving variable.
Index species are sometimes selected to represent the overall "health" of a community. Selection of an appropriate index species depends heavily on an understanding of the types of changes that are expected in the system, the way that those changes are likely to come about, the likely response of the entire suite of species to those changes, and the likely timing of those responses. Only with this information can a species be identified that will show an interpretable response over the time-scale necessary. For example, monitoring of coho salmon populations will not provide as sensitive an indicator of habitat change as will measurement of resident trout populations. It may be even more useful to monitor a suite of indicator species with different anticipated responses to the expected change. Such an approach would allow extraneous influences to be more easily detected and interpreted and would provide information about a wider variety of possible impacts.
Even if the response of a single species is to be monitored, particular attributes of that response must be selected, and these also represent index variables. The same types of considerations therefore must go into selection of an appropriate attribute. Thus the number of out-migrating smolt may not be useful as an indicator of the future population without additional information on the size of the individuals (larger fish have higher ocean survival) and the timing of the out-migration (early out-migration results in higher loss to predators).
A variety of biological indices have been developed that rest not on the response of a particular organism, but on a composite of measurements that describe aspects of biological communities. In fact, communities can be monitored only through the use of index variables. These descriptors include various indices of biodiversity, such as species richness, relative abundance, and a variety of more complicated combinations of measurements. None of these can be used in the absence of an understanding both of how the index relates to the monitoring objective, and of how the index is influenced by the physical and biological processes that affect the site of interest. For example, there can be important biological impacts to communities even if driving variables are altered by sub-lethal amounts. A small change in temperature can alter the relative abundance of juvenile salmonids and non-salmonids in a stream, even though both types remain present. Thus, species richness is useless as an index for providing early warning, since richness usually does not change much until the system falls apart. In this case, monitoring of relative abundance may provide a more sensitive warning.
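The sub-lethal temperature example can be made concrete. The species counts below are invented; the point is that richness misses the shift entirely, while an index sensitive to relative abundance (here the Shannon index, one common choice) registers it:

```python
import math

def richness(counts):
    return sum(1 for c in counts if c > 0)

def shannon(counts):
    # Shannon diversity: sensitive to relative abundance, not just presence.
    total = sum(counts)
    return -sum(c / total * math.log(c / total) for c in counts if c > 0)

# Hypothetical counts before and after a small temperature increase:
# all four species survive, but the balance tips toward non-salmonids.
before = [40, 30, 20, 10]   # coho, steelhead, dace, sculpin
after = [5, 5, 60, 30]

print(richness(before), richness(after))   # 4 4 -- richness sees nothing
print(round(shannon(before), 2), round(shannon(after), 2))
```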
Biodiversity indices are particularly difficult to interpret unless the system is very well understood. Part of this difficulty arises from the scale dependence of biodiversity. On a small scale, low species richness may be an important characteristic of a sustainable community. Late-seral redwood forests, for example, are known for having low biodiversity, and an increase in species richness at such sites is an indication that the system is being degraded. Species richness may be high in natural systems at a larger scale, however, because the larger system is made up of patches of distinctive, low-richness communities. At this scale, human disturbance may result in decreased species richness because the communities are homogenized by management, even though the altered community is "richer" than many of the original components were. At this scale, decreased species richness is an indication of impact. At the scale of a continent, the interpretation is again reversed. There have been more species introductions to North America since European settlement than there have been extinctions, so the overall "biodiversity" of the continent has increased at the same time that biodiversity is decreasing at the global scale.
But despite the practical and philosophical
difficulties, there remains a very real need to "watch or
check on" the status of entire ecosystems, and this cannot
be done simply by monitoring various biological and physical components
of that system. This can only be done by developing a sophisticated
understanding of how particular ecosystems "work" over
different scales of time and space: what are the interactions
and processes that maintain the system, and how are these distributed
through space and time? Only through this type of understanding
are we freed of the anthropocentric standard of compliance. Without
this understanding we consider an undesirable ecosystem condition
to be one that has shifted away from the values we think are important.
In the long term, however, such changes may be essential to the
maintenance of the system. At this point, we cannot presume to
know what conditions a sustainable ecosystem should assume through
time. We can only know that the types of changes wrought by recent
human intervention are probably not appropriate for maintaining
the suite of ecosystem conditions that pertained before the interventions
took place. Looked at from this point of view, the best index
variables for ecosystem distress may be distribution, type, and
intensity of recent human activities.
Habitat variables as index variables for biological attributes
Because populations show wide fluctuations through time and often respond to influences other than the driving variables of primary interest, habitat variables (e.g. pool depths and woody debris frequency) are often selected for monitoring in place of target variables (e.g. salmonid populations). Habitat variables are also commonly used as indices for communities, since biological communities are strongly influenced by their physical and biological setting. Interest in habitat variables also arises from agency culture: in the past, the Forest Service was given responsibility for wildlife habitats, while the US Fish and Wildlife Service had responsibility for the animals that inhabit them. The Forest Service has thus concentrated on habitat monitoring, while the USFWS has put more effort into population monitoring.
Unfortunately, most of the variables in common use are useless for most monitoring objectives. Changes in channel morphology (e.g. width-depth ratio, pool spacing, pool depth, bank morphology) cannot be used either for providing early warning or for improving management methods because their response lags far behind the driving variables. By the time a response is detected, the system is not repairable, and the connection between the response and the particular practices that caused the response is no longer visible.
In addition, changes in the intermediate habitat variables cannot be used to infer either the identity of the driving variable or the response of the target variable. Too little is known about how a change in bank stability actually affects in-stream communities, for example. If correlations between population change and habitat change are strong, then the habitat variables may tentatively be used as indices. The practice, however, is dangerous until the reason for the correlation is understood. As soon as the reasons are known, the information can be used to alter monitoring programs to increase their efficiency. Thus, population change may be correlated to changes in bank stability simply because both variables are responding to different mechanisms triggered by the same driving variables, or because erosion rates are increasing at the same time that fishing intensity is increasing.
Monitoring becomes less cumbersome and more useful as the reasons for change are increasingly understood. Understanding allows the selection of more useful attributes for monitoring and increased sophistication in the interpretation of monitoring results. It may thus be useful to preface a long-term monitoring program with a series of short-term studies to improve the understanding of a particular system.
For terrestrial species, habitat needs are relatively poorly known, so monitoring of habitat variables is useful primarily to develop correlations. Even salmonid habitat needs are not fully understood. Most of the habitat information available describes use during the daylight hours of summer, yet the habitat stress caused by winter storms may be more important than summer conditions. It thus has been difficult to establish the link between habitat characteristics and biological attributes, and until this link is made, use of habitat as a surrogate may provide results that are difficult to interpret. Monitoring of interrelations between habitat and population is required before habitat can reliably be used as a surrogate.
Even more difficult to interpret
are the results of monitoring studies of macroinvertebrates because
the linkage between the index and the target variable is even
less well understood. In this case, aquatic macroinvertebrates
are often used as a surrogate for habitat conditions, which are,
in turn, an implicit index for populations. There has often been
a tendency to adopt surrogate variables about which little is
known: the less that is known about such variables, the less complicated they seem.
This tendency is partly responsible for fisheries biologists'
use of inappropriate geomorphological indices such as bank stability
and channel width-depth ratio. It should be evident that interdisciplinary
communication is essential before specialists from one discipline
adopt a monitoring scheme that depends on an understanding of
a different discipline.
Planning the details
Oddly, much more of the monitoring literature is devoted to describing how to monitor something than to discussing why it should be monitored, or whether it should be monitored at all. Much is known about monitoring tactics; much less is known about monitoring strategy. Tactics thus will not be considered here. That being said, it is still necessary to address the strategy of planning tactics.
Once an overall strategy for monitoring has been selected that identifies the types of attributes that might be useful and the appropriate locations and timing for monitoring, one has a choice of the level of precision to strive for. There is usually a strong correlation between the precision of monitoring results and the effort required to produce them. High-precision studies are usually costly, require strong technical support, and can provide information only from a few sites. In many cases, the level of precision they provide is not justified by the application for which they are intended, and the monitoring objectives could have been as well met by less precise results. In almost as many cases, objectives could have been better met by much less precise results at many more sites.
An appropriate level of precision
can be selected by revisiting the question of how the data are
to be used to identify the minimum precision necessary to satisfy
the monitoring objectives. The study's statistician can describe
the relative strengths of different sampling plans, and can explicitly
illustrate the trade-offs between the number of samples at a site
and the number of sites sampled. Selection of a measuring method
is then the last step: a method is selected that provides the
necessary precision with the least effort.
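The trade-off between precision and effort can be made explicit with the standard sample-size formula for a confidence interval on a mean, n = (z*sd/E)^2. The pool-depth numbers below are hypothetical (a pilot-study standard deviation of 0.3 m is assumed):

```python
import math

def samples_needed(sd, margin, z=1.96):
    """Samples required so a 95% confidence interval on the mean is no
    wider than +/- margin, using the normal approximation n = (z*sd/E)**2."""
    return math.ceil((z * sd / margin) ** 2)

# Hypothetical pool-depth survey: sd of 0.3 m from a pilot study.
print(samples_needed(0.3, 0.10))  # -> 35 measurements for +/- 10 cm
print(samples_needed(0.3, 0.05))  # -> 139; halving the margin roughly quadruples the work
```

The quadratic cost of precision is exactly why the minimum precision that satisfies the monitoring objective, rather than the maximum achievable, should drive the design.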
Interpreting monitoring results
Monitoring results are only as useful as the inferences that can be drawn from them; if results cannot be interpreted, they are useless. The strength of any inference must be evaluated statistically, so at this point the pay-off for the conscientious inclusion of statistical expertise in project planning becomes clear. There is a common misconception that high levels of statistical confidence are required only for research; in the words of one National Forest Systems hydrologist, "For land management purposes, a 60% confidence level is good enough." But looked at from the point of view that a 60% confidence level means that there is a 40% chance that the results are wrong, the fallacy of the statement becomes clear. If the public knew that there was a 40% chance that the management practices used by the agencies are destroying public resources, the public would not have much confidence in the agencies' ability to manage land. Or looked at another way, if you knew that there was a 40% chance that the plane you are about to board is going to crash, would you step on the plane?
Statistical tests indicate whether or not the monitoring results show non-random patterns, but the interpretation of those patterns is an entirely different issue. Unless the reason for the patterns is understood, the results can be described but not interpreted. If index variables were used, interpretation is even more difficult. Not only must the link between the driving variable and the index variable be understood, but so must the link between the index variable and the target variable. Thus, simply knowing with a 95% confidence level that the habitat changed doesn't necessarily mean that there was any effect at all on fish populations.
Even if the target variable is the attribute monitored, interpretation may still be difficult. In a larger context, for example, is it bad if some populations decrease? To understand the relevance of a measured population decline at a site, it is necessary to know something about why the population declined; how populations behaved under natural conditions; how neighboring populations are behaving; and how rapidly populations can recover. If a local decline is due to a natural disturbance that is itself necessary for maintaining the necessary habitat (e.g. a wildfire), then the decline is a necessary phase in maintaining the long-term sustainability of the population.
Finally, monitoring results are
useless unless they are used. Into every monitoring plan should
be built a procedure for fulfilling the monitoring objectives.
If the monitoring is to lead to an improvement of management practices,
then there must be a mechanism for translating the results into
a form that allows the improvements to be planned. In some cases,
this threat of effective change is itself a deterrent to communication.
If an agency suspects that results may show that past practices
were inadequate, then there is often an unspoken desire that those
results not be communicated. This response has been evident in
the agencies' reluctance to monitor the effectiveness of habitat improvement structures.
Federal land management agencies have promised to carry out monitoring, and strategies and methods exist to accomplish the intended monitoring. Now come the logistical challenges. In reality, there is never enough money, time, or
staff available to carry out the ideal program, whatever the ideal
program is. To make monitoring in support of the Northwest Forest
Plan work as intended, it will be necessary to be very selective
in the monitoring programs chosen for implementation. We need
a strategy for selecting the appropriate monitoring work and for
getting the work done.
Prioritization of monitoring tasks
Watershed analysis teams have the task of recommending priorities for monitoring at the watershed scale, but this is not the relevant scale for implementing the overall monitoring effort. The overall program must be shaped by the overall strategic goal of making the Northwest Forest Plan work. These decisions must be made at an interagency level. Part of this goal, of course, is to provide the local feedback necessary to support land-management decisions at the local and watershed scale. But if these local needs are considered in the context of the broader needs, then monitoring programs may be designed to serve both scales of need at the same time.
The first step in designing such an approach is for the agencies jointly to identify their information needs and the uses for the information; this is the intended role of the Interagency Research and Monitoring Committee (ROD p. E-12). These needs can then be prioritized by their importance for meeting ROD goals and by the level of return the information will provide. The ROD specifies that not all monitoring will be done everywhere: "The level and intensity of monitoring will vary, depending on the sensitivity of the resource or area and the scope of the management activity" (ROD p. E-2). The overall strategy for site selection and monitoring intensity thus can be determined by the overall needs.
Any plan for monitoring has to be evaluated in part by its cost. Unfortunately, there is not much information available about how much it costs to monitor what attributes using what methods. It would be useful for this type of information to be compiled so that relative costs are easily compared while programs are being planned. It is likely that such a compilation will quickly show that much of the necessary information is not practical to obtain by monitoring. In such cases, efforts can be put into alternative means of acquiring the information; research studies might be funded to develop predictive tools, for example, or retrospective studies based on aerial photographs or existing data might be used.
Any overall monitoring strategy will need to address both short- and long-term needs. In the past, short-term needs have received more attention than long-term ones because today's political alligators bite more convincingly than tomorrow's. Many of the questions faced by land management agencies can only be answered through long-term studies. It will also be necessary to incorporate into any monitoring plan an appreciation for the importance of rare events. Intensified monitoring after such events will be needed to better understand their role in maintaining ecosystems. Money for monitoring the effects of such events might be included as a part of emergency budgets for dealing with fires and storm damage, since the agencies already accept the need for spending in time of disasters.
It is important to recognize the role that agency culture has played in producing such a record of unenlightened monitoring work. In the first place, the funding structure within agencies does not facilitate the development of techniques. Development requires a certain amount of failure, and agencies are not willing to fund the trial and error that goes into achieving a useful product. Instead, workers are expected simply to go out and do it, and they are considered incompetent if they admit that they don't really know how. A need then develops to establish the validity of the methods used rather than to test it; agencies are not comfortable questioning the validity of methods they have already adopted. Because no one knows how poorly the methods work, there is little motivation to change them. In addition, the legislative context for federal land management has generated a reverence for consistency in the application of methods. Whether a method is valid, or whether it represents the best available approach to a particular issue, is often considered secondary to the perceived need to do something in the same way everywhere. These cultural barriers must all be confronted and overcome if a useful monitoring program is to be implemented.
The role of watershed analysis in planning monitoring projects
The ROD specifies that "monitoring will be guided by the results of watershed analysis" (ROD p. B-32). Watershed analysis is to provide the basic understanding of what issues are important in an area and of the types of processes and interactions that influence those issues. The analysis will identify the most important of these, and will discuss their spatial and temporal distributions. In essence, the analysis provides the background information that is necessary to identify the issues for which monitoring information will be useful; to identify the attributes that are likely to be sensitive to the changes of concern; and to determine the locations, timing, and nature of likely changes. Because the context for monitoring needs will be understood, there will be less likelihood of producing uninterpretable results or results no one is interested in. The relevance of the intended monitoring work will be immediately apparent, and this should lead to increased agency commitment to completing the monitoring programs.
Watershed analysis also reveals what is not known about a system, and prioritizes the types of information needed to better understand the system. This information, too, can guide monitoring by revealing the questions toward which research will be most usefully directed. In addition, a lot of existing information is brought to light by watershed analysis. Such information can provide the basis for follow-up comparisons, or can provide information that would otherwise need to be gathered through monitoring.
The ROD specifies that different types of monitoring will be done at different intensities in different areas, according to what is needed in each area (ROD p. E-2). To a large extent, watershed analysis defines what is needed and provides the information required to design an effective monitoring strategy; "specific monitoring objectives will be derived from results of the watershed analysis" (ROD p. B-32). By showing the temporal scales over which change occurs, the analysis can be used to identify the useful duration and periodicity of monitoring. By showing the spatial scale and location of likely changes, the necessary spatial distribution of monitoring sites can be planned. This exercise will also narrow the list of potential attributes for monitoring and will disclose the attributes for which approaches other than monitoring must be taken. If landsliding is driven by rare storms, for example, then a yearly census of landslide frequency is essentially useless.
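The rare-event point can be made quantitative with a back-of-the-envelope calculation. Under the simplifying assumption that an event with an average recurrence interval of T years occurs independently each year with probability 1/T (an assumption for illustration, not a claim from the ROD), the chance that a monitoring record of a given length samples such an event at all is easy to compute, and it shows why short records say little about storm-driven processes:

```python
def prob_record_includes_event(record_years, recurrence_interval):
    """Probability that a monitoring record of `record_years` years
    contains at least one event with the given average recurrence
    interval, assuming independent annual occurrence probabilities."""
    annual_p = 1.0 / recurrence_interval
    return 1.0 - (1.0 - annual_p) ** record_years

# A 5-year record has only about a 1-in-4 chance of sampling a 20-year storm:
print(round(prob_record_includes_event(5, 20), 2))   # 0.23

# Even a 30-year record samples a 50-year event less than half the time:
print(round(prob_record_includes_event(30, 50), 2))  # 0.45
```

The numbers here are illustrative, but the pattern is general: a monitoring program whose duration is short relative to the recurrence interval of the driving events is more likely to miss those events than to record them.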
Of particular importance is the fact that recommendations for monitoring are to be developed by the interdisciplinary watershed analysis team, not by individuals working within single disciplines. Recommended priorities for monitoring will thus be identified through an understanding of the entire system and will not simply reflect the desires of individual disciplines. Monitoring projects will therefore be more likely to serve a general need than to serve themselves.
As with watershed analysis, everyone has a different expectation for what monitoring will be done and what type of information it will provide. Many of the uncertainties about watershed analysis implementation or management options are deferred with "we'll know what to do as soon as the monitoring program gets rolling". Like GIS, monitoring is being used as a catch-all future solution to today's problems: monitoring is to satisfy the survey-and-manage requirements for the species listed in the ROD; monitoring is to satisfy oversight requirements; monitoring will provide the jobs needed to maintain rural communities; monitoring is to demonstrate that the AMA experiments are successful so that new practices can be adopted everywhere within a few years. And someone else is going to pay for the monitoring. As long as these expectations are maintained, any monitoring program will be perceived by most as a failure. It will thus be important to educate all involved about the real capabilities, responsibilities, and limitations of monitoring.
In the first place, everyone must realize that successful monitoring requires a high level of interagency commitment. Research and regulatory agencies must be involved in planning monitoring projects so that results will be credible, and management agencies must commit the staff and resources needed to see that the selected projects are properly carried out. There will not be a large sum of money freed up to make monitoring happen; it was considered a necessary component of normal business even before FEMAT, and funding for the earlier programs will now be applied to Forest Plan monitoring (ROD p. E-2). In addition, monitoring cannot be carried out as a haphazard collection of unrelated projects. There must be some overall strategy for the program so that results can be used effectively.
Second, it will take a very long time before some of the most sought-after results are available. Many results of adaptive management experiments will become apparent only after a complete cycle of logging and regrowth; others may require the occurrence of a large storm or wildfire before they become visible. An adaptive management experiment may thus not be deemed "successful" until 80 years have passed. The expectation that the practices put on the ground as experiments now will be validated and in common use within a few years is unrealistic. Any short-term monitoring results must define their range of applicability by stating explicitly what types of events and system stresses occurred during the measurement period. The size and frequency of these events can then be examined in their long-term context to determine whether or not the system has actually been tested yet. An interagency commitment to monitoring can sometimes ensure that long-term programs are maintained: such a partnership creates "peer" pressure on each partner to continue its promised support. Clear articulation both of the goals of a monitoring project and of the specific way that results will be applied is also useful for building long-term commitment.
The ROD's expectations may also
need to be managed. In some cases, accumulating experience may
show that the ROD's requirements are unrealistic. If the plan
is attempted as written and aspects are found to be unworkable,
then an argument must be made to demonstrate the need for altering
the ROD's provisions. The argument is published in the Federal
Register, and permission is sought from Judge Dwyer's court [the U.S. District Court for the Western District of Washington]. If permission is granted, the ROD provision
may be altered. This route, however, is a last resort. The ROD
provisions exist because the fundamental precept of the Northwest
Forest Plan is that appropriate forest management is management
based on an understanding of the forest ecosystem. Monitoring
provides one very effective way of attaining the information needed
for enlightened forest management.
Ken Roby and Gordie Reeves provided
thoughtful commentary on monitoring strategies and limitations.
Gruell, G.E. 1983. Fire and vegetative trends in the Northern Rockies: interpretations from 1871-1982 photographs. USDA Forest Service General Technical Report INT-158. 117 pp.
MacDonald, L.H.; Smart, A.W.; Wissmar, R.C. 1991. Monitoring guidelines to evaluate effects of forestry activities on streams in the Pacific Northwest and Alaska. EPA/910/9-91-001. U.S. Environmental Protection Agency, Water Division.
United States Department of Agriculture and United States Department of the Interior. 1994a. Record of Decision for amendments to Forest Service and Bureau of Land Management planning documents within the range of the northern spotted owl; Standards and Guidelines for management of habitat for late-successional and old-growth forest related species within the range of the northern spotted owl. U.S. Government Printing Office 1994 - 589-111/00001 Region no. 10.
United States Department of Agriculture and United States Department of the Interior. 1994b. Environmental assessment for the implementation of interim strategies for managing anadromous fish-producing watersheds in eastern Oregon and Washington, Idaho, and portions of California.
The Record of Decision specifies that monitoring will be an important aspect of future land management on federal wildlands in the West. Watershed analysis is intended to provide guidance for designing appropriate monitoring strategies. To design appropriate analyses, we must therefore understand what types of monitoring are needed and what types of information are required to plan them.
10:00 - Logistics - Mike Furniss, Six Rivers N.F.
10:10 - The FEMAT view of monitoring - Gordie Reeves, PNW
10:25 - What the ROD requires - Bob Ziemer, PSW
10:35 - Agency requirements, commitments, and approaches to monitoring (5-minute presentations)
California Department of Forestry and Fire Protection - Andrea Tuttle, CDF
Redwood National Park - Mary Ann Madej, National Biological Service
North Coast Regional Water Quality Control Board - Frank Reichmuth, North Coast Water Board
US Fish and Wildlife Service - Nat
10:55 - Other perspectives on monitoring
11:15 - Monitoring strategies (5-minute presentations)
Implementation monitoring and BMPEP - Mike Furniss, Six Rivers N.F.
Physical indices - Leslie Reid, PSW
Biological indices - Gordie Reeves, PNW
11:15 - Discussion: What's the most successful monitoring program you've seen?
11:55 - Prioritization of topics for afternoon discussion
1. How can you monitor an ecosystem? (14*)
2. What types of information can watershed analysis provide that would be useful for designing monitoring plans? (13*)
3. To what extent can habitat monitoring stand in for population monitoring? (13*)
4. How should we decide what to monitor? (13*)
write-in 1: How can we handle the disconnect between the political need for rapid results and the long time required to discern changes or evaluate adaptive management experiments? (11*)
write-in 2: What is the difference between monitoring, inventory, and research? (10*)
5. Are index species a useful concept in monitoring? (6)
6. How can long-term agency commitment to monitoring be facilitated? (5)
7. How precise is precise enough, and when can we stop? (4)
8. Under what conditions is consistency necessary? How much consistency is enough, and how do we evaluate it? (3)
9. What types of information are needed to design a monitoring plan? (0)
|Mary Arey||Watershed Research and Training Center, Hayfork|
|Bob Barnum||Barnum Timber Company, Eureka|
|Trinda Bedrossian||California Division of Mines and Geology|
|Dave Best||Redwood National Park, Arcata|
|Clay Brandow||California Department of Forestry, Sacramento|
|Bill Brock||US Fish & Wildlife Service, Weaverville|
|John Brooks||Six Rivers National Forest, Stonyford|
|Melissa Bukosky||Redwood Community Action Agency|
|Cal Conklin||Klamath National Forest, Yreka|
|Carolyn Cook||Forest Service, Six Rivers NF, Eureka|
|Brenda Devlin||Forest Service, Gasquet|
|Darla Elswick||HSU Rivers Institute, Arcata|
|Bob Faust||Mendocino National Forest, Willows|
|Fred Fischer||Six Rivers National Forest, Eureka|
|Robert Franklin||Hoopa Valley Tribe Fisheries Dept, Hoopa|
|David Fuller||Bureau of Land Management, Arcata|
|Mike Furniss||Six Rivers National Forest, Eureka|
|John Grunbaum||Klamath National Forest, Happy Camp|
|Jim Harvey||Mendocino National Forest, Upper Lake|
|Steve Hubbard||California Division of Forestry, Fortuna|
|Lamont Jackson||Six Rivers National Forest, Willow Creek|
|Karen Kenfield||Six Rivers National Forest, Eureka|
|Deborah Konnoff||Forest Service Region 6, Portland|
|Dave Lamphear||PSW Redwood Sciences Lab, Arcata|
|Gaylon Lee||State Water Resources Control Board, Sacramento|
|Bill Lydgate||HSU Rivers Institute, Arcata|
|Mary Ann Madej||National Biological Service, Arcata|
|Sungnome Madrone||Redwood Community Action Agency, Eureka|
|Pat Manley||Forest Service Region 5, San Francisco|
|Mike McCain||Forest Service, Gasquet|
|John McKeon||Affiliated Research, Eureka|
|Eddie Mendes||Barnum Timber Company, Eureka|
|Lee Morgan||Mendocino National Forest, Willows|
|Mike Napolitano||Balance Hydrologics, Concord|
|Vicki Ozaki||Redwood National Park, Arcata|
|Doug Parkinson||Fisheries Consultant, Arcata|
|C.J. Ralph||PSW Redwood Sciences Lab, Arcata|
|Gordon Reeves||Pacific Northwest Research Station, Corvallis|
|Frank Reichmuth||N. Coast Reg. Water Qual. Control Brd., Santa Rosa|
|Leslie Reid||PSW Redwood Sciences Lab, Arcata|
|Rema Sadak||Six Rivers National Forest, Eureka|
|Lucy Salazar||Six Rivers National Forest, Eureka|
|Greg Schmidt||Six Rivers National Forest, Eureka|
|Kristin Schmidt||Six Rivers National Forest, Eureka|
|Mark Smith||Six Rivers National Forest, Eureka|
|Bill Trush||Humboldt State University, Arcata|
|Andrea Tuttle||California Dept. of Forestry/State Board of Forestry|
|Katherine Worn||Humboldt Interagency Watershed Analysis Center, McKinleyville|
|Bob Ziemer||PSW Redwood Sciences Lab, Arcata|