Resource monitoring is an increasingly important subject, and management of public lands has become increasingly complex. Growing demand for resources, greater public involvement in management, and issues of species population viability and ecosystem health have all contributed to a need for a better understanding of the land and how it changes over time. Monitoring is necessary to assure the public that management practices have acceptable effects on the ecosystems involved; it helps ensure that actual results fall within the expected range of effects. If they do not, adaptive management decisions can be made to improve the situation.
Ecosystems change over time, with or without human influence. Examples of natural agents of change include climatic fluctuations, wildfire, and windthrow. Human-induced changes result from acid deposition, introduction of exotic species, and alteration of the landscape through logging, farming, and other development. Sound management of ecosystems depends on an ability to understand the effects of natural and human-caused change. Repeated observations over time, properly designed, can separate natural effects from human ones and distinguish effective management practices from less effective or harmful ones. Clearly, the ability to gather this type of information is at the core of land stewardship and ecosystem management.
Resource managers have often seen monitoring efforts fail: data were collected, but no results were produced. As a result, they are reluctant to enter into new monitoring efforts. Such failures are often due to a lack of clearly defined objectives (Noss and Cooperrider 1994). It is common for limited funds to have been spent on monitoring efforts with few meaningful results. Carefully defining objectives, and then carefully matching methods to meet them, can mean the difference between an effective monitoring program and a waste of time and money.
Ecosystem monitoring is not a fully developed science. Much is not yet known about which ecosystem indicators are best, which sampling and plot designs are most cost-effective, and how to analyze the results to provide concrete information upon which to base management decisions. However, research is being conducted on these problems, and experience is being accumulated. Monitoring for management purposes provides an opportunity for the scientist to collaborate with the resource manager to develop an effective monitoring system.
Definition of Monitoring
As noted in the science paper, the term monitoring is used to describe many similar activities. The one element that distinguishes monitoring, as used in this paper, from these other measuring activities is that monitoring has the objective of creating data that are to be compared to an explicit standard. Other measuring activities may, in fact, measure the same things, but they have different objectives. For example, an in-place inventory has the purpose of counting what is present; no particular standard of comparison is applied. A survey is similar to an inventory in this regard. Without attempting to define these other terms (and thus avoiding all sorts of arguments), one can still see both the differences and the similarities between monitoring and these other measurements.
The standard to which monitoring results are compared may be standards or guidelines from a land use plan; desired future conditions; historical conditions, such as presettlement times; baseline information for a given year; or even a range of conditions or years. The standard may be for a specific site, or it may be for an entire watershed or an even larger piece of land, such as a physiographic province or a continent. It may be for specific kinds of land or water, or it may be for all lands in a specific area.
Monitoring is measurement through time that indicates movement toward or away from the objective. Monitoring will provide information about the status and trends of resources or ecosystems, but it should not be used to determine cause and effect; that task is better suited to a research study.
Some key points are that the measurements and evaluation are completed more than once over time; that monitoring is done for a specific purpose--to check on the process or object, or to evaluate the condition or the progress toward a management objective; and that the results will effect an action of some kind, even if that action is to maintain the current management.
Need for Monitoring and Evaluation
Evaluation and monitoring go hand in hand. Monitoring provides the raw data to answer questions, but in and of itself it is a useless and expensive exercise. Evaluation is putting those data to use and thus giving them value. Evaluation is where the learning occurs: questions are answered, recommendations made, and improvements suggested. Yet without monitoring, evaluation would have no foundation, have no raw material to work with, and be limited to the realm of speculation. As the old song says, "you can't have one without the other." A monitoring program should not be designed without clearly knowing how the data and information will be evaluated and put to use. We cannot afford to collect and store data that are not used. Monitoring for monitoring's sake is monitoring that should never be done.
The nation (indeed, the world) needs to know the status of its soil, water, air, plants, and animals, as well as how these are changing over space and time. An assured supply of critical natural resources, healthful environmental conditions, and the capability to predict, understand, and resolve environmental problems are issues of utmost importance. Monitoring programs are needed to support a comprehensive, scientifically based evaluation of the present and future condition of the environment and its ability to sustain present and future populations.
Monitoring and evaluation are essential components for taking an ecological approach to management of natural resources. Since there is much that we do not know about ecosystems and how our management actions will affect them, we must learn as we go. A report by the Ecological Society of America (1996) suggests that management approaches be viewed as hypothetical means to achieve clearly stated operational goals. In testing these hypotheses, monitoring programs should provide critical and timely feedback to managers (see chapter on adaptive management, MT27). Within the framework of ecosystem management, monitoring programs should be designed to determine whether management actions are moving the ecosystem toward desired future conditions and trajectories, i.e., toward goals and expectations. Monitoring is thus a means of checking on progress as well as a tool for improvement. Without it, there is no way of knowing if our management actions are working and how they should be changed to be more effective.
Managers need to understand that the design, development, and maintenance of monitoring and evaluation programs require commitment and long-term vision. In the short term, monitoring and evaluation often represent an additional cost and are particularly difficult to maintain when budgets are tight and where personnel are temporary or insufficient. Yet we must be clear that lack of consistent support for long-term monitoring and evaluation will hinder progressive ecosystem management.
Mandates for Monitoring and Evaluation
Besides the need to do it as part and parcel of sound ecosystem management, Congress has mandated, either implicitly or explicitly, various types and levels of monitoring and evaluation in numerous pieces of legislation. A sample of these statutes (with an admittedly Forest Service perspective) gives an indication of the breadth of this arena. Note that many of these deal with inventories, which, when repeated over time, can be viewed as a type of monitoring (see chapter from MT30).
1. Organic Administration Act of 1897 (Ch. 2, 30 Stat. 11, as amended; 16 U.S.C. 473-475, 477-482, 551). Section 24, which established the National Forests, included provisions for the inventory and management of these lands.
2. Fish and Wildlife Coordination Act of 1934 (Ch. 55, 48 Stat. 401, as amended; 16 U.S.C. 661, 662(a), 662(h), 663(c), 663(f)). This act authorizes surveys and investigations of the wildlife of the public domain lands, including lands and waters, or interests therein, acquired or controlled by any agency of the United States.
3. Bankhead-Jones Farm Tenant Act of 1937 (Ch. 517, 50 Stat. 522, as amended; 7 U.S.C. 1010-1012; 16 U.S.C. 551). In Section 32(e) of this act, the Secretary of Agriculture is authorized to "...conduct surveys and investigations relating to conditions and factors affecting, and the methods of accomplishing most effectively the purposes of this title, and to disseminate information concerning these activities."
4. Clean Water Act of 1948 (P.L. 80-845, 62 Stat. 1155, as amended; P.L. 100-4, as amended; 33 U.S.C. 1251, 1254, 1323, 1324, 1329, 1342, 1344). The act seeks to control and eliminate most point and nonpoint sources of water pollution in the United States. It provides for compliance by federal agencies with standards under the act and with certain State and local requirements. A permit system is used for point sources, and State or areawide plans are used for nonpoint sources. Federal and private agencies within states are required to meet state-established standards, which requires measuring water parameters.
5. Clean Air Act of 1955 (P.L. 84-159, 69 Stat. 322, as amended; P.L. 95-95, 91 Stat. 685, as amended; 42 U.S.C. 7401, 7403, 7410, 7416, 7418, 7470, 7472, 7474, 7475, 7491, 7506, 7602). Provides an affirmative responsibility to federal land managers to protect "air quality related values" in Class I areas (international parks, wilderness areas, national memorial parks, and national parks) (42 U.S.C. 7472, 7475). Sections 162 and 165 require a classification of Federal lands for air quality monitoring.
6. Wilderness Act of 1964 (P.L. 88-577, 78 Stat. 890; 16 U.S.C. 1121 (note), 1131-1136). Section 3 permits the gathering of resource information in wilderness areas.
7. Soil Surveys for Resource Planning and Development Act of 1966 (P.L. 89-560). Clarified the legal authority for the Soil Survey Program of the United States Department of Agriculture by specifying that soil surveys are needed by "...states and other public agencies in connection with community planning and resource development for protecting and improving the quality of the environment, meeting recreational needs, conserving land and water resources, and controlling and reducing pollution from sediment and other pollutants in areas of rapidly changing uses..."
8. National Environmental Policy Act of 1969 (P.L. 91-190, 83 Stat. 852; 42 U.S.C. 4321 (Note), 4321, 4331-4335, 4341-4347). Section 201 calls for annual reports on "the status and condition of the major natural, manmade, or altered environmental classes of the Nation, including...the air,...aquatic,...and terrestrial environments."
9. Endangered Species Act of 1973 (P.L. 93-205, 87 Stat. 884, as amended; 16 U.S.C. 1531-1536, 1538-1540). Section 6 directs each Federal agency to conduct biological assessments for the purpose of identifying any endangered or threatened species.
10. Forest and Rangeland Renewable Resources Planning Act of 1974 (P.L. 93-378, 88 Stat. 476, as amended; 16 U.S.C. 1600 (Note), 1600-1614). Sections 3-7 and 12 require the Secretary of Agriculture to conduct inventories of present and potential renewable resources; utilize information and data available from other Federal, state, and private organizations; and avoid duplication and overlap of resource assessment and program planning efforts.
11. Federal Land Policy and Management Act of 1976 (P.L. 94-579, 90 Stat. 2743, as amended; 43 U.S.C. 1701 (Note), 1701, 1702, 1712, 1714-1717, 1719, 1732b, 1740, 1744, 1745, 1751-1753, 1761, 1763-1771, 1781, 1782; 7 U.S.C. 1212a; 16 U.S.C. 478a, 1338a). This act requires that public lands and their resources be periodically and systematically inventoried and that an evaluation of current natural resource use and values be made of adjacent public and nonpublic land.
12. National Forest Management Act of 1976 (P.L. 94-588, 90 Stat. 2949, as amended; 16 U.S.C. 472a, 476, 500, 513-516, 518, 521b, 528 (Note), 576b, 594-2 (Note), 1600 (Note), 1601 (Note), 1600-1602, 1604, 1606, 1608-1614). Sections 2, 6(f)(3), and 6(g)(2) emphasize the stipulations of the Renewable Resources Planning Act of 1974. The act also requires the Secretary of Agriculture to establish quantitative and qualitative standards and guidelines for land and resource planning and management. Inventories shall include quantitative data making possible the evaluation of diversity in terms of its prior and present condition.
13. Soil and Water Conservation Act of 1977 (P.L. 95-192, 91 Stat. 1407; 16 U.S.C. 2001-2009). Section 5 authorizes the Federal Government to obtain and maintain information on the current status of soil, water, and related resources. The act further requires an integrated system capable of using combinations of resource data to determine the quality and capabilities for alternative uses of the resource base and to identify areas of local, State, and National concern.
14. Forest and Rangeland Renewable Resources Research Act of 1978 (P.L. 95-307, 92 Stat. 353, as amended; 16 U.S.C. 1600 (Note), 1641-1647). Replaced the earlier forestry research legislation and is the current Forest Service mandate for conducting broad-scale resource inventories. In Section 3(a) of this act, the Secretary of Agriculture is authorized "...to obtain, analyze, develop, demonstrate, and disseminate scientific information about protecting, managing, and utilizing forest and rangeland renewable resources in rural, suburban, and urban areas." The act also authorizes determination of the causes of changes in the health and productivity of domestic forest ecosystems and monitoring and evaluation of the effects of atmospheric pollutants on such ecosystems.
15. Cooperative Forestry Assistance Act of 1978 (P.L. 95-313, 92 Stat. 365; 16 U.S.C. 2101 (Note)). Authorizes financial assistance to State Foresters, and to private forestry and other organizations, to monitor forest health and protect the forest lands of the United States. Section 8(b)(1) authorizes the Secretary of Agriculture to conduct surveys to detect and appraise insect infestations, disease conditions, and man-made stresses affecting trees; to establish a monitoring system throughout the forests of the United States to determine detrimental changes or improvements that occur over time; and to report annually concerning such surveys and monitoring.
16. Public Rangelands Improvement Act of 1978 (P.L. 95-514, 92 Stat. 1806; 43 U.S.C. 1752-1753, 1901-1908; 16 U.S.C. 1333(b)). Section 4 directs the Secretary of Agriculture to inventory and identify current public rangelands conditions and trends as part of the inventory process required by Section 201(a) of the Federal Land Policy and Management Act of 1976 (43 U.S.C. 1711) and to keep such inventories current.
17. Energy Security Act of 1980 (P.L. 96-294, 94 Stat. 611; 42 U.S.C. 8801 (Note), 8854, 8855 Sec. 261). This act emphasizes the need for biomass information for energy projects.
18. Forest Ecosystems and Atmospheric Pollution Research Act of 1988 (P.L. 100-521, 102 Stat. 2601; 16 U.S.C. 1680 (Note)). Section 3 directs the Secretary of Agriculture to increase the frequency of forest inventories in matters that relate to atmospheric pollution and to conduct such surveys as are necessary to monitor long-term trends in the health and productivity of domestic forest ecosystems.
In addition to such legislation passed by Congress and signed by the President, there is an accompanying body of literature, the Code of Federal Regulations (CFR), that interprets ambiguous statutory language and translates it into more definitive actions that Federal agencies must take to implement the laws. These regulations, once promulgated, are legally binding. Executive orders (EO) from the President are another form of mandate for Federal agencies. Individual agencies may then develop their own set of instructions or directions as to how they will comply with the CFR and EOs (e.g., the Forest Service Manual and Handbook system). These additional mandates beyond formal legislation or orders are where managers can find specific language regarding monitoring and evaluation.
Need for Credibility and Flexibility
Anyone can produce data and try to impress people with them. But as managers of public lands, our duty and responsibility is to provide the citizens of the U.S. with the best information possible. Credibility with the public is essential. Monitoring data that are collected using the best scientific knowledge, have known precision, are of the highest quality, and are as objective as possible will be viewed as most credible. This is a tall order, yet it provides a most worthy goal. Proper monitoring and evaluation are the means by which public land managers can regain the public trust that seems to have been lost in recent years in many areas.
As a general rule, monitoring programs should be based on accepted, rigorous statistical sampling designs and pay particular attention to issues of precision and bias in data gathering (Ecological Society of America 1996). However, one must admit that true replication of measurements is often impossible, and in some cases sample sizes are necessarily small. Bias in data gathering is often unavoidable owing to patterns of ownership, accessibility of areas, or limited sampling techniques. And it may be that the questions being asked of monitoring data require only a general sense of a resource's status for a small area, in which case a curbside observation of the site may suffice. Managers need to use the correct science and technology for the questions to be answered. But as pointed out by Holling (1978) and Walters (1986), conditions that limit optimal monitoring are no excuse not to establish monitoring programs. Rather, those conditions should be stated explicitly in monitoring documentation and reflected as qualifications in any conclusions regarding the effectiveness of management actions. Thus flexibility is permitted, allowing the type and detail of monitoring to be tailored to the specific situation as long as the consequences are recognized and publicized.
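To illustrate how precision requirements translate into sampling effort, a planner might use the standard sample-size approximation for estimating a mean. This is a hypothetical sketch: the function name, the canopy-cover variable, and all figures below are illustrative assumptions, not values from this paper.

```python
import math

def required_sample_size(stdev, margin, z=1.96):
    # Approximate number of plots needed to estimate a mean within
    # +/- `margin` at roughly 95% confidence (z = 1.96), given a
    # pilot estimate of the standard deviation among plots.
    return math.ceil((z * stdev / margin) ** 2)

# Hypothetical pilot data: canopy cover varies among plots with a
# standard deviation of about 12 percentage points; the manager
# wants the estimate within +/- 5 points.
print(required_sample_size(stdev=12, margin=5))  # 23 plots
```

Note how the cost of precision grows quadratically: halving the acceptable margin roughly quadruples the number of plots, which is exactly the kind of constraint that forces the flexibility discussed above.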
Using the Adaptive Management Process
The process that one follows for monitoring and evaluation can be viewed as an adaptive management process. As stated in the Adaptive Management paper (MT27), a cycle of design, implement, monitor, and evaluate occurs repeatedly over time. In a sense, monitoring and evaluation can be seen as one-half of the adaptive management cycle. In another sense, the four adaptive management components can be used to describe a monitoring and evaluation process, and this is the approach taken here. For our purpose, we will call the design phase "Setting Objectives;" the implement and monitor phases will be combined and covered under "Monitoring Approaches," "Quality Control," "Data Management," and "Data Analysis;" and the evaluate phase will be "Evaluation and Decision Making."
This cycle can be shown as a continual loop (Fig. 1).
Fig. 1. The continual monitoring and evaluation loop.

      SET OBJECTIVES --> MONITOR --> EVALUATE --> MAKE DECISIONS
            ^                                          |
            +------------------------------------------+
An example would be to define the ingredients of a healthy environment, such as a desired plant community, so that it can be recognized when seen. The desired components of the community are identified and can become objectives.
Monitoring gives repeated measurements over time to see if you are sustaining a healthy environment, moving toward it, or moving away from it.
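The idea that repeated measurements reveal a direction of change can be sketched with a simple least-squares trend calculation. This is a hypothetical illustration; the function and the shade data below are assumptions for demonstration, not measurements from this paper.

```python
def trend_slope(values):
    # Ordinary least-squares slope of the values against
    # time steps 0..n-1; sign shows direction of change.
    n = len(values)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(values) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

# Hypothetical annual stream-shade measurements (%), drifting downward.
slope = trend_slope([84, 82, 81, 78, 76])
print(slope)  # negative: the condition is moving away from the objective
```

A single year's reading cannot show this; only the repeated measurements make the downward movement visible, which is the point of monitoring over time.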
Evaluation answers questions such as: Do our monitoring results indicate that we are getting closer to a healthy environment? Do we need to change our objectives? Does our decision need modification? Does our monitoring process need fine-tuning or change, or is it all right? Under this approach, departures from expected conditions or other qualities are not treated as failures, but rather as new information. The new information leads to changes in management. Management changes could be mitigation, change of future actions, revised goals, or some mix of these. In some cases, departures from meeting expectations may not lead to changes in management; in these particular cases, extenuating circumstances may be the accepted explanation.
Finally, decisions are made to achieve the objectives. Adapting to what we have learned may be beyond what we normally call monitoring and evaluation, but it is a key reason for doing them in the first place.
The whole monitoring and evaluation process should be developed as one program; the components should not be developed as you go. With public involvement and well-thought-out objectives, monitoring, and evaluation, the stage is set for making the best possible decisions for sustainable ecosystems.
This process provides feedback on both the monitoring plan and on the original management plan. The assessment of the monitoring system itself must be performed to ensure that it is providing the appropriate kind of information at the right level of detail. If not, then the monitoring protocol must be modified and monitoring continued. If the management objectives were met, then no change is required. If not met, then either the management activities must be modified to meet the objectives, or the objectives themselves must be modified. As with any big undertaking, the first step in monitoring is to set the objectives.
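The evaluation and decision step of this feedback loop (compare each monitoring result to an explicit standard, then recommend an action) might be sketched as follows. This is a minimal, hypothetical sketch: the function, the 80 percent shade target, and the tolerance are illustrative assumptions, not prescriptions from this paper.

```python
def evaluate(measurement, target, tolerance):
    # Compare one monitoring result to the explicit standard and
    # recommend an action, per the adaptive management loop: stay
    # the course if within range, otherwise adapt.
    if abs(measurement - target) <= tolerance:
        return "maintain current management"
    return "adjust management or revise the objective"

# Hypothetical standard: retain stream shade near 80% (+/- 5 points).
for shade in (82, 79, 68):
    print(shade, "->", evaluate(shade, target=80, tolerance=5))
```

The essential feature is that an out-of-range result does not end the process; it feeds back into a decision, after which monitoring continues.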
Once the decision is made that monitoring is needed, then it is natural to want to first decide what attributes will be measured. This impulse must be resisted. Instead, an effective monitoring protocol must be developed by first setting the monitoring objectives.
A successful monitoring effort begins with clearly stating the purposes for which you are monitoring. As we have tried to emphasize, monitoring is a tool designed to yield specific information: the information needed to direct ecosystem management to achieve desired outcomes. This is the information needed to detect magnitude and duration of changes in conditions; information needed to give "early warning;" and some of the information used to formulate hypotheses as to the causes of these changes.
The primary purpose of monitoring is to collect information with which to assess and guide management decisions (The Nature Conservancy). Thus, monitoring is an integral part of resource management. Good management decisions require good information. Too little information, or the wrong set of observations, can result in incorrect conclusions. Too much information results in wasted time and money. The amount and kind of information must be tailored to the management objectives.
Broad monitoring objectives must follow from the management objectives for the study area. However, some underlying monitoring objectives (The Nature Conservancy) are to:
Since it is impossible and unnecessary to monitor all natural resources, ecological processes, and environmental stressors, monitoring objectives must be realistic and attainable (i.e., cost-effective and achievable in a defined timeframe). Therefore, these objectives should focus on solving the problem or issue and be developed in such a way that decisions can be made pertaining to them. Objectives should not be broad, catchall statements. Objectives will often be phrased as questions to be answered or addressed during evaluation. They should be developed with the involvement of decision makers, users, and members of the general public who have indicated an interest.
A method of evaluation should be developed while setting the objective. If the objective cannot be analyzed or evaluated, managers cannot make wise, informed decisions.
While in the process of setting objectives, a reality check is required. Time, expertise, and money are limited. Thus, a realistic monitoring system must be developed. If the monitoring objectives cannot be met within these constraints, then the objectives must be modified. The monitoring planner must have some notion of the kinds of expertise that will be required in order to identify potential personnel to perform the work on the various steps. This will enable the planner to identify any constraints. Time may also be a constraint if the ecosystem is in immediate need of mitigation efforts. Similarly, the planner must determine costs and other administrative/logistic constraints, such as numbers of vehicles available, equipment needed, and time required to obtain official approval for the plan.
Here is a list of questions one should ask while developing a set of objectives:
There are many different types of monitoring (see "Monitoring Approaches" section for more detail). The objectives will be different for these various types. For example, the USDA Forest Service (1987) and the USDI Bureau of Land Management (through their individual Resource Management Plans) describe three types of monitoring based on the purposes of the monitoring. Three such purposes are recognized:
EXAMPLE: The Little North Fork Late Successional Reserve, located in the western portion of the Klamath Basin in northwestern California, has as its overarching goal "to provide critical habitat for the northern spotted owl." The significance of the role of fire led the District interdisciplinary team to make the implementation goal "reduce the risk of future large-scale disturbances while maintaining and promoting late-successional forest conditions." Standards and guidelines are given in the Klamath National Forest Land Management Plan. The key underlying assumption is, "if we build it, they will come." That is, "if the habitat is provided, certain species will be stable and well-distributed." Accordingly, the following monitoring is proposed to address those information needs.
Implementation monitoring: This monitoring involves determining both the amount of activity and compliance with the plan's standards. The question regarding amounts of activities is addressed for the entire area rather than for an individual activity. The question posed is, "if the forest plan identified proposed and/or probable activities, were these implemented?" This sort of monitoring will be done annually to determine whether the planned projects and activities were completed; that information will then be used to better interpret the "effectiveness" monitoring results.
Effectiveness monitoring: This monitoring evaluates both areas and individual projects and activities to "assess their effectiveness in meeting desired results and meeting the purpose and need for which they were established." For example, for fuel reduction projects in this late-successional reserve management area, some questions asked are, "Did the project create or leave the desired results of reducing fuel conditions to a point of flame length less than four feet and rate of spread less than 20 chains per hour?" And, for thinning projects, "did the activities provide for species diversity, including hardwoods? Were desired fuel levels achieved following the thinning operations?"
So, the monitoring they chose consists of such things as, "Determine the percent of capable ground currently in late and old growth habitat characteristics. Assess the risk from a large disturbance." And "assess connectivity between late-successional and old growth stands within the late-successional reserve." These are monitored every 10 years. One-time monitoring is proposed for individual projects, such as determining the "effectiveness of prescribed burns in reducing wildfire effects and meeting resource objectives."
Validation monitoring: This monitoring determines whether key assumptions made in the plan are valid. One validation monitoring question is, "do the demographic data show favorable or stable northern spotted owl population growth in areas with habitat that has been improved to a condition similar to the desired conditions expressed in the plan for the Little North Fork area?"
To be meaningful and useful, monitoring objectives should: (1) state fully what the worker intends to accomplish, and (2) specify a recognizable end point so that progress or attainment can be determined. Thus, it is important that monitoring objectives be clearly stated and use unambiguous wording to ensure that there is no question about what is being measured or monitored. Good monitoring objective statements provide a yardstick against which progress can be evaluated and are often stated in quantitative terms.
EXAMPLE: The objective, "To monitor the relationship between black bullheads and channel catfish," fits none of the above guidelines. The intent of the monitoring effort is uncertain, and there is no clearly defined way to measure progress. In contrast, let's examine how monitoring objectives can be developed to address the different types of monitoring, adapted from McCammon (???). In this example, managers wish to maintain suitable water temperature levels for trout populations by retaining 80 percent of existing shade over the streams. To accomplish that goal, they have developed a management prescription which calls for no timber harvesting within 50 feet of any perennial stream. Suitable monitoring objectives for the various types of monitoring described above might be developed from the following questions:
Implementation Monitoring: Are the streamside management areas clearly marked and protected?
Effectiveness Monitoring: Does the prescribed management action provide 80 percent shade?
Validation Monitoring: Is 80 percent shade adequate to maintain stream water temperature at the desired level?
Another example demonstrates setting an overall objective, more specific objectives, and then relevant questions to be addressed during evaluation.
In the spring of 1995, researchers from the University of Wisconsin-Stevens Point launched a 4-year experiment by releasing 25 elk on the Chequamegon National Forest. This controversial project is designed to determine the effects and feasibility of an actual reintroduction of the species. A white cedar swamp located very near the elk release site was selected to monitor the effects of elk on vegetation. An established elk herd in nearby Michigan utilizes white cedar swamps extensively in winter and depends on white cedar for browse and thermal cover. In addition, white cedar swamps are noted as "hot spots" for orchids, some of which are quite rare; these orchids may be susceptible to herbivory by elk. The cedar swamp selected for monitoring is bisected by a 75-ft-wide open corridor containing an Extremely Low Frequency (ELF) antenna used by the US Navy for communication with submarines. The elk researchers hypothesize that the ELF line corridor, which is maintained in an open condition (dominated by upland brush, alder thicket, and sedge meadow communities), will provide the grassland habitat elk depend on in the spring and fall. Thus, it is postulated that this corridor will be heavily used by the elk.
The McCarthy Lake and Cedars Research Natural Area is located on the same Ecological Land Type (Ground Moraine Washed Till-Outwash Complex) as the "ELF Line Cedars" and is dominated by a comparable white cedar swamp. The McCarthy RNA will serve as a control or reference area for the monitoring efforts in the ELF Line Cedars. This cedar swamp is characterized as "old second growth." It is the largest of its type known from the Chequamegon (approximately 115 acres). Cedars are mostly 5-10" d.b.h., with occasional trees exceeding 20". The cedar swamp has recovered well from past disturbance (probably selective cutting near the turn of the century). White cedar seedlings are frequent on fallen, rotted tree trunks.
The overall objective of this monitoring program is to determine whether a small herd of elk and the ELF Line corridor are a threat to the natural dynamics of a white cedar swamp. More specific objectives are to 1) detect compositional and structural changes of the vegetation that may be attributed to the elk or the corridor and 2) detect changes in the breeding bird community resulting from any such vegetation changes. It is hoped that this monitoring effort will provide answers to the following questions:
Another example illustrates these important first steps: tightly defining the monitoring questions and basing them on the resource goals and on conceptual models of how we think the ecosystem works.
The recent, though long-standing, controversy surrounding the northern spotted owl in the Pacific Northwest led to a scientific study (Forest Ecosystem Management Assessment Team 1993) and an environmental impact statement (Espy and Babbitt 1994) based on that study. The objectives were stated specifically in terms of habitat and other elements.
For example, one goal is to provide for maintenance and/or restoration of habitat conditions for the northern spotted owl and the marbled murrelet that will provide for viability of each species. Monitoring of the owl's habitat is based on the conceptual model of the owl's habitat needs. The Forest Ecosystem Management Assessment Team (1993) examined optional ways to meet the habitat goals. These options were evaluated against the "likelihood of achieving certain outcomes." These likelihoods were based on the conceptual models of how the researchers believe the owl and the marbled murrelet behave and relate to habitat conditions.
The monitoring proposed by the planners in the Record of Decision uses the specific criteria that were applied to evaluate the likelihood of achieving certain outcomes. Owl habitat monitoring includes measuring the size, location, arrangement, and amount of late-successional forest for each physiographic province in the owl's territory. These were the stated variables the scientists examined to evaluate owl habitat. In turn, monitoring these variables should allow us to answer the question, "How well is owl habitat being provided?" Note that an underlying concept here is "if we build it, they will come": that is, if the habitat is provided, the owl population will be viable. Validation monitoring and research examine the validity of that concept.
Effective monitoring programs must be based upon goals and objectives. Monitoring objectives often relate to the major threats and issues managers must deal with in resource management. Therefore, establishing monitoring objectives may require an effort to define what those threats and issues are. There are various ways such an assessment could be carried out. The following example describes a process that was used successfully by the National Park Service (1988) to identify a number of the specific threats and issues impacting natural ecosystems throughout the Service, and how those threats were linked to major components of each park's ecosystem. Since that time, specific monitoring objectives have been formulated, and inventory and monitoring programs have been initiated in many of the parks to deal with the threats and issues.
EXAMPLE: To begin the assessment process of the myriad resource threats and issues confronting park managers, parks were asked to classify their resources into one of three different categories: "primary," "secondary," or "other." "Primary resources" were defined as resources specifically mentioned in the park's enabling legislation or proclamation; those central to the existence of the park or for which the park has become known; rare, threatened, or endangered species or officially designated critical habitat; or those resources which possessed outstanding aesthetic qualities. "Secondary resources" were defined as resources of value to the park as a whole, but not meeting the criteria for a primary resource. "Other resources" were defined as resources not contributing significantly to the value of the park.
The parks were then asked to complete a Threats Questionnaire for each source of a threat affecting the park's primary, secondary, and/or other natural resources now or in the foreseeable future. Among other things, for each threat source reported, parks were asked to indicate: 1) the resource categories affected by the threat source, 2) the impact level (severe, moderate, or low) of the threat source, 3) location of the threat, and 4) whether the effects of the threat source on the resource would be increasing, diminishing, or staying the same over the next five years.
The term "threat" was defined as a negative impact to park resources, values, and purposes; to park management objectives; or to visitor experience. The severity of a threat to resources in a subcategory was measured in terms of how visitor enjoyment of the resources would be affected, how the integrity of the resources would be affected, or how long the resources would be affected. The threat impacts were evaluated as follows:
Severe: The resource's value to the visitor will be lost for a generation (25 years) or more; or
The resource will be lost entirely or the ecosystem will be impaired to the extent that its normal functioning is disrupted beyond recovery or its recovery will not be possible within this generation (25 years).
Moderate: Visitors will be able to experience the resource's values, but their enjoyment will be reduced or the intended experience will be changed; or
The resource or ecosystem will be impaired, but will continue to exist and will be able to recover its original values within this generation (5 to 25 years).
Low: Visitors will probably not notice the change in the experience; or
The resource or ecosystem will be affected, but not impaired; or
The resources will recover within the near future (less than 5 years).
Two park-specific examples of how this process was used to assess and characterize environmental threats and issues are as follows. Note that in both examples, the assessment identified the source of the threat, its severity, and the components of the park ecosystem affected.
Everglades National Park: Degradation of the quality of water delivered to the park from upstream sources is severely affecting several park resources, including fresh surface water, ground water, salt surface water, animals, and plants. High levels of herbicides, pesticides, and fertilizers coming off private agricultural lands to the east and north of the park are causing algal blooms, loss of native algae, and alterations in the structure of aquatic communities.
Voyageurs National Park: Regulation of lake levels in four major lakes for power generation and flood control is having severe impacts on native fish stocks, water birds, aquatic plants, and animals dependent on shore and marsh environments.
Once assessments of specific resource threats and issues were completed, park managers were then in a position to begin the formulation of specific objectives to evaluate the effectiveness of their mitigation activities.
This section addresses "how to" deal with monitoring with liberal use of examples. We hope the reader understands there are many ways to accomplish monitoring. That is why general concepts, considerations and examples are given here. The reader is reminded that in this business, "one size seldom fits all."
Once a monitoring method has been agreed upon, it should be used throughout the life of the project; it is very difficult to make comparisons when the process changes halfway through. Again, it must be emphasized that the public should be part of developing the monitoring plan. They are a source of valuable information.
The monitoring approach should be field tested to ensure that the objectives are being quantified. If monitoring is not helping you identify where you stand relative to your objectives, modifications need to be made. Monitoring is not an end in itself; it is a continuing process, not one that ends.
The scope of the project has bearing on the approach to take. There is a continuum of designs ranging from pure single function efforts (e.g., timber) to linked-functional efforts (e.g., soil and water) to truly integrated efforts where data of all types are collected at the same locations using the same sampling design. With the movement toward holistic, ecosystem approaches to management, the trend has been away from functional efforts and towards integrated efforts. Are your monitoring needs focused on a specific ecosystem component or do they require information on multiple components? And if not now, will that change in the near future? A design that provides flexibility to be used for more than one narrow purpose may be worth pursuing.
Now that you have addressed all of these concerns and are ready to actually pick a sample design, be aware that there are usually many choices and that no one of them may be clearly superior; that is, there may be several equally satisfactory approaches. While we discuss some here, this is not a definitive listing. More information can be found in the sample design section of the chapter on topic 30, Data Management, Collection, and Inventory. The references provided there are excellent sources for monitoring designs as well.
Agencies are involved in hundreds of types of monitoring and evaluation activities. These can be categorized in numerous ways, including
It is not possible in this paper to describe all of these types and the various ways they can be conducted. That would require several volumes alone. So we have selected a few to discuss in hope that this will illustrate some of the issues and considerations that are shared in all the approaches available to managers. Let's look at the three types of monitoring that the Forest Service and the Bureau of Land Management are using in the area of natural resource planning.
One purpose of monitoring is to provide information about whether we are doing what we said we were going to do. This type of monitoring, labeled "implementation monitoring," has two components: a "how much" question and a "quality" question. First, and often overlooked in importance, is the need to evaluate whether or not projected or planned activities took place, and whether or not they took place where and when the projections and/or plan said they would. The significance of this monitoring relates to evaluating how well goals and expectations are being met; incorrect interpretations can be made without this information. Second, implementation monitoring will provide information regarding the quality of any actions that do take place. The Environmental Protection Agency labels this "compliance" monitoring. Here, one looks at whether or not the standards and guidelines are being followed.
A second type of monitoring has the purpose of providing information about how well the goals and expectations are being met. The "planned actions" and the "desired goals and other expectations" are the references to which monitoring data are compared. The comparisons, and the conclusions reached from making them, are the all-important evaluation step. For example, on the Huron-Manistee National Forest, management direction for the Newago Prairies Research Natural Area (RNA) includes closing roads to prevent vehicular traffic into the RNA and placing signs to post the RNA off-limits to ORVs. Signs were posted, and piles of stumps and roots were placed along access points to restrict access. Effectiveness monitoring would examine road access points, count recent vehicle tracks within the RNA, or use some other means to determine whether the road closures and posted signs had indeed been effective at keeping vehicles out.
The third type of monitoring has the purpose of determining whether the underlying assumptions made in estimating outcomes are valid. Although this borders on research, the answers are of utmost importance both to management and to our understanding of ecosystems. Because the questions are research questions, the design, data collection, and evaluation are normally done in close cooperation with researchers. An example comes from the Allegheny National Forest in Pennsylvania. The Forest Plan had assumed that management strategies for the Allegheny hardwood forest type would work in northern hardwood and upland hardwood forest types and would require similar lead time. Recent monitoring indicates these assumptions are incorrect. Data show that these types are experiencing many regeneration difficulties, which in turn has led to a revision in the number of acres suitable for harvest. Evaluation suggests the need to lower the earlier prediction for timber production from the forest.
Another type of monitoring, stewardship monitoring, checks on the health and maintenance of protected areas. The primary question asked is, "Is the management of the area ensuring or maintaining the values of the area?" An example comes from the Research Natural Area (RNA) program administered by the U.S. Forest Service. Several regions of the agency have developed standardized approaches for checking and reporting on site conditions and the management status of individual RNAs. Problem areas and potential threats to the values of the RNAs can be identified and solutions offered. For example, documentation of heavy dispersed recreational use of campsites at the Battle Point RNA prompted the Chippewa National Forest to increase monitoring to assess impacts, make visitor contacts to collect comments and educate visitors about management concerns, and provide signing on potential restrictions on use.
Among the many considerations that go into the design of an effective monitoring program is the determination of the appropriate spatial and temporal scales with which to apply monitoring activities. These spatial and temporal scales are a function of the natural variability found in the system.
All indicators of ecosystem health considered for use in a monitoring program exhibit some degree of spatial and temporal variability. Temporal variability in natural systems occurs on a variety of scales and may be influenced by season, macroclimate, microclimate, natural succession, natural disturbance patterns, or other factors. Many water quality parameters, for example, generally exhibit a short-term "pulse" related to snowmelt, and the exact timing of this pulse is often difficult to predict. As a result, water quality may show substantial natural variation over a period of a very few days. Many other commonly used ecological monitoring parameters vary on a fairly regular seasonal basis. For example, nutrient concentrations in hardwood tree foliage change dramatically during the growing season.
The objectives will help determine the frequency of sampling and the permanency of sample locations. For example, to determine the dynamics of growth and mortality in old-growth forests on the Pisgah National Forest, permanent plots have been established and are being sampled at 5-year intervals. Between-year variability can be substantial and should also be considered in the sampling design.
The design of a monitoring program will build upon the goals and objectives of the manager. This is where one fits the appropriate science and technology to the question being asked. In many cases, this means that a scientifically sound sampling design is required to provide the necessary information. But as mentioned in the science paper on this topic, a poorly designed monitoring program can be worse than none at all. The four-step process outlined there means addressing these questions:
Answers to these questions will assist the manager in selecting the appropriate sample design, including the number of samples to take and how often to repeat the data collection. A well-designed sampling scheme yields the most information at the least cost, and data quality is generally much higher because we are forced to systematize the data collection techniques.
Monitoring objectives or questions identify the information needs. From the information needs, one can identify possible attributes or indicators. Some of these are qualitative, such as condition, and some are quantitative, or measured. Some are map-based and others are ground-based. When identifying attributes, consider what different levels of intensity will gain in information, or how many times an attribute must be remeasured to give you the information you are seeking (seasonally, change over time, comparison between two areas). For example, one breeding bird survey will give you one list of birds; repeated surveys will be needed to establish sampling variability, trends over time, and so on.
Some indicators are observed directly; others combine one or more observed attributes, as in an index. Identify indicators that address the problems of concern. Considerations are:
Often the list of desired attributes grows longer than the time and cost constraints allow. It may be necessary at this point to pare down the list of attributes to a manageable level.
After setting the monitoring and evaluation objectives and identifying data needs, the next step is not to rush out and begin collecting new data. That is a costly fault that has plagued agencies for years and cannot be justified. The first implementation step is to conduct a search for existing data that may be useful for your purposes. This provides the opportunity to identify previous successes and failures locally, to determine local variability for sample size determination, and to identify alternative measures from the literature. Has another part of your agency, or another agency, already collected the data? It may not be in exactly the form you need, but data manipulation is usually much less expensive than setting up a new system and collecting data from scratch. However, avoid using prior data that do not meet the objectives. If the data do not exist, are similar data being collected elsewhere (either within your agency or outside it)? If so, check out the sampling design and quality assurance protocols and adopt them if they meet your needs. This is particularly important when monitoring data must be aggregated to answer questions at a broader spatial level (say, regional or national) than may be needed for your immediate concerns. If the data do not exist, and if no one else is collecting similar data elsewhere, then it is appropriate to design a monitoring system for your needs.
As Federal agencies standardize their data in conformance with Executive Order 12906 (U.S. Executive Office of the President 1994), managers will be able to query the National Spatial Data Infrastructure clearinghouse as one source of data and data bases. Other places to look are other units within your agency that are confronted with similar issues; archives; existing studies from all sources, such as universities, private organizations, and other government agencies; and literature searches.
Then one can address the monitoring program design; that is, how will the data be collected, including sample frames, methods, reporting protocols, etc. Much of the following discussion was adapted from Tyrrell et al. (1996), which is an excellent reference guide for managers seeking to install scientifically sound monitoring and evaluation systems.
The first step in any sampling design is to define the population of interest, which is typically the study area. However, if inference is to be made for similar areas in the region, then the population of interest is the aggregation of those areas. Knowing the population of interest helps to define the sampling frame: the sample area, or the set of all possible sample locations. For example, if you are interested in the permanent residents of a town, you could use the phone book. This sample frame is not perfect: some people have no phones, some have unlisted numbers, and people move in and out of town. Nevertheless, if we are interested in most of the people residing in the town, the phone book gives us that. If the questions are about the extremes, the phone book may not be adequate.
The sampling objectives are typically stated in terms of estimating some population value within a specified level of precision, such as estimating the number of black walnut seedlings per hectare, plus or minus 10% at the 95% confidence level. This forms a 95% confidence interval with a range of (Y - 0.1Y, Y + 0.1Y), where Y is the estimated mean. This means that if the sample were repeated a number of times, 95% of the confidence intervals constructed in this way would be expected to include the true mean. For a given sampling design, small confidence intervals reduce the risk in making management decisions, but require large sample sizes. Before the sampling design and sample sizes can be determined, the sampling methods and sampling unit (plot) design must be specified.
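As an illustration only (the numbers below are hypothetical, not from any survey described here), the confidence interval arithmetic can be sketched in a few lines of Python:

```python
import math

def confidence_interval(mean, std_dev, n, t_value=1.96):
    """Two-sided confidence interval for a sample mean.

    t_value of about 1.96 approximates the 95% level for large n;
    for small samples, use the t-distribution with n - 1 degrees
    of freedom instead.
    """
    half_width = t_value * std_dev / math.sqrt(n)
    return mean - half_width, mean + half_width

# Hypothetical survey: mean of 120 seedlings/ha from 40 plots,
# with a per-plot standard deviation of 35
lo, hi = confidence_interval(120.0, 35.0, 40)
```

The half-width shrinks only with the square root of the sample size, which is why tightening the interval quickly becomes expensive.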
Conceptually, the population is divided into all possible sampling units. In order to make comparisons and to estimate precision, sampling units must be of fixed size and shape. The sampling design is used to select a probabilistic sample of the sampling units. As a result, statistical estimates of population attributes can be produced with an estimate of their reliability. If the sampling units are located subjectively, or the size of the sampling units is altered by the field crews, then no estimates of precision can be produced. Estimates of unknown reliability are of little value.
Different sampling units or plot designs are often used for different ecosystem components. Many, if not most, components can be sampled with plots which cover a fixed area of ground.
The sampling design specifies how the sampling units are selected. Many possible sampling designs exist, but simplicity is important, particularly for long-term monitoring. Recommended approaches include systematic sampling, simple random sampling, and stratified random sampling (Cochran 1977). Systematic sampling distributes the sampling units in a fixed manner, usually as a grid, across the sampling frame, so the plots are evenly distributed. Technically, this means that the precision cannot be computed exactly, but experience indicates that this is not a problem. With simple random sampling, plots are located randomly across the sampling frame; this permits simple computation of both the estimates and their precision. Finally, stratified random sampling uses maps or aerial photography to divide the population into strata of known area. Simple random sampling is conducted within each stratum, and the strata estimates are then combined into a single estimate for the population. Results are almost always more precise with stratified random sampling, but it requires classification and good maps. The estimators and their variances are the same for systematic and simple random sampling; they are somewhat more complicated for stratified random sampling (Cochran 1977).
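The stratified estimator described above can be sketched as follows; the strata and values are hypothetical, and the formulas assume simple random sampling within each stratum (Cochran 1977):

```python
def stratified_estimate(strata):
    """Combine per-stratum sample statistics into a population estimate.

    strata: list of (stratum_area, sample_mean, sample_variance, n).
    Returns the area-weighted mean and its estimated variance,
    assuming simple random sampling within each stratum.
    """
    total_area = sum(area for area, _, _, _ in strata)
    mean = sum(area * m for area, m, _, _ in strata) / total_area
    # variance of the weighted mean: sum of (weight^2 * s^2 / n)
    var = sum((area / total_area) ** 2 * s2 / n
              for area, _, s2, n in strata)
    return mean, var

# Hypothetical example: two strata mapped from aerial photography
mean, var = stratified_estimate([
    (60.0, 10.0, 4.0, 5),   # area, sample mean, sample variance, n
    (40.0, 20.0, 9.0, 5),
])
```

Because each stratum contributes in proportion to its mapped area, a large stratum sampled poorly can dominate the variance, which is one reason stratification pays off only with good maps.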
Based on the sampling unit and the sampling design, the sample sizes can be computed to achieve the desired precision level or to meet the specified cost. Sample size computations depend on the attribute's variability for the given plot design, the confidence interval width, the confidence level (1 - alpha; e.g., 95%), and the sampling design itself (see Tyrrell et al. 1996 for formulas and more detail). Based on the sample size calculations, the survey cost constraints should be revisited; if necessary, adjust the objectives, the constraints, or the precision targets.
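The standard sample size formula for simple random sampling, with an infinite-population approximation and hypothetical numbers, looks like this:

```python
import math

def sample_size(std_dev, allowable_error, t_value=1.96):
    """Plots needed so the confidence interval half-width is at most
    allowable_error: n = (t * s / E)^2, rounded up.
    Infinite-population approximation; for small populations a
    finite-population correction would reduce n."""
    return math.ceil((t_value * std_dev / allowable_error) ** 2)

# Hypothetical attribute: standard deviation 35 per plot
n_wide = sample_size(35.0, allowable_error=12.0)
n_tight = sample_size(35.0, allowable_error=6.0)
```

Halving the allowable error roughly quadruples the required sample size, which is why precision targets should be set with the budget in hand.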
The objective for most monitoring efforts is to be able either to detect a change or to determine whether the current value is within some specified range. This is why the precision objectives are stated as confidence intervals. As stated, the alpha level specifies the probability of detecting a difference when no difference exists, which is referred to as a Type I error. This probability is often set at 0.05, which translates to a 95% confidence level.
The other type of error is the probability of not detecting a difference when one exists. This error, referred to as a Type II error, is represented by the beta-value. The power of the test equals (1 - beta); this is a measure of the design's ability to detect real change. Assessing the power of a design is often overlooked, yet it is often critical to the monitoring system (Fairweather 1990). Unfortunately, low alpha-values result in high beta-values, and vice versa, so there is a trade-off. Beta-values can also be difficult to compute, so statistical software must be used to compute them based on the magnitude of the change to be detected, the variance, and the alpha-value (Taylor and Gerrodette 1993, Goldstein 1989).
It is always possible to increase the intensity of the monitoring by adding observations until some statistically significant change is observed. However, this change may not be meaningful. The definition of meaningful change is highly dependent on the identification and quantification of natural sources of temporal and spatial variability. The sources and degrees of variability must be established before the power of statistical analyses needed to detect meaningful levels of change can be determined (Cohen 1988).
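For the simplest case, a two-sided z-test on a mean, the power calculation the software performs can be sketched directly; the shift, standard deviation, and sample sizes below are hypothetical:

```python
from statistics import NormalDist

def z_test_power(delta, std_dev, n, alpha=0.05):
    """Power (1 - beta) of a two-sided z-test to detect a true shift
    of `delta` in the mean, given the per-plot standard deviation
    and n plots. Assumes a known variance; t-based calculations
    would give slightly lower power for small n."""
    z = NormalDist()
    crit = z.inv_cdf(1 - alpha / 2)          # critical value, e.g. 1.96
    shift = delta / (std_dev / n ** 0.5)     # shift in standard-error units
    # probability the test statistic falls outside the acceptance region
    return (1 - z.cdf(crit - shift)) + z.cdf(-crit - shift)

# Hypothetical goal: detect a 5-unit change with per-plot sd of 10
power_25 = z_test_power(5.0, 10.0, n=25)
power_50 = z_test_power(5.0, 10.0, n=50)
```

Running the calculation at the planning stage shows directly how much power is bought by each additional plot, and confirms the alpha-beta trade-off: with no true change, the "power" collapses to alpha itself.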
The next step is to plan the data collection and field work. Once the attributes, sampling unit, and sample size have been decided, then the sampling unit locations can be determined and located on maps, photos, or using coordinates. The details of the monitoring must be worked out, and the field manual written. Provision should also be made for sample handling and storage, as well as for data handling and storage. The resources needed to complete the survey must also be coordinated.
The primary resource is the people to collect the data, both in the field and from existing sources, such as databases, maps, and geographic information systems. They must have the expertise needed or be easily trained to collect accurate data. Second, they must have the equipment suited to the task. This includes measuring devices and instruments, as well as personal equipment such as hardhats and vests. Third, they must have maps, aerial photos, a field manual, and other background information. Finally, they must have tally sheets and/or portable data recorders. Other aspects of planning include obtaining the necessary permits.
The data collectors must be trained on the specifics of the monitoring methods. Training is another step that is often overlooked, in the interest of saving time and due to the false impression that training is not needed. Even experienced crews refine their skills with each training session, and training helps ensure that data are collected consistently between crews. It also provides an opportunity to raise questions and to provide feedback to the survey planner. The training session should be carefully planned; this increases the likelihood that the data will be collected carefully.
Now you are ready to locate and establish the sampling units in the field and to collect the data. While this will usually be done on paper tally sheets or with portable data recorders, the survey planner may also need to provide for the collection of sample material, such as soils, vegetation, and insects. Collection bags must be provided to the crews, and appropriate storage locations must be provided back at the office. Some samples, like soils, may require refrigeration or immediate analysis. It is important that a clear labeling system be used so that material can be tracked and related back to the site from which it was taken.
Sampling approaches vary by the type of geographic distribution. For those kinds of things that occur all over the landscape, a grid technique is usually best.
For example, sampling vegetation can use points that occur where north-south lines intersect east-west lines, such as the starting points for townships under the township-and-range coordinate system, or UTM coordinates. If a higher density is needed, the points of intersection along section lines or subdivisions of the UTM grid could be used. The Forest Service combines a grid system to locate points with a plot system to complete the area identification. A 3.4-mile spacing is used for the national monitoring questions. This density of points is increased by halving the distance to 1.7 miles to answer many forest health questions on a national forest basis. This could be halved again to 0.85 miles if the spatial unit is a ranger district or major watershed. This sort of approach allows for the compilation of data up and down spatial scales, as long as common data collection protocols are used.
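A minimal sketch of such a nested grid follows; the helper and its area are hypothetical, and distances are expressed in tenths of a mile so the arithmetic stays exact:

```python
def grid_points(width, height, spacing):
    """Systematic grid of sample points over a width-by-height area;
    all arguments in the same integer distance units."""
    xs = range(0, width + 1, spacing)
    ys = range(0, height + 1, spacing)
    return [(x, y) for x in xs for y in ys]

# A 17-mile square (170 tenths of a mile) sampled at a 3.4-mile
# spacing (34 tenths), then at half that spacing, as in the
# national-to-forest intensification described above.
coarse = grid_points(170, 170, 34)
fine = grid_points(170, 170, 17)
```

Halving the spacing roughly quadruples the number of points, so each step down in spatial scale carries a corresponding jump in field cost; the payoff is that the coarse grid is a strict subset of the fine one, so data compile cleanly up and down scales.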
For those geographic distributions that are linear, such as streams or streamside riparian areas, a different sample frame must be used.
The Pacific State Marine Fisheries Commission keeps a list of all stream reaches within their area of interest. The definition of a reach is agreed upon; in this case, the point where two streams merge marks the boundary between reaches on all perennial streams. Random samples of stream reaches could be drawn until the appropriate sample size is reached. Alternatively, one could decide on a 10 percent sample and, beginning with one stream at one geographic end of the area of interest, count to the tenth reach, then continue to the twentieth, and so forth. One could stratify streams by characteristics such as anadromous or not, granite-controlled or not, or "used for irrigation" or not. The principles are the same.
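The every-tenth-reach alternative is a systematic sample from an ordered list, and can be sketched as follows; the reach inventory and its naming are hypothetical:

```python
import random

def systematic_sample(reaches, interval, start=None):
    """Every interval-th reach from an ordered list, beginning at a
    random starting reach; interval=10 gives a 10 percent sample."""
    if start is None:
        start = random.randrange(interval)  # random start keeps it probabilistic
    return reaches[start::interval]

# Hypothetical inventory of 100 numbered reaches
reaches = [f"reach-{i:03d}" for i in range(1, 101)]
sample = systematic_sample(reaches, 10, start=4)
```

The random starting reach is what keeps the sample probabilistic; a fixed start would make the selection subjective in the sense discussed earlier, and no estimate of precision could be defended.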
Some questions have to do with the geographic distributions themselves. Examples include densities, degrees of connectedness (or, conversely, degrees of fragmentation), and other factors relating to sizes, shapes, and other pattern-related characteristics. Monitoring to answer these questions takes a whole different design. Density (number of items per unit area) can be determined by counting the number of individuals within the plot, or by using a distance method, such as the point-centered quarter method. Cover (proportion of the plot covered) can be assessed using a series of points or lines, ocular estimation, quadrats, line transects, or photography. Frequency (proportion of plots on which something occurs) can be assessed using plots or nested plots. Biomass can be assessed using clipped plots or through the use of biomass equations or tables that relate biomass to more easily measured attributes, such as tree diameter. Spatial patterns of populations and communities can be assessed by mapping individual or community locations using compass and tape, global positioning systems, or aerial photographs. A good overview of all of these methods is given in Chambers and Brown (1983).
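The point-centered quarter estimate mentioned above reduces to a one-line formula once the quarter distances are in hand; the distances below are hypothetical:

```python
def pcq_density(distances):
    """Point-centered quarter density estimate (after Cottam and
    Curtis): density = 1 / (mean distance)**2, using one point-to-
    nearest-plant distance per quarter at each sample point.
    Result is plants per squared distance unit (e.g., per m**2
    if distances are in meters)."""
    mean_d = sum(distances) / len(distances)
    return 1.0 / mean_d ** 2

# Hypothetical distances (m) from two sample points, four quarters each
density = pcq_density([1.8, 2.2, 2.0, 2.0, 2.1, 1.9, 2.0, 2.0])
```

Because no plot boundaries need to be laid out, distance methods like this can be much faster in the field than fixed-area plots, at the cost of assuming the individuals are randomly dispersed.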
One example deals with the question of monitoring fragmentation of habitat. The "late successional/old growth" habitat so much in demand in the Pacific Northwest is thought to be fragmented. That fragmentation can be measured from aerial photography or satellite imagery. Common geographic statistical techniques, such as nearest neighbor analysis, allow repeatable measurements. In fact, with that particular technique, many of the common GIS packages (e.g., Harvard Grid) have the means to calculate a fragmentation characteristic.
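One common form of nearest neighbor analysis, the Clark-Evans index, can be sketched as below; the point pattern and area are hypothetical, and edge-effect corrections used by GIS packages are omitted for brevity:

```python
import math

def nearest_neighbor_index(points, area):
    """Clark-Evans nearest-neighbor index: observed mean nearest-
    neighbor distance divided by the expected value for a random
    pattern of the same density, 1 / (2 * sqrt(n / area)).
    Values well below 1 suggest clumping (e.g., habitat patches);
    near 1, randomness; above 1, regular dispersion."""
    n = len(points)

    def nearest(i):
        return min(math.dist(points[i], points[j])
                   for j in range(n) if j != i)

    observed = sum(nearest(i) for i in range(n)) / n
    expected = 1.0 / (2.0 * math.sqrt(n / area))
    return observed / expected

# Hypothetical mapped patch centers on a 2 x 2 map window
index = nearest_neighbor_index([(0, 0), (0, 1), (1, 0), (1, 1)], area=4.0)
```

Because the index is a ratio against a random expectation at the same density, repeat measurements from successive imagery dates are directly comparable, which is what makes it useful for trend monitoring.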
Another example relates to communities. Although applicable to plant communities as well, the focus here is human communities. One common monitoring question is, "Are local communities and economies experiencing positive or negative changes that may be associated with federal forest policies?" While several approaches can be pursued, one is to first identify what is meant by a community; second, list those communities; and third, develop a random sample from that list. More likely, a stratified random sample would be developed based on the prevailing conceptual model of how communities are affected. While perhaps not rigorous science, this is often the approach in monitoring, as the questions differ from research questions.
The approach one takes will also depend on the mobility of what is to be monitored. For example, there are great differences between monitoring stationary vegetation (e.g., wildlife habitat) and monitoring wildlife population dynamics. The approach described by Ralph et al. (1992), for instance, has been adopted as a national standard for point counts in breeding bird surveys. Although it does require the participation of an expert birder, a site survey can be completed in a single morning. Repeating the survey twice more during the breeding season will detect more species. Species richness, abundance, and distribution can be derived from this survey.
Another example involves two approaches to herptile and small mammal surveys. The first method, the visual encounter survey (Heyer et al. 1994), is less labor intensive, but does require that an experienced biologist visually search microhabitats in a systematic way. This method has been shown to work well for forest anurans (frogs and toads) and salamanders. Long-term monitoring can show phenology of presence and activity. The mean number of animals can also be compared statistically, and species lists compiled for the area can be compared with those of other areas. The second method is pitfall trapping, a widely employed technique for surveys of amphibian and reptile diversity and abundance. Small mammals can also be sampled this way, although usually with high rates of mortality. In pitfall trapping, plastic buckets are installed in the ground on a grid system (Corn and Bury 1990). Drift fences, if feasible, are recommended since they will dramatically increase capture success. With this method, presence or absence of species, capture totals, and relative abundance can easily be calculated. Estimates of population size are possible, but probably only for abundant species. If more detailed information is desired, one can incorporate mark-recapture methods (e.g., toe-clipping of salamanders).
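Where mark-recapture data are collected, a first population estimate can come from the Lincoln-Petersen method; the sketch below uses Chapman's bias-corrected form with invented capture numbers:

```python
def lincoln_petersen(marked, caught, recaptured):
    """Chapman's bias-corrected Lincoln-Petersen estimate of
    population size from a single mark-recapture session.
    marked:     animals marked and released in session 1
    caught:     animals captured in session 2
    recaptured: marked animals among those caught in session 2"""
    return ((marked + 1) * (caught + 1)) / (recaptured + 1) - 1

# Hypothetical salamander survey: 50 toe-clipped and released,
# 40 captured later, of which 10 carried marks
estimate = lincoln_petersen(50, 40, 10)
```

The estimator assumes a closed population and equal catchability between sessions, which is why the text cautions that such estimates are probably reliable only for abundant species.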
In summary, the principles of designing data collection methods are:
One of the weak links in ecosystem monitoring is communication. An ecosystem may be under the jurisdiction of many State, private or Federal agencies, each having its own goal or objective. A forum could be developed where these different groups can sit down and discuss a general overall management strategy which can be used to obtain a healthy ecosystem. The exchange of ideas broadens everyone's understanding. (See MT4, Regional Cooperation chapter).
A good example of a joint effort that works is in New Mexico. The Bureau of Land Management (BLM) needed to develop a rangeland monitoring program that would, first, indicate the ecological range condition of its lands and, second, give indicators of problems if they existed. The local users were very skeptical of the BLM's motives. The local BLM office drafted an initial monitoring program and presented it to the users, the state agencies, and the universities. Each group made modifications to the monitoring plan. The BLM took these modifications and incorporated them into the program, and then invited all groups to a field test to see if it would work. The BLM and the University explained what each step meant and how the data would be used. Each user went out on his own allotment, and a BLM employee explained the process one-on-one.
The program was and is successful because everyone was involved from the beginning. The objective was clear, and everyone understood and trusted the scientific methods behind the process. This story illustrates the point perhaps most often missed by designers of monitoring efforts: monitoring's main purpose is to satisfy publics and agencies that the land management agency sincerely cares and is honest. This caring and honesty is reflected in the inclusion of nonagency people and people from "other" agencies in an open, shared monitoring experience.
In developing a quality control program for monitoring and evaluation, there are several questions we need to ask ourselves. First, who should be involved in quality control? Here is a list of potential participants in developing a quality control program: managers, scientists, resource specialists, users or stakeholders, and interested parties from the general public.
Managers should be involved with input in development of issues that need to be addressed, the time that can be allocated toward working on the issues and the amount of money that can be spent in gathering data to make informed decisions on the ecosystem.
Scientists should be involved in developing models, techniques, processes, etc., that can be used in making statistically sound resource decisions. They have the expertise needed to formulate the resource specialists' questions, needs, and desires into scientifically sound processes.
The resource specialist should be involved by helping managers identify and clarify what the issues truly are and be able to take the scientific methods developed and monitor the ecosystem.
The users, stakeholders, and interested public should be involved by ensuring that the issues are really issues and that the product actually answers the questions of the issue. The public needs to accept any processes being used. If they mistrust the processes they will not accept results.
The second question that should be asked is what should be addressed in quality control?
Data collection should be addressed, and several questions should be asked about the data. Are the data that are being collected needed to answer the questions raised by the issue? Are the data cost efficient? Does it cost more to gather the data than the data are worth? Can the data provide an early warning of problems with the ecosystem, or even with the monitoring process itself? Did all groups accept the data being collected and the methods being used to collect them? Do you have acceptance of the evaluation process that will be used?
Social and economic impacts should also be addressed in quality control. Several questions should be asked at this point. Does the local community want a pristine environment? Does the national interest outweigh the local concerns? Can a compromise be developed?
A good example of this would be the Grand Canyon. Many locals would want to see it left natural because of revenue generated from tourism, while water users and power users in nearby states would like to see dams built for water diversion projects and power projects.
Quality controls should be able to be sustained over the life of the project. The quality control methods should not be so expensive that they cannot be continued along with the monitoring. Quality control methods should give us early warning to question whether we are doing what we said we were going to do.
Again, the public needs to accept the quality control methods; without public support, no matter how scientifically sound the program, the project at best will have limited success.
The last question that will be discussed is when should quality control measures be developed? Quality control measures should be developed from the very beginning along with your monitoring plan and evaluation process. It should be an integral part of the process of getting from the issue to the decision making process.
The following are several case study summaries where quality control methods worked and pointed out flaws in monitoring programs.
Case Study #1. This study deals with determining suitability of sites for prairie chicken nesting. One of the key elements in good nesting habitat is the height of bluestem and dropseed grass plants. It was determined that, at every tenth (10th) plant on an existing production transect, the height of the nearest bluestem or dropseed plant would be measured.
This process was completed on a series of studies and the results analyzed. It became apparent rather quickly that this method would not work, because the diameter of the grass bunches was not taken into consideration. The results indicated good nesting habitat by plant height, but in reality it was not; in many cases the nearest plant was just a single stalk. The chickens need a 24-inch basal diameter as well as an 18-inch height. Because the monitoring method was tested on a small series of studies, the problem was detected from the beginning, rather than after five years of data had been collected that were insufficient to support decisions.
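The lesson that both criteria must be monitored can be expressed as a trivial check; the 18-inch height and 24-inch basal diameter thresholds come from the case study, while the measured values below are invented:

```python
def suitable_nesting(height_in, basal_diameter_in):
    """Nesting suitability requires BOTH criteria from the case
    study: at least 18 inches tall AND 24 inches basal diameter.
    Measuring height alone, as the flawed protocol did, misses
    single stalks that are tall but far too narrow."""
    return height_in >= 18.0 and basal_diameter_in >= 24.0

tall_single_stalk = suitable_nesting(20.0, 2.0)    # fails on diameter
tall_wide_bunch = suitable_nesting(20.0, 30.0)     # passes both
```

The point is not the code itself but the design principle: every attribute the management question depends on must appear in the measurement protocol.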
Case Study #2. This study deals with involving all interested stakeholders to ensure a quality product. In developing a monitoring and evaluation plan, these parties should have an active part in the development of the process and feel comfortable with the end product. A final review with each group will give them a sense of ownership, which will reduce mistrust once decisions are made. BLM in Roswell, NM has developed a grazing monitoring program. It was reviewed by the State Land Office, State Agricultural Department, grazing organizations, and environmental groups. Their suggestions were incorporated into the plan. A State Grazing Task Group made up of university professors was set up to help arbitrate any problems. When there was a dispute over findings and decisions, leaders from each group were called in to see if monitoring procedures were followed. Using this process, litigation has been avoided.
Case Study #3. An example of how a proper understanding of geology/geochemistry and the timely monitoring of environmental conditions could have averted a significant environmental catastrophe is the selenium problem at Kesterson Reservoir within Kesterson National Wildlife Refuge (KNWR). Located in the San Joaquin Valley of central California, the KNWR served as the terminus for an extensive system of canals and subsurface drains that removed high-salinity waters from this important agricultural area. During the early and mid-1980s, the U.S. Fish and Wildlife Service noted unusual instances of bird and fish mortality as well as decreased birth rates and birth defects of waterfowl nesting in the area. These problems were eventually attributed to selenium toxicity. Factors leading to the selenium problem at KNWR include (1) the natural occurrence of selenium in soils of the western San Joaquin Valley, which are derived from the adjacent California Coast Range, (2) the presence of water-soluble selenium salts in the soil, (3) the irrigation methods used to maintain agricultural activity in the valley, (4) the regional drainage system, which led to the evaporative preconcentration of selenium in the KNWR, and (5) the bioaccumulation of selenium in the food chain. The selenium problems at KNWR and some other areas in the arid west could have been anticipated by early monitoring of selenium concentrations in drain waters and by a careful evaluation of the regional geology/geochemistry prior to agricultural development. Where existing irrigation projects face selenium problems, the careful monitoring and redesign of return-flow systems can help mitigate the situation.
Here is an example dealing with measurement error that resulted from lack of standard protocols. Over the past five years, USDA Forest Service and USDI Bureau of Land Management fisheries biologists and hydrologists struggled with monitoring fish habitat in the mid-Columbia River. They settled on some well known, intuitively obvious measures of anadromous fish habitat: width/depth ratio; stream temperature; percent fine sediment; coarse woody debris size and quantity; pool frequency and quality, width, depth and cover; and bank stability and lower bank angle. Initially the assumptions were made that all these professionals would identify the stream reaches in the same way and that all would measure each of the above indicators in the same way. When the results from the various surveys began coming in, it became obvious that each professional measured these "common" variables differently. The results were not comparable. Now, the agencies are developing protocols with much the same rigor that is being used in the vegetation inventories and forest health monitoring.
Another way to handle observer error is through "redundant" observation. An example follows. Hydrologists in the Northern Region of the Forest Service developed a channel stability rating approach. At first they tried three indicators of channel stability but found, even with instructions, that no two hydrologists could arrive at the same rating without consulting each other. They also found that wording of protocols is more difficult than first imagined. Their solution hit upon the value of "apparent redundancy." That is, they identified 15 variables that indicate various aspects of channel stability. Experience showed that while two professional observers often disagree on each variable, the overall ratings were within two or three points of each other. This approach allows for differing judgments with a minimum of training of the observers. The drawback of this approach, of course, is that the differences in judgment can accumulate instead of canceling each other out.
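The "apparent redundancy" idea can be sketched as below: each observer scores all 15 variables, and only the summed overall ratings are compared, letting per-variable disagreements cancel. The scores are invented for illustration:

```python
def stability_rating(scores):
    """Overall channel stability rating: the sum of the scores for
    the individual indicator variables (15 in the example from the
    text).  Per-variable disagreements between observers tend to
    cancel in the total."""
    return sum(scores)

# Two hypothetical observers scoring the same reach (lower = more stable)
observer_a = [2, 3, 1, 2, 2, 3, 2, 1, 2, 3, 2, 2, 1, 3, 2]
observer_b = [3, 2, 1, 2, 3, 2, 2, 2, 2, 3, 1, 2, 2, 3, 2]
diff = abs(stability_rating(observer_a) - stability_rating(observer_b))
```

Here the observers disagree on eight individual variables, yet the overall ratings differ by only one point, illustrating the cancellation the hydrologists relied on (and, as the text warns, the differences could just as well have accumulated).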
One last comment on this example has to do with costs. Many people shy away from these redundant observations because they feel the redundancy costs too much. Experience shows that many observations are not truly redundant; each variable measures different traits. Further, experience shows that it is less expensive to make many observations at one site than to make a few observations at many sites.
As part of a quality control program, a commitment must be made by researchers and other technical persons to maintain the collection and management of quality data sets over time. Only by maintaining consistency in the collection, analysis, and management of long-term datasets can we accurately detect trends in ecosystem conditions. Sound data management practices are the key to successful quality assurance and a credible monitoring program. Many data sets have been lost because of inconsistency, and long-term data sets are especially vulnerable. The focus of this section is on how to manage data sets once the information has been gathered in the field.
The major objectives of data management are to ensure that data are (1) stored and transferred accurately and (2) secured from loss or damage. To accomplish those objectives, sound data management must include a number of key steps. Those include: (1) data entry, (2) data verification, (3) data validation, (4) data documentation, and (5) data archival. The following sections are excerpts from a comprehensive data management protocol developed by Shenandoah National Park (Tessler 1995) as part of their long-term ecological monitoring program.
Data entry refers to the initial set of operations where data written on paper field forms are transcribed, or typed, into a computerized form (i.e., a database or spreadsheet). Where data were gathered and/or stored digitally in the field (e.g., on a portable data recorder), "data entry," in the context used here, is that stage where those data are transferred (downloaded) to a database in an office computer where they can be further manipulated. The specific, unique procedures for accomplishing electronic data transfer will not be discussed here, but the remaining procedures apply equally to those data.
Data entry is a simple process that is easy to perform. It is not a trivial operation, however, because the value of the data is determined by accuracy, which is still unconfirmed and at risk at this stage of computerization. The single goal of data entry is to transcribe the data from paper records into the computer with 100% accuracy. Proper preparation and adherence to the steps summarized in Table 1 should guard against having to do the same work over again during data verification.
Data verification immediately follows data entry and involves checking the accuracy of the computerized records against their original source, usually paper field records. While the goal of data entry is to achieve 100% correct entries, this is rarely accomplished; the verification phase checks the accuracy of all entries against the original source, and identifies and corrects any errors. Once the computerized data are verified as accurately reflecting the original field data, the paper forms can be archived and all further activities involving the data can be done via the computer alone. The recommended protocol for data verification is summarized in Table 2.
Although data may be correctly transcribed from the original field forms (data entry and verification), they may not be accurate or logical. For example, finding a stream pH of 25.0 or a temperature of 95°C in a data file is illogical and certainly incorrect--whether or not it was properly transcribed from field forms. This process of reviewing computerized data for range and logic errors is the validation stage. This can be done during data verification only if the operator is intimately knowledgeable about the data. More often this should be a separate operation carried out by a project specialist after verification and with the goal of identifying both generic and specific kinds of errors in particular data types. Any corrections made to a dataset reflecting logic errors will also require returning to the original paper field records and making notations about how and (now) why those data were changed.
Unfortunately, there are no step-by-step instructions possible for data validation, because it might be considered more of an art than a standardized quality control procedure. Nonetheless, it is a critically important step in the certification of the data. Invalid data commonly consist of slightly misspelled species names or site codes, the wrong date, or out-of-range errors in parameters having well-defined limits (e.g., elevation). But often, errors may occur as unreasonable metrics (e.g., stream temperature of 70°C) or impossible associations (e.g., a tree 2 feet in diameter and only 3 feet high). These types of erroneous data represent "logic errors" since using them produces illogical (and incorrect) results. The discovery of logic errors has direct, positive consequences for data quality and provides important feedback to the methods and data forms used in the field. Validation, therefore, is not a step to be ignored until after statistical analyses reveal problems with the data.
Wherever possible the data entry application should be programmed to do the initial validation. The simplest validation to perform during data entry is range checking, such as ensuring that a user attempting to enter a pH of 20.0 gets a warning and the opportunity to enter a correct value between 1.0 and 14.0 (or better yet, within a narrow range appropriate to the study area). Not all fields, however, will have appropriate ranges known in advance, so knowledge of what is "reasonable" data and a separate, interactive validation stage is still important. The data entry application should also use "pop-up pick lists" for any standardized "written" items where spelling errors can occur. For example, rather than typing in a species name (where a misspelling can generate a "new" species in the database), the name should be selected from a list of valid species, and "picked" for automatic entry into the species field. Again, not all written fields can use a list, but where they can be used they should be.
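A minimal sketch of such entry-time validation follows, assuming a hypothetical record layout with `ph` and `species` fields and an illustrative pick list; a real data entry application would also prompt the operator interactively:

```python
# Hypothetical pick list of valid species names for this project
VALID_SPECIES = {"Plethodon cinereus", "Desmognathus fuscus"}

def validate_record(record, ph_range=(1.0, 14.0)):
    """Return a list of validation problems for one record:
    a range check on pH and a pick-list check on species name."""
    problems = []
    lo, hi = ph_range
    if not (lo <= record["ph"] <= hi):
        problems.append(f"pH {record['ph']} outside {lo}-{hi}")
    if record["species"] not in VALID_SPECIES:
        problems.append(f"unknown species {record['species']!r}")
    return problems

# A misspelled species ("cinerus") and an impossible pH both get flagged
bad = validate_record({"ph": 20.0, "species": "Plethodon cinerus"})
good = validate_record({"ph": 6.8, "species": "Plethodon cinereus"})
```

As the text notes, range checks should be narrowed to what is plausible for the study area wherever that is known in advance.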
One of the most important activities of rigorous data validation, however, is to return to the original data sheet (and any associated printouts) to make corrections and notations about the errors that were found and fixed in the computer files. Without annotating the original field forms, the computerized and paper records are out of sync. If this is discovered without adequate documentation explaining the differences, all of the data are rendered suspect.
Examples of Validation Discoveries
Below are four examples of logic errors discovered in Shenandoah National Park datasets. These should all be interesting and informative to the active data explorer. They demonstrate how errors can hide, and some generic and specific approaches to finding them. Remember these are illustrative and are in no way meant to criticize the work done on the Shenandoah. Similar examples can be found anywhere data are being collected.
Wrong year. A simple typographical error during data entry can create a logical "set" of data for a year in which samples were never taken (the same can happen for month). This can become cryptic if the data are sorted by date before verification (not a good idea), so the entry moves away from its true neighbors. If sorting creates the appearance of "missing data" where those records should have been, the appropriate corrective action during verification might actually create duplicate records in the file rather than fix the ones that were wrong, leaving two problems instead of one. A summary analysis reporting the count of total records for the dataset will still appear correct. Summary explorations of the number of dates per year, or the number of samples per year, will detect this kind of error by revealing a "year" that didn't belong, and the rest of those data records reveal which ones need correction.
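A summary exploration of samples per year can be sketched as follows; the years and record layout are hypothetical:

```python
from collections import Counter

def flag_unexpected_years(records, valid_years):
    """Count records per year and return any years that fall
    outside the sampling seasons actually run, with their counts.
    The flagged records point directly at the entries to correct."""
    counts = Counter(r["year"] for r in records)
    return {y: n for y, n in counts.items() if y not in valid_years}

# Hypothetical dataset: 1995 was never sampled, so one record is a typo
records = [{"year": 1993}, {"year": 1993}, {"year": 1994},
           {"year": 1995}, {"year": 1994}]
suspects = flag_unexpected_years(records, valid_years={1993, 1994})
```

The same pattern (count by month, by site, by crew) catches many typographical errors that a simple total-record count never reveals.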
Cryptic duplicates. Shenandoah contracted out invertebrate identifications from its stream monitoring program. The park received from the contractor printed tables containing species codes, names and counts in each sample, which were entered into the computer. Cryptic duplicates occurred in the data files when a single sample contained two entries for the same species (the identifier didn't realize they already had a line for that species when doing the counts, and added a second line later in the table). The data verification process correctly confirmed the separate entries but did not recognize that they should be pooled for that sample. Summary counts of the number of "species" for that sample also showed the same number as lines of original data, appearing correct. However, a count of the number of unique species (i.e., a "count distinct" query) for the sample showed one less than the line count. Returning to the original data form and comparing each line with the others for that sample eventually revealed the error of duplication, and the data file was corrected by pooling the abundance values into the first record of that species and then deleting the second. The original printed data table (our original "form") was then also corrected. Here, two different methods of making counts of the same item (species per sample) were used and compared to find the discrepancy.
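The line-count versus distinct-count comparison described above can be sketched as below, with invented sample and taxon names:

```python
def find_cryptic_duplicates(sample_rows):
    """For each sample, compare the number of entry lines with the
    number of distinct species; a mismatch flags one or more
    duplicated species entries that should be pooled."""
    flagged = {}
    for sample_id, rows in sample_rows.items():
        species = [r["species"] for r in rows]
        extra = len(species) - len(set(species))
        if extra:
            flagged[sample_id] = extra
    return flagged

# Hypothetical invertebrate counts; S1 lists "Baetis" twice
samples = {
    "S1": [{"species": "Baetis", "count": 4},
           {"species": "Baetis", "count": 2},
           {"species": "Ephemerella", "count": 1}],
    "S2": [{"species": "Baetis", "count": 3}],
}
dupes = find_cryptic_duplicates(samples)
```

Once a sample is flagged, the fix follows the procedure in the text: pool the abundance values into one record, delete the duplicate, and annotate the original printed table.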
Wild temperatures. Stream temperatures can show really wild variation and yet be completely verifiable and valid. For example, some older data, or the occasional spurious recent record, may have been taken in Fahrenheit rather than Celsius. There is a big difference, obviously. This is really a protocol problem and not a data question, but where quality control procedures during data collection were lax, these types of errors are often found only during data validation or (more annoyingly) analysis. Routinely producing a boxplot or histogram of numerical data will reveal dramatic outliers, and when the original data forms are consulted, true outliers vs. errors in measurement scale or units become apparent, as does the correction for the files (convert the measurement to the appropriate units).
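A simple range screen plus unit conversion can be sketched as follows; the plausible-range limits are illustrative, not a standard, and a real workflow would first consult the original field forms before converting anything:

```python
def flag_temperature_outliers(temps_c, low=-5.0, high=35.0):
    """Flag stream temperatures outside a plausible Celsius range.
    Values in roughly the 35-90 range often turn out to be
    Fahrenheit readings recorded without conversion."""
    return [t for t in temps_c if not (low <= t <= high)]

def fahrenheit_to_celsius(t_f):
    return (t_f - 32.0) * 5.0 / 9.0

# Hypothetical readings; 70.0 is dramatic enough to demand a look
readings = [8.5, 9.1, 70.0, 10.2]
outliers = flag_temperature_outliers(readings)
corrected = [fahrenheit_to_celsius(t) for t in outliers]
```

A boxplot or histogram of the full series serves the same purpose visually; the point is that a routine screen makes scale errors surface during validation rather than during analysis.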
Trees that shrink. Shenandoah's vegetation monitoring program includes remeasuring trees at permanent plots every five years. The park botanist discovered that some of these remeasured trees were getting smaller--recent DBHs (diameters at breast height) were less than the original measurements five years before. Obviously, the trunks of live trees don't get smaller. Additional examination of the dataset revealed that the data were entered accurately (verifiable), but that there appeared to be slight-to-moderate differences in the accuracy and exact methodology used by current vs. previous crews. A "Search and Compare" program was then written to parse the data and identify and scale the differences between trees, revealing the extent of the "damage" in the data. This, unfortunately, was not a problem that could be fixed by editing the data files. Rather, it revealed a previous violation of protocol standards, resulting in data of poor quality, rendered useless for their original purpose.
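The core of such a "Search and Compare" program can be sketched as a comparison of remeasurements keyed by tree tag; the tags and diameters below are invented:

```python
def shrinking_trees(previous, current, tolerance=0.0):
    """Compare remeasured DBH values by tree tag and report any
    tree recorded smaller than it was at the prior measurement.
    A small tolerance can absorb legitimate measurement noise."""
    return {tag: (previous[tag], current[tag])
            for tag in previous
            if tag in current and current[tag] < previous[tag] - tolerance}

# Hypothetical DBH records (cm), five years apart
prev_dbh = {"T001": 32.4, "T002": 18.0, "T003": 45.1}
curr_dbh = {"T001": 33.0, "T002": 16.9, "T003": 45.8}
suspect = shrinking_trees(prev_dbh, curr_dbh)
```

As the case study emphasizes, the output of such a check scopes the damage; it cannot repair data collected under an inconsistent protocol.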
Documentation of data sets is probably secondary in importance only to verification of the accuracy of the data in the files. Without informative and complete documentation, the content, quality, extent, known cause of variability and utility of the data remain unknown.
Documentation of monitoring data sets typically involves capturing several types of descriptors, such as information about the location where the data were collected, individuals involved, and time period over which the data were gathered. The data documentation format suggested by the National Science Foundation (Gorentz 1992) for biological field stations (Table 3) should meet the needs of most monitoring programs.
A subject closely related to documentation of monitoring data sets is that of metadata. The term metadata generally refers to "data about data". Metadata files allow users of a particular data set to interpret data contents, structure and format for various applications. The Federal Geographic Data Committee has developed a metadata format for spatial data transfer and requires that all spatial data collected after 1995 be described in accordance with that standard. The National Biological Service has developed a draft metadata protocol for nonspatial biological data sets. [See also: Metadata Data Entry Program: MetaMaker] Both of those metadata formats are beyond the scope of what can be presented here. Readers are encouraged to consult the appropriate publications.
Data archiving is the process of making and maintaining copies of master data for the purpose of secure storage and easy retrieval. Master data are up-to-date copies of data files that are documented and fully error-checked, plus associated metadata. Changes and other operations are never performed on the master data directly; they are performed on copies of the master data. Archiving, therefore, is done with master data. The simplest archiving procedure is to have at least two locations where master data and their documentation reside. Additional copies of data that represent "milestones" can be similarly archived with current master datasets. The media for archival can be either diskette or tape or both, and the content of those media should always be clearly labeled. Additionally, a tracking log should be maintained by each project manager that identifies the current master data set contents and extent, and the location of all archival master copies.
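One way to confirm that an archived copy still matches the master data byte for byte is a checksum comparison, sketched here with SHA-256; the file content is invented, and recording the checksum in the tracking log alongside the copy's location is an assumption about workflow, not part of the original protocol:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Digest used to verify that an archival copy is identical to
    the master dataset; any corruption changes the digest."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical master file content and a copy read back from
# the second archival location
master = b"site,date,ph\nNFDR-1,1995-06-01,6.8\n"
archive_copy = bytes(master)
copies_match = checksum(master) == checksum(archive_copy)
```

Comparing digests is cheaper than comparing whole files and gives the project manager a compact value to record in the tracking log.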
Table 1. Summary of the data entry protocol developed by Shenandoah National Park for long-term ecological monitoring studies (Tessler 1995).
Table 2. Summary of the data verification protocol adopted for the long-term monitoring program by Shenandoah National Park (Tessler 1995).
Table 3. Example of the dataset documentation format suggested by the National Science Foundation (1982).
Monitoring programs are often ineffective because they fail to devote adequate attention to the establishment of clear goals and objectives, formulation of technical program design, and the analysis and synthesis of data that are relevant and accessible to decision makers and the interested public. Proper analysis of monitoring data should result in the following (National Academy of Sciences 1990):
The National Academy of Sciences (1990) also identified several issues relating to the management and analysis of environmental monitoring data, including:
Data analysis involves summarizing the large volumes of data into meaningful statistics that can be interpreted (Tyrrell et al. 1996). The tendency is to develop all possible statistics, but then the interpretation becomes an overwhelming task. Instead, return to the objectives and determine the attributes or measures that are key to the decisions being made. Remember to isolate these attributes by removing other sources of variation, such as soil and site conditions.
Many software packages are available to perform the analyses, such as SAS, SPSS, BMDP, and Minitab. Do not use the database software to perform the analyses, because such packages are set up to provide only the estimates and not their variability.
If at all possible, the analysis should be performed by someone familiar with statistical analysis in collaboration with the survey planner. This provides the planner with the information needed to perform the interpretation of the results. It is this step where the conclusions are drawn regarding the monitoring objectives. Also, data may be available to indicate the drivers or causes of any changes. Finally, the results must be presented in tables and in graphs in such a way that others can draw their own conclusions from the data.
Evaluation is the process of converting monitoring data into information and then into knowledge. It is a valueadded process that provides managers with what they need to make sound decisions. While it occurs after the actual data are collected and analyzed, the evaluation process should have been determined at the outset when objectives were being set. It forces discipline in monitoring when one must specify in some detail how the data will be used. Evaluation should not be done just for the sake of evaluation or because we've always done that. A clear need for action must be demonstrated before time, money, or resources are expended on monitoring.
Often not well planned, the evaluation step is of equal importance to the original monitoring. The flow of information, from the time it is recorded in the field or on the map until the summary statistics are examined, must be managed. Often we assume this is so obvious that we neglect to give it the attention it deserves. Some go so far as to say the lion's share of the monitoring cost occurs in this information management step. Further, they point out that we spend the least money and time on evaluation, and yet it is as important as the monitoring itself. The evaluation should use appropriate analytical tools, be statistically sound to meet the stated objective(s), and be simply displayed.
In developing the evaluation process, you will have to make some value judgments on your capabilities to determine whether or not you are achieving the objective(s). The results may clearly address the questions raised and meet the stated objectives. However, if the results are inconclusive, either the monitoring protocol needs to be modified to provide more precise results, or the management or monitoring objectives need to be modified. It may be something as simple as taking more plots or monitoring for one or two more years so that the ecosystem response will be more apparent. Alternatively, the precision levels may need to be modified. Other possibilities are to restate the management and/or monitoring objectives to reflect the new information. Other possibilities are that the measures (indicators) did not address the problem, or that the monitoring assumptions may not have been valid. Evaluation of these possibilities is key to the adaptive management cycle.
The methods of evaluation should incorporate an early warning system for change, and should objectively measure success or failure, with quality control and quality assurance built in.
Assessment of monitoring data should be conducted on a scheduled time frame, using the same assessment method at the beginning and end of the project. The assessments should be conducted on a continuous cycle. When the assessment process is completed, a clear decision should be easily derived.
The final step is to make the decision about the future management of the study area. If the monitoring indicates that the ecosystem is outside of the planned response zone, then resource managers must determine an appropriate response. If the ecosystem is responding as desired, then the current management plan can be followed, and monitoring can continue as needed.
If the results are inconclusive, the resource manager has several choices. First, the monitoring design can be altered to provide more precise results, such as taking more plots or sampling for a longer time. Second, the monitoring objectives can be modified to alter the precision or threshold levels. Third, the management of the area can continue in the absence of strong statistical conclusions, resulting in increased risk. However, following the steps outlined in this chapter will help reduce the occurrence of inconclusive monitoring efforts. Careful planning will increase the odds of getting the answers on the first try.
Ultimate decisions are often complicated by numerous factors, some of which may have no relation to the monitoring and evaluation results. Those results, however, will hold up to public and judicial scrutiny, and a wise decision maker will use them to full advantage.
Here is a case study dealing with grazing decisions and the continuing cycle of monitoring.
In 1985, representatives from the Roswell Resource Area (RRA) and District Office of BLM went to New Mexico State University (NMSU) to discuss a procedure for analyzing and interpreting rangeland monitoring studies. A detailed presentation was made to members of the Range Science Department, the Range Improvement Task Force, the NM Department of Agriculture, and the State Land Office. Approximately two weeks later the same presentation, with some changes suggested at the first meeting, was made to the Roswell District Grazing Advisory Board.
The procedure takes into consideration all data collected for the purpose of arriving at an estimated stocking rate for an individual grazing allotment.
As was explained at these meetings, one of our primary objectives was to determine range condition initially and then again in five years to disclose any problems or trends under the current stocking rates.
The procedure converted the range condition rating into an estimated stocking rate, and everyone at these meetings agreed that this was an acceptable approach. It was also agreed that our other studies data (i.e., precipitation, actual use, utilization, and production) would be analyzed and used to support, and if need be adjust, the stocking rate determined by range condition.
Our range condition data are our most reliable information, so we concentrated more heavily on this portion of data collection. A cluster of three transect lines was run through the key areas in each pasture, which increased the transects' reliability over the standard one-line transects used for utilization.
Five years of monitoring studies have now been completed on 120 allotments in the Roswell Resource Area. All increases and decreases were negotiated through voluntary agreements with the livestock operators. The analysis of these data resulted in the following decisions and agreements.
Some concern has been expressed over our method of analyzing the studies data. The following analysis appears to support our method, as does the fact that agreement was reached on 117 of the 120 allotments (97 percent). Decisions were ultimately reached on all allotments with no appeals.
It is interesting to note that on 51% of the allotments studied, the stocking rates determined by the studies were below the numbers agreed upon, and on 49% they were equal to or above the numbers agreed upon.
BLM did not hold hard and fast to the exact numbers determined by the studies; in most cases, we used 10% as an acceptable variance.
Our responsibility is to make adjustments where indicated by the studies data. In most cases we have been able to stay within the average numbers grazed over the study period. Studies will continue for an additional five years. This will, among other things, tell us what the sustained trend is over a 10-year period.
This process has continued, to date, through three complete cycles, with over five years of data. Objectives continue to be fine-tuned and monitored.
The objective of this case was to develop methods for determining grazing stocking rates, and the decisions to be made were changes in stocking rates. A monitoring program was developed by all interested parties, and assessments were made every five years. These assessments fine-tuned the objectives and decisions and pointed out changes needed in the monitoring process.
We have discussed the significance of monitoring and the importance of doing it correctly. The question remains how to institutionalize monitoring into land-management practices. In this section we suggest strategies for institutionalizing monitoring from several different perspectives.
Monitoring must be endorsed both from the top down and from the bottom up in a given organization in order to be effective. The importance of monitoring needs to be made clear by upper management so that it can be incorporated into the organization's goals and land-management planning documents. To establish monitoring as a routine activity, monitoring endeavors could be made a formal part of personnel duties or listed as a separate budget item. The best results, however, could be achieved if there were some incentive to conduct well-designed, unbiased monitoring exercises. One suggestion is to build a reward system around monitoring: for example, if a land manager found a problem as a result of good monitoring practices, that person could be rewarded for recognizing the problem and for implementing a timely remedy. This would make effective monitoring something positive rather than a chore. Another suggestion is a reciprocal agreement among land managers to oversee monitoring activities on each other's lands; this way, the person overseeing the monitoring does not have a vested interest in the outcome. Monitoring needs to be embraced by "on-the-ground" land managers, and removing barriers and replacing them with a support system would encourage this. For example, the U.S. Forest Service has instituted a position to coordinate all monitoring and evaluation activities for the agency.
Another important aspect of support systems is partnerships. Federal agencies need to build partnerships to support monitoring activities. An outgrowth of such partnerships could be an interagency monitoring expertise team that would serve as a resource, establish common methodologies, and foster good QA/QC practices (which are vital for litigation purposes). Such partnerships would encourage consistent monitoring and data-handling practices for adjoining public lands managed by different agencies, provide a mechanism to share monitoring expertise among agencies, and avoid duplication of effort. An extension of this partnership approach is monitoring partnerships between federal agencies, state and local governments, and private organizations. There are several advantages to coordination at this broad level, probably the most significant of which is consistent monitoring practices across ecosystems and habitats.
Consistency in the handling and preservation of data, information, and in some cases samples is essential to institutionalizing effective monitoring practices. Plans must be implemented to ensure long-term maintenance and preservation of monitoring data. The archival method for monitoring information should not be manager- or agency-specific, to maximize its usefulness for sharing among different groups or for later comparison with more recent information. In many cases it may be necessary to preserve maps and permanent ground markers. Funds should be earmarked for the proper archival of monitoring information.
Public acceptance of monitoring efforts is essential from two points of view. First, the public needs to understand the importance of monitoring and how they personally may affect their public lands. Second, the public needs to understand how management decisions and land-management policy are often based on the outcome of monitoring efforts. To achieve public acceptance of monitoring, land-management organizations could institute low-cost methods of public participation in monitoring efforts, ranging from voluntary participation in monitoring activities, to a simple billboard explaining a particular monitoring endeavor, to an interactive exhibit. Getting schools, scout troops, and other civic organizations involved in monitoring may be a viable option. Direct involvement of the public in monitoring will hasten acceptance of its importance and heighten awareness of the public's impact on their lands. Once the public has grown to accept monitoring activities, they will expect them ("if the public accepts, then the public expects"). This will provide pressure to continue monitoring efforts and hence help to institutionalize them into the system. A mascot, something akin to "Smokey Bear," may be useful to enhance public awareness about monitoring. For example, the use of "Watchful Eagle Eye" or "Monitor Lizard" as a symbol for monitoring could be combined with catchy slogans to capture public interest and used as a learning tool about the aspects and importance of monitoring.
Monitoring and evaluation are essential components of an ecological approach to management of natural resources. Resource monitoring has become an increasingly important subject as management of public lands has become increasingly complex. Increasing demand for resources, greater public involvement in management, and issues of species population viability and ecosystem health have all contributed to a need for a better understanding of the land and how it changes over time. Repeated observations over time, properly designed, can separate natural effects from human ones, and distinguish effective management practices from less effective or harmful ones. Clearly, the ability to gather this type of information is at the core of land stewardship and ecosystem management.
Carefully defining objectives, and then carefully matching methods to meet them, can mean the difference between an effective monitoring program and a waste of time and money. Monitoring for management purposes provides an opportunity for the scientist to collaborate with the resource manager to develop an effective monitoring system.
Monitoring is measurement through time that indicates movement toward or away from an objective. A monitoring program should not be designed without clearly knowing how the data and information will be evaluated and put to use. Besides being part and parcel of sound ecosystem management, monitoring and evaluation are required by various legislative and administrative mandates. Monitoring data that are collected using the best scientific knowledge, have known precision, are of the highest quality, and are as objective as possible will be viewed as most credible and will help to recoup, maintain, or enhance public trust in natural resource management agencies. Managers need to use the correct science and technology for the questions to be answered. The process one follows for monitoring and evaluation can be viewed as an adaptive management process: set objectives, monitor, evaluate, and make decisions.
A successful monitoring effort begins with clearly stating the purposes for which you are monitoring. A method of evaluation should be developed while setting the objectives; if the objectives cannot be analyzed or evaluated, managers cannot make wise, informed decisions. Time, expertise, and money are limited, so a realistic monitoring system must be developed. Monitoring objectives often relate to the major threats and issues managers must deal with in resource management.
Approaches to monitoring and evaluation are myriad, and the scope of the project has a bearing on the approach to take. There is a continuum of designs ranging from pure single-function efforts (e.g., timber), to linked-functional efforts (e.g., soil and water), to truly integrated efforts in which data of all types are collected at the same locations using the same sampling design. Among the many considerations that go into the design of an effective monitoring program is the determination of the appropriate spatial and temporal scales at which to apply monitoring activities; these scales are a function of the natural variability found in the system. A well-designed sampling scheme allows the most information to be developed at the least cost, and data quality is generally much higher because we are forced to systematize the data collection techniques.
From the information needs, one can identify possible attributes or indicators. The first implementation step is to search for existing data that may be useful for your purposes. Then one can address the monitoring program design; that is, how the data will be collected, including sample frames, methods, reporting protocols, etc. The first step in any sampling design is to define the population of interest, which is typically the study area. Before the sampling design and sample sizes can be determined, the sampling methods and sampling unit (plot) design must be specified. The sampling design specifies how the sampling units are selected. Many possible sampling designs exist, but simplicity is important, particularly for long-term monitoring. Recommended approaches include systematic sampling, simple random sampling, and stratified random sampling. The sample size is computed to achieve the desired precision level or the specified cost. The objective for most monitoring efforts is to detect a change or to determine whether the current value is within some specified range; this is why precision objectives are stated as confidence intervals. After sampling decisions have been made, data collection and field work must be planned. Training is an important but often overlooked aspect of data collection.
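As one illustration of these design choices, stratified random sampling can be sketched in a few lines of Python. The stratum names and plot numbers below are hypothetical, and a fixed seed is used only to make the draw reproducible (and hence auditable) from year to year.

```python
import random

def stratified_sample(strata, n_per_stratum, seed=42):
    """Stratified random sampling: draw plots independently within each stratum.

    `strata` maps a stratum name (e.g., a vegetation type) to its list of
    candidate plot locations; the same number of plots is drawn from each."""
    rng = random.Random(seed)
    return {name: sorted(rng.sample(plots, n_per_stratum))
            for name, plots in strata.items()}

# Hypothetical study area split into two vegetation strata of numbered plots:
strata = {"riparian": list(range(1, 41)), "upland": list(range(41, 201))}
chosen = stratified_sample(strata, n_per_stratum=10)
```

Stratifying guarantees that a small but important stratum (here, the riparian zone) is represented in the sample rather than left to chance, which is one reason the approach is recommended alongside systematic and simple random sampling.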
One of the weak links in ecosystem monitoring is communication. An ecosystem may be under the jurisdiction of many state, private, or federal entities, each having its own goals or objectives. A forum could be developed where these different groups can sit down and discuss a general overall management strategy that can be used to meet shared objectives.
In developing a quality control program for monitoring and evaluation, there are several questions we need to ask ourselves. First, who should be involved in quality control? The public needs to accept the quality control methods; without public support, no matter how scientifically sound the program, the project at best will have limited success. Second, what should be addressed in quality control? As part of a quality control program, a commitment must be made by researchers and other technical persons to maintain the collection and management of quality data sets over time. Only by maintaining consistency in the collection, analysis, and management of long-term data sets can we accurately detect trends in ecosystem conditions. Third, when should quality control measures be developed? They should be developed from the very beginning, along with the monitoring plan and evaluation process.
Sound data management practices are the key to successful quality assurance and a credible monitoring program. The major objectives of data management are to ensure that data are stored and transferred accurately and secured from loss or damage. To accomplish those objectives, sound data management must include a number of key steps: data entry, data verification, data validation, data documentation, and data archival.
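A data validation pass of the kind listed above can be as simple as checking each record against plausible ranges before archival. The sketch below is illustrative; the field names and ranges are hypothetical.

```python
def validate_records(records, rules):
    """Data validation: flag field values outside plausible ranges.

    `rules` maps a field name to an inclusive (low, high) range; returns a
    list of (record index, field, offending value) tuples for review."""
    problems = []
    for i, rec in enumerate(records):
        for field, (low, high) in rules.items():
            value = rec.get(field)
            if value is None or not (low <= value <= high):
                problems.append((i, field, value))
    return problems

# Hypothetical field records: percent cover must lie in 0-100,
# stem counts in 0-500.
rules = {"pct_cover": (0, 100), "stems": (0, 500)}
records = [
    {"pct_cover": 37, "stems": 120},
    {"pct_cover": 137, "stems": 80},   # transcription error: cover > 100
]
flagged = validate_records(records, rules)   # -> [(1, "pct_cover", 137)]
```

Catching such errors at entry time, while the field crew can still be consulted, is far cheaper than discovering them years later in an archived data set.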
Proper analysis of monitoring data should ensure that a priori questions regarding the condition of the resources being monitored are effectively answered; ensure early detection of environmental degradation, thereby allowing lower-cost solutions to environmental problems; contribute to knowledge of the system being monitored and how it is affected by human activities; and provide land managers with a scientific rationale for setting environmental quality standards.
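One common form such analysis takes is trend detection: fitting an ordinary least-squares slope of an indicator against time. The sketch below is a minimal illustration; the years and cover values are hypothetical, and a full analysis would also test whether the slope differs significantly from zero.

```python
def trend_slope(years, values):
    """Ordinary least-squares slope of an indicator against time
    (change in indicator units per year)."""
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
    return sxy / sxx

# Hypothetical five-year record of mean percent cover at a key area:
years = [1990, 1991, 1992, 1993, 1994]
cover = [62.0, 60.5, 58.0, 57.5, 55.0]
slope = trend_slope(years, cover)   # negative slope suggests declining cover
```

A persistently negative slope on a key indicator is exactly the kind of early-warning signal that allows a lower-cost remedy before degradation becomes severe.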
Evaluation is the process of converting monitoring data into information and then into knowledge. It is a value-added process that provides managers with what they need to make sound decisions. Although it occurs after the actual data are collected and analyzed, the evaluation process should have been determined at the outset, when objectives were being set; having to specify in some detail how the data will be used forces discipline in monitoring. Evaluation leads to decisions about the future management of the study area.
Institutionalizing monitoring and evaluation into land-management practices is key. Monitoring must be endorsed both from the top down and from the bottom up in a given organization in order to be effective. Federal agencies need to build partnerships to support monitoring activities. Consistency in the handling and preservation of data, information, and in some cases samples is essential to institutionalizing effective monitoring practices, and plans must be implemented to ensure long-term maintenance and preservation of monitoring data. Public acceptance of monitoring efforts is essential from two points of view. First, the public needs to understand the importance of monitoring in determining how the public actually affects their lands. Second, the public needs to understand how management decisions and land-management policy are often based on the outcome of monitoring and evaluation efforts.
Chambers and Brown. 1983.
Corn and Bury. 1990.
Cochran, W.G. 1977. Sampling techniques. John Wiley and Sons, New York. 428 p.
Cohen, J. 1988. Statistical power analysis for the behavioral sciences, 2nd ed. Lawrence Erlbaum Associates. Hillsdale, New Jersey. 567 p.
Ecological Society of America. 1996. The scientific basis for ecosystem management: an assessment by the Ecological Society of America ad hoc committee on ecosystem management. Ecological Applications 6(3):xxxxxx.
Espy, Mike, and Babbitt, Bruce. 1994. Amendments to Forest Service and Bureau of Land Management planning documents within the range of the northern spotted owl environmental impact statement and record of decision, xyz pages and 74 p. + appendices, respectively.
Fisher, R.A. 1954. Statistical methods for research workers. Oliver and Boyd. Edinburgh, Scotland.
Forest Ecosystem Management Assessment Team. 1993. Forest ecosystem management: an ecological, economic and social assessment. Multiagency report.
Gippert, Michael J. 1990. Legal requirements and current legal issues in monitoring. In: On monitoring forest plan implementation. Proceedings of the symposium, Minneapolis, Minnesota, May 14-17, 1990. pp. 11-25.
Heyer et al. 1994
Holling, C.S. 1978. Adaptive environmental assessment and management. John Wiley & Sons, NY.
Howe et al. 1994.
National Academy of Sciences. 1990. Managing troubled waters: the role of marine environmental monitoring. National Research Council Committee on a Systems Assessment of Marine Environmental Pollution Monitoring. National Academy Press, Washington, DC.
National Park Service. 1988. Natural resources assessment and action program report. Natural Resources Program. Washington, DC. 70 p.
National Science Foundation. 1982. Data management at biological field stations. Division of Biotic Systems and Resources, Biological Research Resources Program. Washington, DC.
Noss and Cooperrider. 1994.
Tessler, S. 1995. Inventory and monitoring program instructions for data handling. Shenandoah National Park. 31 p.
Tyrrell, L.E., Funk, D.T., Scott, C.T., Smith, M.L., Parker, L., DeMeo, T.E., Brakke, M., and Shimp, B. 1996. Options for ecosystem monitoring: planning and field methods, Volume I: overview and planning guide. USDA Forest Service, North Central Forest Experiment Station, St. Paul, MN (unpublished manuscript).
USDA Forest Service. 1987. Forest Service Manual 1920 and Forest Service Handbook 1909.12. Washington DC.
U.S. Executive Office of the President. 1994. Coordinating geographic data acquisition and access: the national spatial data infrastructure. Executive Order 12906, Executive Office of the President, Washington, DC. (also available from ftp://fgdc.er.usgs.gov/pub/).
U.S. Forest Service Staff. 1990. Resource Inventory Handbook. Forest Service Handbook 1909.14, amendment 1. USDA, Forest Service, Washington, DC.
_________. 1993. The principal laws relating to Forest Service activities. United States Department of Agriculture, U.S. Government Printing Office, Washington, DC. 1163 p.
Walters, C.J. 1986. Adaptive management of renewable resources. Macmillan, New York.