Calibrating conservation: new tools for measuring success

The challenge

In the past 3 decades, enormous effort has been devoted to securing the future of the world's species and ecosystems through a wide variety of conservation programs and projects. However, resources for conservation are still inadequate, given the scale of the problem (James 1999), so it is vital to ensure those that are available are used as efficiently as possible. Evaluating the success of conservation efforts and identifying the most effective approaches are therefore important challenges facing conservationists, policy makers, and donors alike (Salafsky 2002; Christensen 2003; Sutherland 2004; Ferraro & Pattanayak 2006). While experimental or quasi‐experimental approaches to program design and evaluation can help to improve the evidence on which decisions about future action are based (Green 2002; Sutherland 2004; Ferraro & Pattanayak 2006; Sitati & Walpole 2006), tools are also needed to help evaluate and analyze the effectiveness of conservation practice when such approaches are not politically, logistically, or scientifically practical (i.e., in the majority of instances; Green 2002).

Measuring success and establishing what works require a change in the focus of practitioners' efforts to measure and report on the progress and achievements of conservation interventions. In recent years, there has been a healthy shift from reporting about projects and their component interventions in terms of inputs (such as money and time spent) to tracking and reporting measures of implementation and outputs (the activities completed and their concrete, countable products). However, further progress is needed. In particular, tools are required that can help assess an intervention's outcomes (how it affects the conservation problem of interest) and link these to assessment of its conservation effect (project‐scale changes in the conservation status of target ecosystems, habitats, species, or populations).

The Cambridge Conservation Forum (CCF: http://www.cambridgeconservationforum.org.uk), a consortium of 36 global to local, statutory to nongovernmental conservation organizations based in and around Cambridge, UK, has used a participatory process to explore the issue of evaluating conservation success and has worked collaboratively toward developing some solutions. In this review, we set out some of the common constraints in evaluating the success of conservation projects and report recent progress in developing tools for overcoming them.

Common constraints

The CCF discussions have shown that there are as yet few examples of standardized approaches to evaluating conservation outcomes and effects, even within a single organization. This is because of the following fundamental problems.

Conservation objectives are often not clear

The degree to which explicitly stated objectives exist in conservation projects is rather variable, as is the clarity with which the assumptions linking actions to outcomes are stated. Explicit statements of conservation objectives in the context of a detailed problem analysis make it easier to identify what should be monitored to gauge the success of any particular action. The existence of clear overall organizational goals and strategies increases the likelihood that individual projects will have objectives that are set and phrased in a manner that makes it possible to assess the role of individual actions in achieving them.
Information management is often problematic

In our experience, standardized or formalized management of data related to project objectives and outcomes is rare in conservation organizations. Although databases of active projects exist, they tend to serve basic financial and operational management purposes more than knowledge management or organizational learning functions. As a result, key variables are often not recorded, and data related to past projects and their effectiveness are rarely included. Individual projects are documented and reports are filed as required by donors, but the information contained in those reports is not easily accessed. Furthermore, information on outcomes and effects is generally narrative in nature, though it is often based on quantitative or semi‐quantitative data as well as perceptions of project personnel. Much relevant information remains as undocumented experiences of individual staff members. Even when data are gathered and documented, they often remain in field offices in relatively inaccessible form.

Conservation impacts mostly occur outside project time frames

Conservation is a long‐term process and the effects of interventions on target populations or habitats generally develop over a protracted period. This means that many project outcomes and their conservation effects only become measurable well beyond the time frame of the usual project cycle. While growing efforts in biological monitoring are providing the basis for assessing changes in conservation status, such changes are often apparent only over the long term and may be influenced by complex interactions among many different factors and interventions. Hence there is also a need for tools that measure threat reduction (Salafsky & Margoluis 1999) and other intermediate outcomes, so that more and less successful approaches can be distinguished over shorter time scales and interventions can be managed adaptively (Salafsky 2002; CMP 2004; Stem 2005).

Measuring effectiveness uses scarce conservation resources

Because conservation effects generally become manifest beyond the normal project cycle, it is also often difficult to identify financial and human resources to support their measurement. Even within the lifetime of a project, measuring outcomes is often limited by reluctance to divert financial resources from implementation, and because such measurement is seen as a luxury it becomes an obvious target for budget cuts. This problem is often exacerbated by project emphases and donor priorities. Most projects respond to donor priorities by emphasizing particular kinds of objectives over others. In some cases, conservation objectives may be given less documentary (and financial) emphasis than human development objectives, or they may not be made explicit at all. In other instances, conservation of particular taxa may be emphasized over conserving habitats or less charismatic taxa. Because resources for project monitoring and evaluation tend to be focused on explicitly stated objectives, the resources available can be inadequate for measuring the full range of conservation outcomes and effects.

Incentives and motivation for evaluation are limited

Neither conservation organizations nor donors have so far created a culture in which critical evaluation of outcomes is seen as desirable in its own right. Both individual and institutional concerns about exposing shortcomings have served as strong disincentives for critical evaluation and sharing of experience (Redford & Taber 2000).
Given these challenges, what progress is being made?

Recent progress

Part of the solution to improving conservation practice and associated monitoring is through the adoption and application of standards of good practice, such as the Open Standards for Conservation Practice formulated by the Conservation Measures Partnership (CMP 2004), or the framework for assessing management effectiveness of protected areas (Hockings 2006). Assessing adherence to such standards enables organizations to identify projects working according to best practice, to improve those that are not, and to track changes (O'Neil 2007; Stolton 2008). However, by themselves the application of such standards and assessment against them do not ensure positive conservation outcomes or assess whether they are being achieved. Nor do they provide a basis for systematizing or analyzing conservation experience.

Relatively few attempts have been made to evaluate the success of a large sample of conservation interventions. In one case, a consortium of UK organizations developed a scoring system to evaluate agri‐environment scheme agreements between farmers and the responsible government department. A multi‐disciplinary team carried out an appraisal of agreements against their different objectives (Carey 2003, 2005). This study was only able to look at the early stages of projects and therefore predicted outcomes and impacts rather than measuring them. In another case, the Zoo Measures Group developed an approach for assessing the impacts of zoo‐funded projects as the combination of the importance of their conservation targets, the volume or scale of the potential effect on the targets, and the effect of the project (Mace 2007). While this is a promising approach, the effect portion of the method remains something of a "black box," which has proved difficult to implement consistently (Walter 2005). The need remains for tools that can help in consistent evaluation of the effectiveness of a wide range of conservation actions. To address this patchy progress in overcoming the constraints identified above, CCF has collectively developed a conceptual framework and a practical scorecard for evaluating major categories of conservation activity.

The CCF framework

The framework developed by CCF defines successful conservation as "increasing the likelihood of persistence of native ecosystems, habitats, species, and/or populations in the wild (without adverse effects on human well‐being)." It then applies a stepwise model of how projects proceed, from inputs and implementation through outcomes to conservation effect, recognizing that conservation projects and programs characteristically involve several different types of activity, each of which will have different appropriate measures of its outcomes. Based on existing categorizations of conservation action (Salafsky 2002, 2008; IUCN‐CMP 2006) and as a result of broad consultation within CCF, the framework explicitly considers seven broad categories of conservation activity (Table 1) that together encompass most of the work that CCF members and other conservation organizations undertake. These types of action split roughly into two groups: those involving direct management of conservation targets (species or sites), and those that influence conservation status indirectly (through policy, development of enhanced or alternative livelihoods, capacity‐building, education, or research; Figure 1).
Continued use of these categories within and outside CCF has confirmed that nearly all conservation actions can be accommodated within them. For example, efforts to deal with markets typically involve some combination of addressing livelihoods of producers, developing and implementing policy, and raising awareness among consumers and/or producers, while fundraising is one aspect of building organizational capacity.

Table 1. The seven broad categories of conservation activity adopted by CCF as the basis for its evaluation framework. Together these encompass most of the work that CCF members and other conservation organisations undertake.

Managing species and populations: Actions directly involving species themselves, such as clutch management, captive breeding, etc.

Managing sites, habitats, landscapes, and ecosystems: Actions directly manipulating or managing a particular site.

Developing, adopting or implementing policy or legislation: Actions to establish frameworks within the processes of government, civil society or the private sector that make conservation goals official or facilitate their accomplishment; may include development, implementation and/or enforcement of legislation, management plans, sectoral policies, trade regulations, among others.

Enhancing, and/or providing alternative, livelihoods: Actions to improve the well‐being of people having impacts on the species/habitats of conservation interest, including through sustainable resource management, income‐generating activities, conservation enterprise, and direct incentives.

Training and capacity‐building: Actions to enhance specific skills among those directly involved in conservation; includes both building individual skills and improving the many components of organizational capacity.

Education and awareness‐raising: Actions directed at improving understanding and influencing behavior among people not necessarily directly involved in conservation action. Covers all forms of communication, including campaigns, lobbying, educational and publicity/awareness programs, and production of materials.

Research and conservation planning: Actions aimed at improving the information base on which conservation decisions are made, including survey, inventory, monitoring, remote sensing, mapping, and development of new technologies.

Figure 1. A simple representation of the relationship between different types of conservation activity and conservation effect. Because they work directly with conservation targets, species and site management are conceptually closer to conservation effect than the other categories of conservation action. Research in particular is likely to affect conservation status only through its influence on other types of conservation action.

The CCF framework addresses each of these activity types in turn through a conceptual model of the likely relationships between the implementation of the activity and its conservation effect, making explicit the linkages that are often assumed. These generic "results chains" (Salafsky 2001) provide a framework for assessing the outcomes and effectiveness of conservation projects, and in many cases for planning both projects and the associated monitoring required to track their outcomes. As shown in Figure 2 (and Supporting Information [Appendix S1]), the models differ in complexity between those activity types most directly linked to effects on conservation targets and the five categories of conservation activity that are less directly linked to conservation effect.
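To make the structure of Table 1 and Figure 1 concrete, the short sketch below encodes the seven activity categories and the direct/indirect split in Python. It is our illustration only, not part of the CCF tools; all class, constant, and function names are assumptions.

```python
# Illustrative sketch only (not part of the CCF tools): encoding the seven
# activity categories of Table 1 and the direct/indirect split of Figure 1.
from enum import Enum


class ActivityType(Enum):
    SPECIES_MANAGEMENT = "Managing species and populations"
    SITE_MANAGEMENT = "Managing sites, habitats, landscapes, and ecosystems"
    POLICY = "Developing, adopting or implementing policy or legislation"
    LIVELIHOODS = "Enhancing, and/or providing alternative, livelihoods"
    CAPACITY_BUILDING = "Training and capacity-building"
    EDUCATION = "Education and awareness-raising"
    RESEARCH = "Research and conservation planning"


# Per Figure 1, only species and site management act directly on conservation
# targets; the other five categories influence conservation status indirectly.
DIRECT_MANAGEMENT = {ActivityType.SPECIES_MANAGEMENT, ActivityType.SITE_MANAGEMENT}


def is_direct(activity: ActivityType) -> bool:
    """True if the activity works directly with conservation targets."""
    return activity in DIRECT_MANAGEMENT


print(is_direct(ActivityType.RESEARCH))   # False
```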
Figure 2. Examples of the conceptual models that form the framework for evaluating the success of conservation actions. The components of the flow charts broadly correspond to implementation, outcomes and effects (separated by horizontal lines). The hexagons show indicative (but not exhaustive) links to other activity types. (A) The model for Species and Site Management, which are directly analogous and have the simplest structure, with implementation of management actions leading directly to reduction of threats to the conservation target and/or an improvement in its responses to those threats (outcomes), and a consequent conservation effect. (B) For livelihoods‐related projects, as for the other activity types on the outer ring of Figure 1, the links between implementation and conservation effect are more complex, involving more intermediate outcomes (see text for more detail). Each of these more complex models includes a "key outcome" (marked with a star) on which threat reduction and/or improvement of responses depends. The remaining models show analogous sequences and are provided along with detailed explanations in the Supporting Information (Appendix S1).

Thus the implementation of species or site management tends to act directly on the threats to a conservation target (Salafsky & Margoluis 1999) and/or on the ability of the target to resist or respond to those threats (Figure 2A). These outcomes, reduced threats and/or improved ability to respond to threat, in turn produce a conservation effect: project‐scale changes in the conservation status of the target ecosystems, habitats, species, or populations. In contrast, a more indirect conservation activity such as enhancing livelihoods and/or providing alternatives can achieve conservation effect only through a more complex series of steps (Figure 2B). There are three broad routes: developing sustainable management regimes for important natural resources, encouraging the development of alternative sources of income, and providing direct incentives. In all three cases, project implementation involves promoting and building support for these practices, which in turn can lead to a sequential series of outcomes. First, the social and economic circumstances necessary for the adoption of the practices may be developed. Following this, the practices themselves may be taken up (something we call a "key outcome"; see below). Finally, if this happens and these new practices have been appropriately designed, this may lead to reduction in threats and/or improved responses of the conservation target, which again may lead to a conservation effect. These changes may or may not be accompanied by improvements in people's livelihoods and/or attitudes, but these will not affect conservation status except by influencing the targeted behaviors or practices.

For each of the activity types conceptually more remote from effects on conservation targets (Figure 1), the chain of links between implementation and conservation effect is longer than for species and site management. However, all include "key outcomes" that occur earlier and provide the platform for reducing threats to and/or improving the responses of conservation targets (Supporting Information [Appendix S1]). For livelihoods‐related projects the key outcome is the abandonment of damaging practices by the target group (Figure 2B). For policy work, it is the implementation of the policies or legislation promoted; for education and awareness raising, a change in behavior by the audience ultimately targeted by the work; for capacity‐building, increases in the quantity and/or quality of conservation action; and for research, the application of research results to conservation practice.
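The following is a minimal sketch of the generic results chain described above, with stages running from implementation through intermediate outcomes (including the starred "key outcome") to conservation effect. It is our illustration, not the CCF framework itself; the stage wording paraphrases Figure 2B and all class and field names are assumptions.

```python
# Illustrative sketch only; stage wording paraphrases Figure 2B and all class
# and field names are ours, not the CCF framework's.
from dataclasses import dataclass, field


@dataclass
class Stage:
    name: str
    achieved: bool | None = None   # None = insufficient information so far
    key_outcome: bool = False      # the starred outcome on which threat reduction depends


@dataclass
class ResultsChain:
    activity_type: str
    stages: list[Stage] = field(default_factory=list)

    def furthest_stage_achieved(self) -> str | None:
        """Name of the last stage known to have been achieved, if any."""
        achieved = [s.name for s in self.stages if s.achieved]
        return achieved[-1] if achieved else None


# Example: a livelihoods-related chain, read top to bottom as in Figure 2B.
livelihoods = ResultsChain(
    "Enhancing, and/or providing alternative, livelihoods",
    [
        Stage("Practices promoted and support built", achieved=True),
        Stage("Social and economic preconditions for adoption in place", achieved=True),
        Stage("New practices adopted by target group", achieved=None, key_outcome=True),
        Stage("Threats reduced and/or target responses improved"),
        Stage("Conservation effect (change in probability of persistence)"),
    ],
)
print(livelihoods.furthest_stage_achieved())
# -> 'Social and economic preconditions for adoption in place'
```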
The CCF evaluation tool

Building on this framework, CCF has developed a questionnaire‐based scorecard (Supporting Information [Appendix S2]) designed to help practitioners assess the outcomes and conservation effect of different sorts of projects in a systematic and consistent manner. For each activity type, there is a single questionnaire comprising carefully worded questions, which work progressively through implementation and outputs to outcomes and conservation effect. Each question offers four ordered answers reflecting increasing levels of achievement. For example, the question about the key outcome in the questionnaire on policy‐related activities is: "Is the new policy instrument or element being implemented?" and the answers offered are:

a. No, not at all
b. Yes, to a limited degree (limited speed, extent, or intensity of implementation)
c. Yes, largely
d. Yes, fully

The question on conservation effect for the same activity type is: "Has implementation of policy or legal changes brought about by the project changed the probability of persistence of the sites, habitats/landscapes or species of concern since the project began?" The answers offered are:

a. Probability has decreased
b. There has been no change in probability
c. Probability of persistence has increased somewhat; declines have been reduced or stabilized relative to trends anticipated without the project
d. Probability has increased greatly; declines have been reversed and recovery is occurring

There are further options for when insufficient information is available. The questionnaire also requires that the project‐specific meaning of the answer and the evidence on which it is selected are made explicit in an adjacent text box. This box provides both for elaboration of the evidence on which an answer is based and for categorization of that evidence as (i) opinion, (ii) supported opinion, or (iii) hard evidence. A section for background information on project development and process is also included for each activity type and for the project overall.

Careful attention has been paid to ensuring consistency across the questionnaires associated with the different activity types in both the questions and the responses offered. There are consistent options for addressing lack of information, and consistent rules about where in the process these kinds of answers are acceptable. For example, it is essential to answer all questions about implementation substantively, while some lack of information about later outcomes and conservation effect is to be expected, especially for projects initiated relatively recently and for those activity categories less directly linked to conservation effect (Figure 1). The questionnaire is sufficiently generic that the instances in which a question is not applicable are very limited, and this answer always includes a meaning specific to the question.

The questionnaires have been refined in the light of several different types of testing. On several occasions, we tested consistency of interpretation by exposing assessors who did not previously know about a project to details about it and its outcomes through a presentation and some limited questioning. Participants then independently filled in the relevant questionnaire. Analysis of their answers made it possible to identify questions that were not interpreted consistently in the face of a constant information base and to improve them accordingly. Trial use of the questionnaire by individual project leaders further identified where wording and concepts needed to be clarified.
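As a hedged illustration of the ideas above, the sketch below shows how one scorecard question, its four graded answers, and the evidence categories might be represented, and how trial answers from several assessors could be screened for questions that are interpreted inconsistently. This is our own sketch, not the CCF tool or its testing protocol: the class names, the example project details, and the spread-based flagging rule are assumptions.

```python
# Illustrative sketch only; names, example data, and the flagging rule are ours.
from collections import defaultdict
from dataclasses import dataclass
from enum import IntEnum


class Evidence(IntEnum):
    OPINION = 1
    SUPPORTED_OPINION = 2
    HARD_EVIDENCE = 3


@dataclass
class Question:
    text: str
    options: tuple[str, ...]   # four answers, ordered by increasing achievement


@dataclass
class Answer:
    choice: int | None         # index 0-3, or None for "insufficient information"
    meaning: str               # project-specific meaning of the chosen answer
    evidence: Evidence


key_outcome_q = Question(
    "Is the new policy instrument or element being implemented?",
    ("No, not at all",
     "Yes, to a limited degree (limited speed, extent, or intensity of implementation)",
     "Yes, largely",
     "Yes, fully"),
)

example = Answer(choice=2,
                 meaning="Hypothetical example: new regulations enforced in most districts",
                 evidence=Evidence.SUPPORTED_OPINION)


def inconsistent_questions(trial: dict[str, dict[str, int]], max_spread: int = 1) -> list[str]:
    """Flag questions whose answers spread widely across assessors who all saw
    the same project presentation. `trial` maps assessor -> {question id: choice}."""
    by_question: dict[str, list[int]] = defaultdict(list)
    for choices in trial.values():
        for qid, choice in choices.items():
            by_question[qid].append(choice)
    return [qid for qid, c in by_question.items() if max(c) - min(c) > max_spread]


# Three assessors score two questions after the same presentation;
# question "P3" would be flagged for rewording.
print(inconsistent_questions({
    "assessor_A": {"P1": 2, "P3": 0},
    "assessor_B": {"P1": 2, "P3": 3},
    "assessor_C": {"P1": 3, "P3": 1},
}))   # -> ['P3']
```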
Moving forward

The Cambridge Conservation Forum framework and evaluation tool described here represent a consensus about the generic connections between actions and effects across the full breadth of conservation activity and experience in a wide range of organizations. The classification of activities, conceptual models, and associated scorecard were selected and further developed from a wide range of possible approaches through an extensive process of collaboration and discussion among the diverse membership of the CCF, involving more than 50 individuals from about 20 organizations. Moreover, the development process was a rigorous one, which included several iterations and several different approaches to testing and refining these tools with different groups of users. The focus on clarity of wording and internal consistency helps to ensure consistency of interpretation.

These tools will help to address the major challenges identified earlier, especially because of their practicality and appeal to the conservation practitioners who are their principal users and because they have the potential to generate reliable and systematic information on conservation outcomes and effects. CCF's framework and evaluation tool have indeed been warmly welcomed by many organizations, both within and outside the Forum, as practical innovations that may help them to (1) evaluate and analyze existing conservation experience, (2) plan future activities and associated monitoring, and (3) identify steps or actions that could enhance their conservation impact. They are employed principally as a form of self‐assessment, as project personnel are most likely to have access to the evidence needed to respond to the questions. While this makes the process inherently subjective and open to concerns about bias, the emphasis on the evidence base for the answers to the questionnaire means that the results of evaluations using these tools can be subject to external audit or validation if required (e.g., by funders). Users have found the tools straightforward to apply and have reported that the evaluation tool has in some cases helped to highlight key areas for monitoring that are essential for demonstrating particular outcomes, many of which had not previously received attention. They have also commented enthusiastically about the usefulness of both the framework and the questionnaires for clarifying project objectives, for planning new work, and for prioritizing the application of scarce resources to the monitoring needed to evaluate it.

The framework and evaluation tool developed by CCF build on and support much existing thinking about the importance of using conservation experience to inform future conservation actions (Kleiman 2000; Salafsky 2002; Saterson 2004; Ferraro & Pattanayak 2006).
Like planning‐based approaches such as the logical framework and results chains (Salafsky 2001) and tools for implementing them, like Miradi (Miradi 2007), the CCF tools help users to make explicit the assumed linkages between their activities and the desired conservation outcomes. Furthermore, the CCF tools help generate standardized information to support this process by providing generic models for most types of conservation activity. They also potentially complement process‐based assessments like conservation audits based on the CMP's Open Standards (CMP 2004; O'Neil 2007). The CCF framework and evaluation tool are most closely related to other "results‐based" approaches to evaluation, such as threat reduction assessment (Salafsky & Margoluis 1999) and the scoring systems adopted for agri‐environment scheme appraisals (Carey 2003, 2005) and by the Zoo Measures Group (Mace 2007). The principal advance achieved by the CCF tools is the introduction of a unified framework for managing information on existing conservation experience. This will in turn facilitate analysis and synthesis of this experience and support improvements to future conservation efforts. The CCF tools also complement the growing drive for Evidence Based Conservation (Sutherland 2004; Sutherland 2005) by providing a means to assess the effects of whole projects as distinct from single management interventions. Crucially, the CCF framework and evaluation tool potentially provide a means to help ensure that the data needed for evidence‐based conservation are collected in future. By highlighting key conceptual linkages in existing work and asking about their results in standardized forms, the CCF framework and questionnaire are potentially powerful tools for providing practical measures of success that can be used to address questions about conservation effectiveness in an analytical fashion.

To date, much evaluation has focused principally on measures of project implementation and countable outputs. Biological monitoring is limited by its cost and is rarely linked to evaluation. The CCF framework and evaluation tool help users to identify and assess intermediate or "key" outcomes that precede, and may be easier to measure than, changes in threat or biological status. Results from a preliminary trial of these tools (Kapos in press) suggest that key outcomes are useful predictors of changes to the probability of persistence of conservation targets (conservation effect), and that they perform this function far better than commonly reported measures of project implementation. Thus, the CCF tools can help practitioners to identify the likely impacts of their actions, despite constraints of time frames and scarce resources, even for projects still in progress and for interventions such as capacity building or policy‐related work where biological impacts are not commonly measured.

Looking ahead, wider application of the CCF framework and especially the evaluation tool could help to provide substantive data on the outcomes and effects of conservation projects, which are generally hard to come by (Kleiman 2000; Saterson 2004; Brooks 2006). While few organizations are good at openly declaring whole projects to be complete failures or sharing their failures widely (Redford & Taber 2000), there is likely to be greater interest in examining projects by their component interventions to identify more and less successful approaches.
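As a rough, hypothetical illustration of the kind of comparison described above, the sketch below uses a rank correlation to ask whether key-outcome scores track scored conservation effect more closely than implementation scores do, across a small pool of evaluated projects. The scores are invented and the choice of Spearman's rho is ours; this is not the analysis reported by Kapos (in press).

```python
# Illustrative sketch only: invented scores and an assumed analysis choice.
from scipy.stats import spearmanr

# Per-project scores on the questionnaire's ordered 0-3 scales (invented values).
implementation = [3, 3, 2, 3, 1, 2, 3, 2]
key_outcome    = [2, 3, 1, 2, 0, 1, 3, 1]
effect         = [1, 3, 1, 2, 0, 1, 2, 1]

# Compare how well each candidate predictor tracks scored conservation effect.
rho_impl, p_impl = spearmanr(implementation, effect)
rho_key, p_key = spearmanr(key_outcome, effect)
print(f"implementation vs effect: rho={rho_impl:.2f} (p={p_impl:.3f})")
print(f"key outcome vs effect:    rho={rho_key:.2f} (p={p_key:.3f})")
```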
Evaluation of activity types within complex projects and analysis of those evaluations may prove a useful mechanism for promoting documentation and sharing of unsuccessful experiences in conservation, as called for by Redford and Taber (2000). Applying the CCF tools more widely and pooling the results would generate an invaluable dataset for promoting conservation learning and for testing hypotheses about predictors and determinants of conservation success both within and external to projects. For example, such a dataset could be used to explore whether similar factors determine the success of different activity types, to compare the success of different activity types in particular conditions or in different combinations, and to compare the conservation effects of different strategies for allocating resources. By combining the results of such evaluations with assessments of the importance of the conservation targets and the scale of the interventions, it may also be possible to extend the Zoo Measures Group work (Mace 2007) and begin building a picture of overall conservation impacts. This should in turn provide further incentives for evaluating and sharing conservation experience.

Conclusion

We believe the CCF tools, which are now available for wider use (http://www.cambridgeconservationforum.org.uk/measures_outputs.htm), will help circumvent the shortage of experimental interventions and the reluctance of many organizations to share failures (Redford & Taber 2000). They can help to overcome many of the constraints commonly associated with evaluation of conservation projects. Specifically, they help clarify conservation objectives and provide a standardized framework that serves as a useful basis for assembling, managing, and using information about project outcomes and existing conservation experience. By identifying key outcomes that can predict conservation success and can be assessed in relatively short time frames, they help to make more efficient use of scarce monitoring and evaluation resources. With wide application, the CCF framework and evaluation tool can provide a powerful platform for drawing on the experience of past and ongoing conservation projects to identify quantitatively the factors that contribute to conservation success. The capacity for rigorous analysis and synthesis should give donors and practitioners alike a strong incentive to evaluate. We encourage conservation practitioners to try these tools and thereby help build a systematic catalogue of conservation experience that can be used to establish ways of improving our performance. We believe that adopting a better measure of success is crucial to reducing the rate of conservation failure.

Editor: Justin Brashares

Acknowledgments

A John D. and Catherine T. MacArthur Foundation grant to CCF (http://www.cambridgeconservationforum.org.uk) via the University of Cambridge supported this work. CCF member organizations supported the participation of their staff in the project. We thank the University of Cambridge Department of Zoology and UNEP‐WCMC for hosting project staff and meetings.
The Conservation Measures Partnership, IUCN Programme Evaluation Group, Earthwatch, the BAT Biodiversity Partnership, and M Aminu‐Kano, M Ausden, E Ball, S Barnard, A Bowley, E Bowen‐Jones, R Brett, M Brooke, P Brotherton, P Buckley, N Bystriakova, D Coomes, B Cooper, B Dickson, J Doberski, E van Ek, J Ekstrom, L Evans, D Gibbons, M Green, R Green, A Grigg, D Hawkins, M Harris, P Herkenrath, A Hipkiss, G Hirons, R Hossain, F Hughes, J Hughes, J Hutton, C Ituarte, D Kingma, P Laird, A Lanjouw, P Lee, C Magin, T Milliken, R Mitchell, D Noble, S O'Connor, T Oldfield, K O'Regan, E Papworth, A Rodrigues, R Siregar, P Stromberg, B Sutherland, D Thomas, R Trevelyan, G Tucker, S Wells, J Williams, and M Wright provided helpful input.

Journal: Conservation Letters (Wiley)
Published: October 1, 2008
eISSN: 1755-263X
DOI: 10.1111/j.1755-263X.2008.00025.x
Copyright © 2008 Wiley Subscription Services, Inc., A Wiley Company
