Abstract

Empirical place-based studies remain the research mode of most environmental field scientists. For their own sake and that of synthetic analyses based on them, such studies should follow rigorous, integrated frameworks for formulating, designing, executing, analyzing, interpreting, and reporting investigations. The inquiry cycle and applied inquiry cycle provide such frameworks: research questions complying with strict guidelines, research design following 17 detailed steps, and ordered sequences of reflections on data that begin with possible causes of their general tendencies and exceptions (outliers) and then consider possibilities involving other spatiotemporal scales. The applied inquiry cycle evaluates alternative place-based management guidelines. In these studies, reflection on results can lead to implementing the most promising alternative examined, monitoring the consequences, and engaging in adaptive management. The integration from start to finish and the numerous reality checks of the two frameworks provide field researchers with tools to carry out the best, or least flawed, field investigations possible.

“We have no shortage of fabulous models and supercomputers; what we lack in many cases are good field data to plug into the models.”
—Reed F. Noss (1996, p. 2)

Empirical, local-scale field studies remain indispensable to continued progress in the environmental sciences (Noss 1996, Lindenmayer and Likens 2011, Fischer et al. 2011, 2012, Tewksbury et al. 2014, Ríos-Saldaña et al. 2018). On one hand, place-based field studies can significantly advance the understanding and the effective management of the landscapes where they take place (Lawton 1996, 1999, Noss 1996, Arlettaz et al. 2010, Ricklefs 2012, Ríos-Saldaña et al. 2018, Sutherland et al. 2019). On the other hand, they provide the raw material for synthetic (Ríos-Saldaña et al. 2018) or data-intensive (Elliott et al. 2016) analyses that simultaneously incorporate results from numerous place-based field investigations and are ever more common in the environmental sciences, both basic and applied (Fischer et al. 2011, 2012, Carmel et al. 2013, McCallen et al. 2019). For example, data-intensive macroecological modeling fuels the exploration of large-scale patterns in basic ecology (Elliott et al. 2016, Connolly et al. 2017). Meta-analyses (Gurevitch et al. 2018) incorporate results from the numerous primary investigations that deal with a given theme in either basic or applied science. Documenting the accelerating, worldwide environmental changes linked to human activities and seeking realistic means to decelerate them (Ripple et al. 2017) involve data-intensive efforts that draw on place-based studies from a great variety of environmental sciences ranging from ecology to the social sciences (Eigenbrode et al. 2007, Anderson et al. 2015). Clearly, the reliability and scientific rigor of place-based empirical field studies should concern both the scientists carrying them out and others who later incorporate their results into computer-intensive synthetic studies.

In principle, the reliability and scientific rigor of any empirical scientific investigation depend on how closely it follows the sequence of five steps diagramed in figure 1, beginning with a carefully framed research question and continuing through critical and far-reaching proposals catalyzed by the study's results.
These proposals in turn may suggest new or even urgent directions for new investigations, or they can lead directly to applications.

Figure 1's scheme may appear simplistic and obvious. Many researchers in the environmental sciences—including but by no means limited to conservation biology, ecology, natural resource management of all kinds, protected area management, socioecology, agroecology, restoration ecology, and ethnobiology—follow it intuitively, and their publications incorporate it implicitly although seldom explicitly. We submit that following figure 1's scheme is necessary for carrying out reliable, scientifically rigorous empirical field studies, but it is not sufficient. What is missing? Clear, detailed guidelines that are specific to each step but also are integrated across all five.

Figure 1. A proposal for the five steps of an empirical field study.

In the present article, we propose the inquiry cycle (IC; figure 2a) and the applied inquiry cycle (ApIC; figure 2b) as thoroughly field-tested research frameworks that explicitly define and integrate all five steps of figure 1. These frameworks combine coherence, clarity, practical utility, and numerous explicit reality checks from start to finish. Many existing resources address one or more of the steps of figure 1, but to our knowledge none addresses all five. For example, influential texts such as Ford (2000), Morrison and colleagues (2001), Valiela (2001), and Pickett and colleagues (2007) principally address steps 1–3 (figure 1). These and many other resources stress that field experiments are the best means to answer research questions. Countless valid and urgent research questions in the field, however, do not lend themselves to controlled, randomized field experiments (Morrison et al. 2001). The design and interpretation of nonexperimental (observational) field studies present the investigator with many challenges not addressed even by such key texts as Mead (1988), Scheiner and Gurevitch (2001), and Quinn and Keough (2002). These challenges include the impossibility of controlling the multitudinous factors that impinge on the phenomenon studied (cf. Lawton 1999, Gilbert 2011), the near impossibility of truly replicating studies in different places or times because said factors and their effects vary notoriously in time and space, and the difficulty of interspersing no-longer-experimental units of different treatments (Hurlbert 1984, Lawton 1999, Shadish et al. 2002). Most guidelines of the IC and ApIC, however, apply equally to observational and experimental field studies while recognizing explicitly the critical differences between the two from step 1 (figure 1) through step 5 and beyond.
As we will show, these guidelines incidentally provide novel perspectives on pseudoreplication (Hurlbert 1984), which continues to plague not only observational but also experimental studies in research fields throughout the natural and social sciences (Hurlbert 2009); remedy the erratic usage of the universally used terms sampling unit and experiment, which has led to frequent and severe problems in research design and interpretation; and provide creative alternatives to the unconsciously loose use of language that leads many researchers to make unfounded affirmations when extrapolating beyond their studies' spatiotemporal boundaries (figure 1, step 5), a critical and frequent error scarcely debated in the environmental sciences or elsewhere (cf. Nuzzo 2015).

Figure 2. (a) The inquiry cycle, IC, and (b) the applied inquiry cycle, ApIC, modified from Feinsinger and colleagues (2010) and Feinsinger and Ventosa Rodríguez (2014).

Space limitations allow only a skeletal presentation of the IC and ApIC in the present article. We strongly urge interested readers to consult the supplemental materials for a much more complete presentation. Most examples in the article and in the supplemental materials involve conservation biology, ecology, natural resource management, and protected area management because our own experience involves those themes. Nevertheless, the IC and ApIC can be applied to empirical, place-based studies in any environmental science as long as the researchers respect the critical role played by natural history (Noss 1996, Willson and Armesto 2006, Ricklefs 2012, Tewksbury et al. 2014), which we will define in the present article, with caution (Martin 2020), to include the natural history of human communities. Our objective in this essay is to introduce seasoned and novice field researchers alike to the IC and ApIC as tools useful for conceiving and completing the best possible—or the least flawed—place-based investigations within their reach under real field conditions. Before elucidating those tools, however, it pays to review researchers' different motivations for undertaking local-scale field research in the first place.

Motivations for doing empirical field research

Environmental scientists undertake empirical place-based field research with a variety of motivations ranging from purely intellectual endeavors to searches for the best possible solutions to practical problems. In the present article, we define three particular points along a continuum of motivations (see Stokes 1997). Later we cite a more detailed classification found in supplemental materials file S4.

In theory-driven research, the researcher first becomes intrigued by a general theory or model and then seeks a study system providing the conditions for testing it. For example, the intermediate disturbance hypothesis (Connell 1978) has inspired numerous place-based studies—for example, those of Townsend and colleagues (1997) on stream macroinvertebrates in New Zealand and Molino and Sabatier (2001) on trees in French Guyana. Most texts offering advice on relating empirical studies to ecological paradigms promote theory-driven research (e.g., Ford 2000, Valiela 2001, Pickett et al. 2007).
Alternatively, prior experience with a given system might have catalyzed the researcher's interest in theory, but testing the latter, not curiosity about the study system in itself, remains the primary interest. In theory-driven research, concern about the study system's unique natural history seldom appears explicitly in steps 1 and 2, at least, of figure 1. Neglecting local natural history, however, will almost certainly lead to surprises, often unpleasant ones. Careful and open-minded observation of the system before framing the research question and designing the place-based study can only benefit a theory-driven investigation.

In curiosity-driven research (Martínez et al. 2006), field observations catalyze the researcher's curiosity and generate an investigation to find out more. Theory infuses the research, but the investigator's curiosity remains focused on the study system where the observation was made, not on its use only as a template for testing general theory. Noss (1996), Willson and Armesto (2006), Ricklefs (2012), Tewksbury and colleagues (2014), Ríos-Saldaña and colleagues (2018), and many others stress the critical role of knowing and respecting the system's natural history in these studies. Long-term studies on the ecology of place (e.g., McDade et al. 1994, Nadkarni and Wheelwright 2000, Billick and Price 2010) epitomize curiosity-driven, local-scale research in ecology.

In solution-driven research, field observations provoke concerns about the system observed and generate novel ideas for better managing it or using its elements. Not all applied environmental science is solution driven. Many applied investigations document problems associated with anthropogenic perturbations of all sorts and analyze their severity but stop short of evaluating realistic alternatives for place-based solutions (Arlettaz et al. 2010). Such problem-driven research is motivated by curiosity and anxiety: How are those perturbations affecting the system observed or an element of it, such as an endangered bird species? Solution-driven research, however, explicitly evaluates alternative, realistic means of resolving or at least ameliorating the concern, as is detailed below. Implicitly or even unconsciously, theory always infuses solution-driven research, but the latter remains closely focused on the local system and on developing, then implementing, guidelines to manage it better. Understanding natural history is crucial to solution-driven research (Sevillano et al. 2010, Tewksbury et al. 2014, Werling et al. 2014, Anderson et al. 2015). The investigations summarized by Altieri (1995) and Sutherland and colleagues (2019) exemplify place-based, solution-driven research.

Origin and geographic scope of the IC and ApIC

Developing courses on research design for field scientists throughout Latin America during the last quarter century has forced us to rethink the often vaguely defined frameworks that had previously guided our own theory-, curiosity-, or solution-based field research and to seek more explicit, commonsense, and coherent approaches useful to the diverse participants in these courses. With few exceptions, the research interests of those participants—principally university faculty, other professional researchers, graduate students, protected area personnel, and agriculturists—have been curiosity- or solution-driven rather than theory-driven.
Few of their place-based research questions, apart from those of agriculturists and some natural resource managers, could be answered through controlled, randomized experiments. Texts that concentrated almost exclusively on experimental design (e.g., Mead 1988, Scheiner and Gurevitch 2001, Quinn and Keough 2002, Shadish et al. 2002) did not address many of the real-life challenges to designing robust observational studies. Somewhat to our surprise, we discovered early on that the hypothetico-deductive scientific method (Krebs 2000, Elliott et al. 2016), which we had always implicitly assumed was framing our own research, helped little with the formulation of research questions, step 1 of figure 1, and not at all with steps 2–5 (Feinsinger 2013). Instead, a quite different approach to field studies, addressing and integrating all five steps, began to evolve.

A primitive version of the IC was originally proposed for use outside academic science (Feinsinger et al. 1997). This IC should not be confused with the variety of approaches, mostly to K–12 pedagogy, that have employed the label inquiry cycle since then (Pedaste et al. 2015). The IC, still primitive, and a primitive ApIC made cameo appearances in a text on field research design (Feinsinger 2001). In the years that followed, thanks in great part to the constructive criticisms and insights of numerous course participants with many years of field experience among them, the IC and ApIC evolved into detailed frameworks for place-based environmental field studies undertaken by experienced research professionals, their students, and protected area personnel, among others. These developments were published in Spanish (e.g., Feinsinger 2004, Feinsinger and Ventosa Rodríguez 2014; see supplemental materials file S1) not because the IC and ApIC function only in Latin America but rather to develop written resources for the publics with whom we interacted the most. These frameworks can function anywhere, of course. In the present article, we hope to bring the mature IC (figure 2a) and ApIC (figure 2b) to the attention of environmental field scientists and their students whatever the country, native language, and local landscape.

Curiosity-driven research and the inquiry cycle

“Scientists… tend to take the thought processes that drive their research for granted.”
—E. David Ford (2000, p. 6)

The three steps of the inquiry cycle (figure 2a) reflect the scheme in figure 1: research question (figure 1's step 1), action (steps 2–4), and reflection (step 5). As we will discuss later, the following elucidation also applies to theory-driven research and, with some important changes, to the applied inquiry cycle for solution-driven research (figure 2b).

The research question: Introduction

At the outset of the IC, the researcher formulating the research question (Ford 2000, Valiela 2001) must ponder explicitly and articulate clearly the thought processes that led up to it, as the straightforward example in box 1 indicates.

Box 1. The research question proposed by Cuban protected area personnel during a 2012 course, one version for discrete levels (A) and one for continuous levels (B) of the design factor (see the text).

Observation
In the national wildlife refuge Delta del Cauto, southeastern Cuba, white ibis (Eudocimus albus) nest both in monospecific and mixed-species colonies. Reproductive success appears to vary widely among ibis nests.
Conceptual construct
Colonial-nesting waterbirds often but not always achieve higher reproductive success in mixed-species groups than in monospecific groups because of the former's greater diversity of “early warning systems” and predator-deterring behaviors.

Place-based curiosity
In the national wildlife refuge Delta del Cauto, might white ibis that nest with other waterbirds in mixed-species colonies enjoy better protection from predators and thereby achieve greater reproductive success than white ibis nesting in monospecific colonies?

Research question
(A) In the national wildlife refuge Delta del Cauto, southeastern Cuba, during the breeding seasons of 2013–2018 how does mean number of young fledged per egg laid in white ibis nests (Eudocimus albus) vary between mixed-species and monospecific nesting colonies? Or (B) in the national wildlife refuge Delta del Cauto, southeastern Cuba, during the breeding seasons of 2013–2018 how does mean number of young fledged per egg laid in white ibis nests (Eudocimus albus) vary among waterbird colonies having different proportions of ibis nests among the total?

Observation
In the field, the environmental scientist notes a measurable, biologically or socioecologically significant feature that appears to vary among recognizable entities (e.g., individuals or habitat patches) or simply among points in space and time that display different conditions.

Conceptual construct
The observation triggers a theoretical concept—a conceptual construct (Pickett et al. 2007)—that already exists in the researcher's mind because of academic training or natural history knowledge. The conceptual construct may be a paradigm (Graham and Dayton 2002), a generalization (Pickett et al. 2007), a hypothesis such as the intermediate disturbance hypothesis (Connell 1978), or simply a combination of field experience and common sense. Whatever its origin, the conceptual construct has global reach (Krebs 2000, Pickett et al. 2007). It is not by any means restricted to the particular time, place, study system, species, or community where the observation was made. It allows for exceptions, recognizing the contingency, variability, and historicity (Loehle 1987, Lawton 1999, Pickett et al. 2007, Price and Billick 2010, Grant and Grant 2010, Gilbert 2011) that characterize most ecological and socioecological phenomena (see Anderson et al. 2015). The conceptual construct often takes the form of “In general, A can cause B,” where A may be a single causal factor or a combination of several.

Place-based curiosity
The combination of local-scale observation and global-scale conceptual construct sparks the researcher's curiosity about the system where the observation was made: “Could A have been the cause of the conspicuous variation in B observed here?” This statement of curiosity, however, is not yet suitable as a research question. Whether or not A, acting in the past, is responsible for today's variation in B cannot be answered directly by research undertaken today. Without travelling backwards in time, the investigator carrying out a nonexperimental study cannot unequivocally evaluate A's role.
Neither can the investigator who carries out today a fully controlled, randomized field experiment where A is the treatment factor, who is really addressing a very different sort of query: “Today, would A lead to significant variation in B in this system if all other possible influences on B were controlled?” The field researcher who fails to articulate place-based curiosity, distinguishing it explicitly from the global-scale conceptual construct on one hand and the research question proper on the other, risks falling into the logical trap of affirming the consequent—affirming that the operation of A in the past indeed caused the variation observed today in B—whether the research carried out today is experimental or not.

One of the most frequent problems that we have encountered in published studies, research proposals, and graduate theses is the assumption, often unconscious, that a query articulating place-based curiosity already qualifies as an adequate research question. As the next paragraph shows, it does not. It cannot yet guide research design or lead to a concrete answer. Furthermore, the same place-based curiosity can lead to numerous truly answerable research questions.

The research question: Guidelines

The research question proper must meet five criteria detailed in Feinsinger (2014) and Feinsinger and Ventosa Rodríguez (2014; see supplemental materials file S1).

First, the question must be directly answerable by recording data from the system where the original observation was made. The place-based curiosity “In the national wildlife refuge Delta del Cauto, might white ibis that nest in mixed-species aggregations enjoy better protection from predators and achieve greater reproductive success than white ibis nesting in monospecific groups?” (box 1) does not yet comply with this criterion. Reproductive success could be quantified if defined more precisely, but protection from predators could not. This place-based curiosity could actually catalyze any number of well defined, truly answerable research questions in addition to those of box 1's final step.

Second, the question must be comparative (cf. Valiela 2001, Pickett et al. 2007, Gurevitch et al. 2018). Controlled, randomized experiments provide data from experimental units that the investigator has subjected to different levels of the treatment factor. Observational (nonexperimental) studies provide data from units that already display their different, preexisting conditions (see the glossary in supplemental materials file S2). Data yielded by noncomparative questions, such as those documenting avian life histories, are obviously of great importance to natural history but generally provide few avenues for speculating on possible causal factors, relating results to theory, or wondering about other contexts and scales. Contrast, for example, the two research questions in box 1 with the following noncomparative question: “In the national wildlife refuge Delta del Cauto, southeastern Cuba, during the breeding seasons of 2013–2018 what is the mean number of young fledged per egg laid in white ibis nests (Eudocimus albus) that are located in mixed-species colonies?” Such a question could provide data valuable and interesting in themselves. It would fail, however, to address the place-based curiosity or provide openings for the far-reaching reflections that would result from a comparative question involving a slightly more ambitious study within the same landscape.
Third, the question must be capable of producing an interesting answer that satisfies the researcher's place-based curiosity at least in part. The answer should not have been obvious from the start. For example, eliminating the word how from the first research question in box 1 leaves this: “In the national wildlife refuge Delta del Cauto, southeastern Cuba, during the breeding seasons of 2013–2018 does mean number of young fledged per egg laid in white ibis nests (Eudocimus albus) vary between mixed-species and monospecific nesting aggregations?” To answer the question, researchers need not collect a single datum. The answer is undoubtedly “Yes, the number varies,” simply because the means of two samples will always differ, at least microscopically, even when obtained from the same statistical population (Johnson 1999; see the short simulation at the end of this section). Simply adding how to the question in box 1, however, requires that the researchers invest time and effort in designing a sound study, amassing quantitative data, and later pondering the biological significance of the results.

Fourth, the question must be simple and direct, phrased—or at least easy to phrase—in straightforward language understandable to intelligent local laypersons. This criterion requires that field researchers be willing and able to eschew technical language that no one outside their specialty can understand and then to engage in a productive interchange of ideas with the local community. This criterion has the additional benefit of requiring researchers to clarify their own ideas (see Pinker 2014). In our experience, researchers who have trouble complying with this guideline do not themselves really understand what they are doing or why. Any element of box 1's research questions that seems overly technical for some laypersons could easily be reworded.

Fifth, the question must be coherent with the three thought processes that preceded it. That is, the initial observation should foreshadow the elements of what will be compared (criterion 2) and what will be recorded in each unit compared (criterion 1), as specified by the research question. The cause–effect proposals of the conceptual construct and place-based curiosity should also be logically related to those elements. The real-life example in box 1 complies with this criterion.

Compliance with all five criteria is critical, for the research question's exact wording will condition study design, data analysis and presentation, and the interpretation and extrapolation of results (Feinsinger and Ventosa Rodríguez 2014; supplemental materials file S1). Almost always the most concise, precise, and effective phrasing for complying with criteria 1 and 2 (see the glossary in supplemental materials file S2) is “How does Y (what to record: the response variable) vary among units i of the different levels (conditions, treatments) of X (the design factor or, in randomized experiments, the treatment factor)?” (Feinsinger 2014). Note the change of symbols from A and B in the conceptual construct to X and Y in the present example. Y is often nearly synonymous with B although more precisely defined, but X is synonymous with A only in a truly randomized, controlled experiment. Also note that this wording ensures that the research question is open ended, in contrast to a directional prediction derived from a directional scientific hypothesis. We return to the significance of this distinction at the end of this essay.
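The point behind criterion 3 is easy to verify with a minimal simulation; all numbers below are hypothetical, and any scientific computing stack would serve. Two samples drawn from one and the same statistical population virtually never share an identical mean, so a bare "does Y vary?" question is answered before any fieldwork begins.

```python
# Minimal sketch (hypothetical numbers): two samples drawn from the SAME
# statistical population still yield slightly different means, so a bare
# "does Y vary?" question answers itself; only "how does Y vary?" needs data.
import numpy as np

rng = np.random.default_rng(1)

# Both "colony types" share one true distribution of young fledged per
# egg laid (mean 0.6, standard deviation 0.15).
colony_type_1 = rng.normal(0.6, 0.15, 12)  # 12 hypothetical colonies
colony_type_2 = rng.normal(0.6, 0.15, 12)  # 12 more, same population

print(colony_type_1.mean(), colony_type_2.mean())  # unequal in practice
```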
“Most people go into the field with only a hazy impression of what they will do, how they will analyze it, and why they are doing it.”
—William J. Sutherland (2006, p. 1)

Action: Designing the research

The preceding section's guidelines ensure that field researchers will confront and resolve the third concern in the quote from Sutherland (2006), “why they are doing it,” but do not yet resolve the “hazy impression” regarding what they will do and how they will analyze it. Investigators must thoroughly dispel that haze before embarking on the field study proper. They should design all aspects of the research before recording the first datum of the study proper (cf. Nuzzo 2015), only then embark on data collection by faithfully following the design, and on completing the study analyze and present the results with great care (figure 2a, 2b).

Research design has two complementary definitions (Feinsinger and Ventosa Rodríguez 2014; see also Boitani and Fuller 2000, Ford 2000, Valiela 2001, Sutherland 2006): (a) design is the process of adjusting data collection to what the research question's wording dictates—or of adjusting the research question's wording to what data collection permits; (b) design is the search for the means to obtain the most faithful portrayal of what it is that you truly wish to know. At first glance these definitions may seem simplistic and trivial. On applying them to research proposals and publications, however, readers may find—as we have found—a surprising number of cases where research designs were inadequate to answer the questions posed or where they provided unfaithful portrayals of what researchers wished to know. As will be illustrated later, keeping the two definitions in mind when designing research can significantly benefit environmental scientists' field investigations.

The complete process of research design for the IC and ApIC consists of 17 steps (supplemental materials file S3) that apply to randomized, controlled field experiments as well as observational studies. Dealing conscientiously with the conceptual or practical challenges of a single step often leads to an adjustment to the research question. Dealing conscientiously with the challenges of all 17 almost always produces a thoroughly rewritten research question considerably more precise and realistic than the original version that “only” needed to comply fully with the five guidelines just discussed (Feinsinger and Ventosa Rodríguez 2014; supplemental materials file S1). In the present article, we briefly highlight three of the many critical issues addressed during this process.

Take time and space into account

Researchers should compare the time span available for the study with the pace of the ecological or socioecological processes involved in the conceptual construct and place-based curiosity, then ponder whether and how they will be able to interpret or apply the results objectively (cf. Guerrero et al. 2013). For example, numerous studies, usually lasting a year or less, examine the relation between selective logging and avifauna in tropical and subtropical forests. Most such studies compare primary forest control groups with forestry concessions logged at a given moment in the past (Barlow et al. 2007: table 1); few studies include logged plots of different ages. Ecological succession following selective logging or another disturbance is not instantaneous by any means, whereas a decade or two after selective logging the evidence that logging took place may scarcely be detectable.
Therefore, researchers comparing unlogged control forests with just-logged concessions will most likely overestimate the medium- and long-term effects of selective logging on forest birds, whereas researchers comparing control groups with concessions logged more than 20 years previously will most likely underestimate logging's immediate and short-term effects. Whatever the pace of the underlying processes involved, the research question for any field study must explicitly state the study's temporal and spatial limits, as in box 1's example. This critical detail helps to eliminate unfounded assertions during the second phase of reflection (figure 2a, 2b; see below).

Distinguish Y from X, detail X

Earlier, we proposed that most research questions might best follow the generic form “How does Y (the response variable) vary among units i of the different levels of X?” where X is the design factor or, in controlled, randomized experiments, the treatment factor (supplemental materials file S2). The researcher must explicitly separate all design elements that pertain to X from those that pertain to Y, and fully define those aspects of X before giving any further thought to Y.

To define X, the investigator specifies the design factor or factors (in experiments, the treatment factor or factors) and next the levels (experimental treatments) themselves, or all possible combinations of levels if working with ≥ 2 factors. Levels of the design factor may be discrete or continuous (supplemental materials files S2 and S3). Research question A in box 1 specifies two discrete levels: mixed-species nesting colonies and monospecific colonies. Question B specifies continuous levels. Either choice requires a large number of colonies: many replicates for each discrete level (question A) or a variety of colonies, each with its unique level of the proportion of white ibis nests among the total in the aggregation (question B).

Each nesting colony is one unit i of the comparison, that is, the comparison unit (response unit in Feinsinger 2001). When defining the comparison unit and ensuring its biological or socioecological independence from neighboring units, environmental field scientists must pay strict attention to the point of view of the organisms or other phenomena involved, whether the study is observational or experimental. Sometimes X presents easily recognizable comparison units i—for example, “nesting colonies of only white ibis” and “nesting colonies of white ibis and other waterbird species” (box 1, question A). By sampling only colonies with adequate minimum distances from one another, for example at least 200 meters or whatever their natural history experience suggests, investigators can achieve reasonable biological independence among their comparison units.

In numerous environmental field studies both observational and experimental, however, X presents researchers with a continuum that lacks recognizable comparison units. On finding that X presents a continuum in which comparison units i do not stand out, researchers often sidestep the issue and assume implicitly that the “sampling units” that they employ for the very different purpose of obtaining Y (see below and Hurlbert 1990) will serve just fine as comparison units for X. This confusion can have serious consequences for data analysis and interpretation, as will be discussed later.
Instead, if X does not present easily recognizable comparison units, researchers must define comparison units arbitrarily (box 2), decide on their approximate spatial or temporal dimensions, and impose a minimum distance or time lapse between one arbitrary comparison unit and another so as to achieve biological or socioecological independence to the extent possible (box 2, supplemental materials files S1–S3).

Box 2. The logic of always defining and identifying comparison units even when they are invisible (modified from Feinsinger and Ventosa Rodríguez 2014).

The design or treatment factor—or each design or treatment factor if there are at least two—consists of different levels, either at least two discrete levels or a potentially infinite number of continuous levels. There must be many comparison units, independent among themselves at the scale of the research question. If the levels are discrete, there must be many independent comparison units for each one, or for each combination of levels if there are at least two design or treatment factors. Those comparison units that share the same discrete level, or the same combination of levels, are replicates. If the levels are continuous, there must simply be many comparison units spanning the full range of possible levels. The term replicate does not apply.

Therefore, comparison units must be clearly defined and identified. If what you are comparing already presents natural comparison units that are clearly recognizable, the only requirement is that they be biologically or socioecologically independent from one another. If what you are comparing does not present natural and clearly recognizable comparison units, you must define and identify arbitrary comparison units in space or time according to three criteria: natural history (in particular, the spatial and temporal points of view of what it is that you are studying), common sense, and biological or socioecological independence among the comparison units at the scale of your research question.

Having explicitly defined the nature of comparison units, whether natural or arbitrary, the researcher next sketches their provisional layout in space or time (supplemental materials files S1–S3). The comparison units of different levels, whether discrete or continuous, must be truly interspersed (Hurlbert 1984) so as to minimize the risk of tainting data analysis and interpretation with confounding factors (Mead 1988). If the mixed-species and monospecific nesting colonies in the Delta del Cauto are interspersed naturally (box 1), investigators can easily select the comparison units to evaluate by sampling randomly—but randomly with eyes open. Blind random sampling could lead to comparison units that are segregated unnecessarily in space (Hurlbert 1984: figure 1, B1–B3), such as the unlikely but possible outcome that all mixed-species colonies selected are on one shore of the bay and all monospecific colonies on the other (see the sketch following this paragraph).
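As a concrete illustration of sampling randomly with eyes open, the following minimal sketch draws a random set of colonies but redraws whenever the selection violates the minimum-distance rule or leaves either level spatially segregated. All coordinates, counts, and the bay "midline" are hypothetical, and the 200-meter threshold is borrowed from the ibis example above.

```python
# Sketch of random selection "with eyes open" (all coordinates, counts,
# and the bay midline are hypothetical). A random draw of colonies is
# rejected if (a) any two selected colonies lie closer than the minimum
# distance needed for biological independence or (b) either level of the
# design factor ends up segregated on one side of the bay.
import random
from itertools import combinations
from math import hypot

MIN_DIST = 200.0    # meters, from natural-history judgment (see the text)
MIDLINE_X = 1500.0  # hypothetical midline of the bay, in meters

# Candidate colonies: (x, y) in meters plus the level of the design factor.
random.seed(3)
candidates = [(random.uniform(0, 3000), random.uniform(0, 3000),
               random.choice(["mixed", "monospecific"]))
              for _ in range(40)]

def acceptable(selection):
    # Independence: no two selected colonies closer than MIN_DIST.
    if any(hypot(a[0] - b[0], a[1] - b[1]) < MIN_DIST
           for a, b in combinations(selection, 2)):
        return False
    # Crude interspersion check: each level must occur on both sides.
    for level in ("mixed", "monospecific"):
        sides = {x > MIDLINE_X for x, y, lvl in selection if lvl == level}
        if len(sides) < 2:
            return False
    return True

selection = random.sample(candidates, 10)
while not acceptable(selection):  # redraw with eyes open
    selection = random.sample(candidates, 10)
print(selection)
```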
Sometimes comparison units are naturally segregated in space or time, however, and interspersion is not possible. What if all mixed-species colonies of waterbirds with white ibis nests (box 1) really did occur on only one shore of the Delta del Cauto and all monospecific colonies only on the opposite shore? With respect to box 1's question A, the colonies of a given level would no longer be replicates but rather pseudoreplicates as defined by Hurlbert (1984: figure 1, B1–B3), the bane of innumerable studies in the environmental and social sciences (Hurlbert 1984, 2009, Davies and Gray 2015, Colegrave and Ruxton 2018, Spurgeon 2019). In such cases, which are more common than environmental scientists might wish, pseudoreplication can be overcome—at a cost—by referring to the first definition of study design: Adjust the question's wording to what data collection permits. In this example, the adjustment to question A (box 1) required by the unavoidable spatial segregation of mixed-species and monospecific colonies would be the following: “In the national wildlife refuge Delta del Cauto, southeastern Cuba, during the breeding seasons of 2013–2018 how does mean number of young fledged per egg laid in white ibis nests (Eudocimus albus) vary between mixed-species nesting colonies on the north shore of the Delta del Cauto and monospecific nesting colonies on the southeastern shore?” With this rewording, monospecific nesting colonies on the southeastern shore and mixed-species colonies on the northern shore are no longer pseudoreplicates. They are true replicates of two levels of the design factor, but these are now the level “mixed-species nesting colonies on the north shore” and the level “monospecific nesting colonies on the southeastern shore.”

The cost? Investigators must recognize explicitly that they are comparing nesting colonies not only of different species makeup but also of two landscapes that differ in vegetation, exposure to prevailing winds, and almost certainly numerous other features likely to affect nesting success, such as the abundance of nest predators. In essence they are comparing apples and oranges. Any publication or report on this study should clearly emphasize this caveat throughout, from the wording of the title and abstract through the final paragraph of the discussion (see below).

Segregated layouts can occur in time as well as space. Consider the following question: “In this evergreen subtropical forest, how do age distribution and overall abundance of rodents of species S captured in Sherman live traps vary between weeks in the rainy season and weeks in the dry season of 2026?” Researchers can easily identify comparison units as different weeks in the rainy season separated from one another by a lapse of several days, and different weeks in the dry season likewise separated from one another, but interspersing the comparison units is clearly not possible. In short, when investigators whose original research questions had assumed that comparison units could be interspersed in time and space find that these are actually segregated, they should adjust the research question to this unwelcome reality and then employ the term segregated layout (Hurlbert 1984) without mentioning pseudoreplication or dependence (Colegrave and Ruxton 2018). The last two terms are now obviated by the adjustments to the question.

Distinguish Y from X: detail Y

All elements of X are now in place: design factor and levels, definition of comparison units, and provisional layout. At this point, but not before, the researcher focuses attention on the response variable Y: On arrival at a given comparison unit i, what should be recorded and by what means, when, where, and how many times before moving on to another comparison unit?
In short, to characterize a given comparison unit i by its particular value of Y, the researcher employs a selected field method once or more than once to obtain a datum or various data, respectively (Hurlbert 1990, Feinsinger 2001, Valiela 2001). When and where? In the evaluation unit or units, whose concept and definition are entirely distinct from those of the comparison unit (Hurlbert 1990). The texts indicated in supplemental materials file S1 discuss in depth the criteria for choosing the response variable and the field methods used to obtain it, always keeping in mind the second definition of research design: the search for the means to obtain the most faithful portrayal of what it is that you truly wish to know. Steps 9–12 in supplemental materials file S3, bolstered by the glossary in supplemental materials file S2, summarize that discussion. In the present article, we discuss only the concept of evaluation units, the importance of distinguishing them from comparison units, and the urgent need to move beyond the imprecise language that has so often confused the analysis and interpretation of field data (Hurlbert 1984, 1990).

At first glance, evaluation units in some field studies may seem to coincide with comparison units. Consider the simple socioecological research question “In the Andean community C, October–November 2026, how do perceptions regarding conservation versus elimination of spectacled bears (Tremarctos ornatus) vary between farmers 25–45 years of age with cornfields, which are often invaded by hungry bears, and neighbors in the same age group whose crops are not attractive to bears?” The comparison unit is clearly a farmer of one or the other group. The evaluation unit is much more precise: it is the farmer at the moment of the interview. Tomorrow the farmer will still be the same person (comparison unit), but his perceptions may have changed overnight. Now consider the simple question “In this lemon tree exposed to prevailing western winds, on this date how does damage by invertebrate herbivores vary between leaves in the exposed western quadrant (225°–315°) and those in the sheltered eastern quadrant (45°–135°)?” The comparison unit is clearly a leaf in one or the other quadrant. The evaluation unit is that leaf at the moment the datum on herbivore damage is recorded. By the next morning the same leaf (comparison unit) could accumulate additional damage.

Often, however, not only the temporal but also the spatial dimensions of evaluation units are clearly much smaller than those of comparison units. In such cases, researchers frequently subsample. They employ multiple evaluation units within a single comparison unit i so as to obtain a representative value of Y for that comparison unit as a whole. In box 1's study the comparison unit, a given waterbird nesting colony, will include a number of white ibis nests. A given white ibis nest within that colony, from egg laying through fledging, is an evaluation unit and provides one datum: the number of young fledged per egg laid in that nest. Will this datum provide a representative value of Y across the entire set of white ibis nests in that colony, which is the comparison unit? Almost certainly not. To obtain that representative value, investigators will need to evaluate reproductive success in many white ibis nests from that colony; that is, they must subsample. Evaluation units are not always as easy to pin down as a farmer at a given moment, a leaf at a given moment, or a white ibis nest from egg laying to fledging.
Researchers comparing large, visibly distinct patches of logged and unlogged tropical forest (X) and quantifying seedling density for a keystone tree species in each (Y) will also need to employ much smaller evaluation units within each forest patch, but these will be arbitrary—for example, seven 100-square-meter parcels (i.e., seven subsamples) or seventeen 40-square-meter parcels (i.e., seventeen subsamples) randomly dispersed within each patch. Other arbitrary evaluation units commonly used in field ecology and conservation biology include, for example, the area or volume sampled by transects (whatever the biota involved), cylindrical soil samples of standard volume for soil invertebrates, Surber or kick net samples for benthic macroinvertebrates, the air space sampled by one mist net during one morning for birds or during one night for bats, the ground area sampled by two intersecting lines of pitfall traps for terrestrial arthropods or herpetofauna, and the forest understory area sampled by lines of camera traps for midsize and large terrestrial mammals.

None of the arbitrary evaluation units just listed, whose purpose is to obtain Y, qualifies as a comparison unit of X. Nevertheless, statements such as “we used sampling grids [of mist nets to capture bats] as the unit of replication” (Peters et al. 2006) and other unintentional confusions between evaluation units and comparison units abound in the literature. Furthermore, confusion between multiple evaluation units (subsamples) and multiple comparison units, the most common form of pseudoreplication (Hurlbert 1984: figure 5), characterizes innumerable environmental field studies. These include most studies on the effects of selective logging on fauna (Ramage et al. 2012) and other large-scale phenomena where obtaining many comparison units is challenging (Barlow et al. 2007, Davies and Gray 2015). This confusion, however, also shows up in many studies on smaller-scale phenomena where obtaining many true comparison units would actually have been feasible (Hurlbert 1984, 2009, Hurlbert and White 1993, Hurlbert and Meikle 2003, Spurgeon 2019). If your research question is “In May 2027 how does height vary between adult males and females of this giraffe population in Kenya?” diligently measuring the height of one female 99 times and that of one male 99 times does not provide you with a sample size of n = 99 for each sex (see the toy simulation below).

The blame for the confusion between comparison units and evaluation units lies with language, not researchers. The ubiquitous term sampling unit and its synonyms (e.g., monitoring unit; Davies and Gray 2015) are the principal culprits. Some authors (e.g., Manly 1992, Boitani and Fuller 2000, Davies and Gray 2015, Colegrave and Ruxton 2018) use “sampling unit” or a synonym in the sense of the comparison or experimental unit i of X. Others (e.g., Hurlbert 1984, Sutherland 2006, Spurgeon 2019) use the term in the sense of the evaluation unit used to obtain Y. Still others use “sampling unit” and its synonyms indiscriminately. For example, in the classic and highly influential book by Krebs (1998) “sampling unit” most often refers to what is clearly an evaluation unit, such as a quadrat or a fixed volume of water. At other points in the text, however, “sampling unit” refers instead to a comparison or experimental unit, and sample size n is defined twice in the text as the number of sampling units chosen from the statistical population of N sampling units.
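A toy simulation (all numbers hypothetical) makes the giraffe example explicit: treating the 99 repeated measurements as replicates produces an impressively small p-value that says nothing about males and females in general, only about these two individuals.

```python
# Toy simulation of the giraffe example (all numbers hypothetical):
# 99 repeated measurements of ONE female and ONE male are evaluation
# units (subsamples), not comparison units.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
one_female = 4.6 + rng.normal(0, 0.02, 99)  # 99 readings of one animal (m)
one_male = 5.2 + rng.normal(0, 0.02, 99)    # 99 readings of one animal (m)

# The pseudoreplicated analysis claims n = 99 per "group" and returns a
# minuscule p-value...
print(stats.ttest_ind(one_female, one_male))

# ...but it only shows that these two individuals differ. The honest data
# set holds one derived datum per comparison unit (animal): n = 1 per sex,
# and no between-sex comparison is possible.
print(one_female.mean(), one_male.mean())
```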
We propose that field researchers in all environmental sciences, and others, discard the term “sampling unit” (in Spanish, unidad de muestreo) and differentiate explicitly between the comparison units for X and the evaluation units for Y, never to confuse the two again. Clearly defining comparison units (even if—or especially if—these must be arbitrary) and deciding provisionally on sample size and spatial or temporal layout before beginning to think about evaluation units should help to lower the frequency of the pseudoreplicated designs highlighted by Hurlbert's (1984) figure 5.

Action: Analyzing and presenting the data

The means of data analysis and presentation must be provisionally planned during the design process (figure 2a, 2b), not during or after the field study. The choice to subsample or not, just discussed, will play a critical role. At the end of field work, if no subsampling has taken place, the single datum obtained from the unique evaluation unit for each comparison unit i already provides the value of Y for that comparison unit. If subsampling occurred, however, the datum obtained from one evaluation unit is a basic datum, not yet Y (supplemental materials file S2). The investigator must calculate a summary statistic, the derived datum (not to be confused with the “derived variable” of Valiela 2001), across all evaluation units of a given comparison unit. The derived datum then represents Y for that comparison unit i as a whole. In short, each comparison unit is characterized by one datum, and only one, for a given response variable Y (Hurlbert 1984, Spurgeon 2019), whether that datum comes from a unique evaluation unit or from many subsamples.

Of course, some research questions specify two or more response variables Y that differ in biological or socioecological significance and that can be calculated from the same basic data. Consider an herbivory study in agroecology where a crop plant is the comparison unit, subsampled by many leaves, each providing a basic datum on the percentage of tissue damaged or removed by invertebrate herbivores. Choices for the derived datum Y to characterize the comparison unit (plant) as a whole include the frequency of herbivory (proportion of leaves damaged), the intensity of herbivory (mean or median of tissue removed or damaged, across leaves), or the variability in intensity (coefficient of variation in tissue removed or damaged, across leaves). Each derived datum represents a different facet of the plant–herbivore relationship (see the sketch at the end of this subsection).

The remainder of data analysis focuses on the heart of the research question: the values of Y among the comparison units of the different levels of the design factor or factors X (Feinsinger and Ventosa Rodríguez 2014; supplemental materials file S1). Whether levels of the design factor are discrete or continuous, the graph—or the graphs, in the herbivory example above—must present the data point for each and every comparison unit i (Weissgerber et al. 2015, Matejka and Fitzmaurice 2017). After all, the research question is “How does Y vary among comparison units i of the different levels of X?” It is not “On average, how does Y vary among the different levels of X?” or even “On average (± the standard deviation), how does Y vary among the different levels of X?” A single data outlier can generate new research directions, as is emphasized in the following subsection.
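As a minimal sketch of this subsampling arithmetic (the crop, the numbers, and the "edge" versus "interior" design factor are all hypothetical), the following fragment computes the three derived data just described, one value per plant, and then plots every comparison unit's datum rather than only the level means (cf. Weissgerber et al. 2015).

```python
# Sketch of the herbivory example (hypothetical data): each plant is a
# comparison unit i of the design factor X (here "edge" vs. "interior" of
# the field); each leaf is an evaluation unit providing one basic datum
# (% of leaf tissue damaged).
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
records = []
for level in ("edge", "interior"):
    for p in range(8):                            # 8 plants per level
        n_leaves = int(rng.integers(20, 40))      # subsamples per plant
        damaged = rng.random(n_leaves) < 0.7      # some leaves escape damage
        pct = np.where(damaged, rng.beta(2, 8, n_leaves) * 100, 0.0)
        records += [{"level": level, "plant": f"{level}-{p}", "pct": x}
                    for x in pct]
df = pd.DataFrame(records)

# One derived datum per comparison unit for each facet of herbivory; the
# leaf-level basic data never enter the comparison directly.
derived = df.groupby(["level", "plant"])["pct"].agg(
    frequency=lambda d: (d > 0).mean(),        # proportion of leaves damaged
    intensity="median",                        # typical damage per leaf
    variability=lambda d: d.std() / d.mean(),  # CV of damage across leaves
).reset_index()

# Sample size per level is the number of plants (8), never the number of
# leaves. Plot every plant's datum, not just the level means.
for i, level in enumerate(("edge", "interior")):
    y = derived.loc[derived["level"] == level, "intensity"]
    x = np.full(len(y), i) + rng.uniform(-0.06, 0.06, len(y))  # jitter
    plt.scatter(x, y)
plt.xticks([0, 1], ["edge", "interior"])
plt.ylabel("Median % leaf tissue damaged per plant (Y)")
plt.show()
```

Each facet (frequency, intensity, variability) would receive its own graph; the point is simply that one derived datum per plant, not hundreds of leaf-level basic data, enters the comparison.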
“Reflection is vital for questioning assumptions and learning from experience.”
—Joern Fischer and colleagues (2012, p. 473)

Reflection on the data and well beyond

Reflection in the IC (figure 2a) should resemble the well written, thorough, and critical discussion section of an exemplary publication, thesis, or report, and vice versa. The investigator prepares for this step by reviewing the graph or graphs of the data, one datum of Y per comparison unit i, with two questions in mind. What, if any, were the notable tendencies among the data (Y) with respect to the design factor X? And what were the most notable exceptions to those tendencies—that is, the data outliers? This contemplation of the data catalyzes an ordered sequence of reflections on all possible causes of said tendencies and outliers, followed by creative but nuanced extrapolations to future times and, in the IC, to much broader spatial scales and different contexts as well.

The diverse proposals of this vital step (figures 1 and 2) are speculative, and their language must reflect this. Why? Because “there is reason to doubt any affirmation that is not the direct result of a finished work” (Rodríguez 1840, translated by the authors). Investigators cannot ethically make definite affirmations beyond the “finished work,” which consists of the particular sample of data taken within the explicitly defined spatial and temporal boundaries of the study (Feinsinger 2012, 2014). Instead, reflections must employ wording such as we propose that, most likely, possibly, probably, almost certainly, could, might, could have, and might have. Admitting to uncertainty in no way diminishes creativity. On the contrary, it encourages creativity and more: the arrow that connects the reflection and question steps in figure 2a indicates that nearly every reflection can and should generate ideas for new research. We discuss that arrow here and, later, its analog in figure 2b.

The conceptual construct and place-based curiosity behind the original research question had proposed that A could be the causal factor or factors responsible for the variation in B that was noted by the investigator. The response variable Y in the field study proper represents a precisely defined variant of B. The study's design factor X represents A if and only if the study was experimental. Now that the results are in, are the tendencies of the values of Y among comparison units with different levels of X those to be expected if A were indeed the causal factor? If so, and if the data resulted from a truly controlled, randomized experiment with A as the treatment factor X, the investigator can legitimately affirm that yes, A caused the tendencies among the data per se (Shadish et al. 2002). If the study was observational, the investigator can only propose that A might have been the causal factor. Uncertainty always exists. Therefore, the researcher must also ponder what factors other than A, phenomena not touched on in the conceptual construct and place-based curiosity, could have caused those same tendencies (cf. Nuzzo 2015).

And what if the data's tendencies are not those to be expected if A were operating? The researcher must ask, why not? In a controlled, randomized experiment, a strong experimental design should have minimized the intervention of factors other than A, and the researcher's conclusion might simply be that A scarcely affects B in this particular system, place, and time. In an observational study, however, reflection must consider all other phenomena (possible causal factors) that might have overwhelmed or contradicted the effect, if any, of A on the results.
Creative and wide-ranging reflection on possible causes for results that lack strong tendencies, and especially for results with strong tendencies contrary to those expected if A were operating, should spark novel ideas that lead to entirely new research directions. Results that fail to comply with conventional wisdom are not by any means failures (cf. Catalano et al. 2019). Reporting such results is just as important as, or more important than, reporting results that meet conscious or unconscious expectations, whether the field study is curiosity-driven, theory-driven (Loehle 1987, Nuzzo 2015), or solution-driven (Catalano et al. 2019).

What about individual data that buck the general tendency? Data sets from field studies often include conspicuous outliers. In controlled, randomized experiments outliers are simply outliers, and they rarely if ever catalyze reflection on their particular causes. In observational field studies, however, researchers must ask what might have been special about the particular comparison unit i having an exceptional value for Y. For example, if reproductive success per ibis nest in one mixed-species nesting colony (box 1, research question A) far exceeded that of other mixed-species colonies, rather than ignoring this outlier—or, worse, erasing the data point (see Farji-Brener 2009)—researchers should reflect on the possible causes of this unique result. Novel ideas and new research directions spring up here as well.

The final reflection on why the study's data might have come out as they did reviews how those data were obtained. Whether their studies were observational or experimental, researchers—and not just field researchers—should always consider the possibility that hidden flaws in the research design could be at least partly responsible not only for outliers but also for the general tendencies—or the lack thereof—among the bulk of the data. Data whose tendencies resemble those to be expected if A were truly the causal factor tend to lull us into complacency, so this conscious, self-critical reflection is especially important in such cases (Nuzzo 2015). A careful and detailed research design that includes a pilot study (one of the 17 steps for the IC and ApIC; see supplemental materials file S3) reduces but does not eliminate the possibility that design artifacts influenced the data—for example, the artifacts unintentionally introduced by many commonly used field methods or by many conventional evaluation units for obtaining Y (supplemental materials files S1 and S3). Reflecting on that possibility encourages researchers to propose improvements for future studies.

Now the IC's reflections enter a second phase. They move beyond the data from the spatiotemporally circumscribed study proper, using those data and the reflections focused on them as vehicles for considering what might happen in contexts and at scales different from those of the study and integrating the creative proposals that result with the literature as appropriate. The language used must be chosen at least as carefully as in the first reflection phase so as to avoid making unfounded affirmations well beyond the explicit spatiotemporal limits of the study proper. That is, these reflections must clearly acknowledge the “flux of nature” (Pickett et al. 2007): the contingency, variability, and historicity of most ecological and social phenomena (Loehle 1987, Lawton 1999, Gilbert 2011).
Now the IC's reflections enter a second phase. They move beyond the data from the spatiotemporally circumscribed study proper, using those data and the reflections focused on them as vehicles for considering what might happen in contexts and at scales different from those of the study, and integrating the creative proposals that result with the literature as appropriate. The language must be chosen at least as carefully as in the first reflection phase, so as to avoid making unfounded affirmations well beyond the explicit spatiotemporal limits of the study proper. That is, these reflections must clearly acknowledge the "flux of nature" (Pickett et al. 2007): the contingency, variability, and historicity of most ecological and social phenomena (Loehle 1987, Lawton 1999, Gilbert 2011).

In ecology and socioecology, this second phase of reflection might touch on, for example, the same local landscape in the future; the same landscape but at coarser spatial scales; other landscapes entirely (e.g., those with very different species compositions), vegetation formations, forms and intensities of disturbance, latitudes, climates, ethnic groups, or stakeholders of all kinds; and other species or societies. These wide-ranging reflections generated by an empirical place-based field study (see Monjeau et al. 2015) may not only generate interest among field investigators from many parts of the world but also inspire them to undertake novel investigations in their own landscapes.

Theory-driven science and the inquiry cycle

The IC can usefully be applied to theory-driven as well as curiosity-driven field science. If theory, not a prior field observation, has catalyzed the research, the investigator need only invert the first two elements of formulating the research question (figure 2a). The conceptual construct (theory) leads off, followed by the observation that "system Z meets the criteria for evaluating this theory." The place-based curiosity is whether or not the cause-and-effect relationship proposed by the theory indeed operates in system Z. The guidelines discussed above for the research question proper, study design, data analysis, and reflection do not change, with one exception: both the first phase of reflection (data) and the second (extrapolation) are focused, although not exclusively, on the theory that drove the research (cf. Pickett et al. 2007).

Solution-driven science and the applied inquiry cycle

"Repeatedly, managers have noted that while much of conservation science is useful for alerting us to potential and actual problems, often we lack results precise enough to help differentiate which actions are necessary and most likely to succeed."

—Martha Groom and colleagues (2006, p. 667)

The basic data resulting from theory- and curiosity-driven empirical field studies, including problem-driven applied research as defined above, may turn out to be useful to place-based management, albeit indirectly (see supplemental materials file S4). In contrast, the thousands of works cited by Sutherland and colleagues (2019), and many more, intentionally seek data that will provide local solutions to particular local concerns through place-based field studies that evaluate the effects and effectiveness of realistic alternatives for management guidelines. The ApIC (figure 2b) provides investigators with the tools to enhance the scientific rigor of any place-based, solution-driven field study in the environmental sciences and, therefore, the reliability and local applicability of its results. Box 3 presents the straightforward research question that initiated one such investigation and exemplifies the goal common to all studies using the ApIC: a local solution to a local concern (Feinsinger and Ventosa Rodríguez 2014; supplemental materials files S1 and S4). The first two steps of the ApIC (figure 2b) are identical to those of the IC (figure 2a), except that "place-based concern" replaces "place-based curiosity" when formulating the research question. In the action step, adhering to all guidelines for research design is, if anything, more crucial to studies following the ApIC than to those following the IC.
Flawed research designs or analyses in solution-driven field studies can have serious real-world consequences if those studies influence management decisions (Boitani and Fuller 2000, Ramage et al. 2012). Reflection in the ApIC begins with the same exhaustive, critical review of the data obtained and their possible causes as in the IC but then diverges (figure 2b). In place of the IC's wide-ranging speculations on what might happen in other landscapes and other contexts, the second phase of the ApIC's reflection keeps the spatial focus primarily on the place of the field study. The goal of the ApIC is to resolve the local concern, not to speculate on possible management guidelines for other places. Even the most carefully couched reflections on what might result from applying one or another of the management guidelines evaluated here to another landscape there increase the risk that someone will go ahead and apply the guideline there without first investing the time and effort there in a reevaluation that follows the complete ApIC. Because each landscape is unique, borrowing management guidelines evaluated elsewhere may lead to unfortunate consequences (Feinsinger 2001, Sutherland et al. 2019). Instead, the original investigators completing their ApIC reflect on time: the unknown future of their site. How might the relative impact and effectiveness of the different guidelines just examined vary with future changes in, for example, climate, land use, markets for natural or agricultural products, kinds and intensities of disturbance, predation pressure, hunting pressure, pests, parasites, diseases, and the array and abundance of alien species?

Reflection concludes by deciding which alternative management guideline of those examined holds the most promise. Based principally on the field study's results, this decision is also sensitive to those possible future changes in local conditions and to the chance that at some point a different guideline could become the best choice. The ApIC's fourth step (figure 2b) consists of applying the guideline just chosen and monitoring the results from that moment on. The arrow connecting the application and question steps of figure 2b signifies that any notable change in the effect and effectiveness of the guideline should generate new research questions and ApICs, whose results might lead to modifications of the current guideline or to entirely different guidelines. This adaptive management process (Holling 1978) responds most quickly to sudden changes in local conditions when monitoring (figure 2b) takes the form of a new, rigorously designed ApIC addressing the generic question "How does Y vary among comparison units where the guideline selected originally was applied, units where alternative guidelines were applied, and 'control' units where none was applied?" (Feinsinger and Ventosa Rodríguez 2014; supplemental materials files S1 and S4).
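The sketch below renders that generic monitoring question in code. The patch identifiers, the values of Y, and the choice of a Kruskal-Wallis test are hypothetical illustrations of ours, not prescriptions of the ApIC.

```python
# A hypothetical sketch of the generic monitoring comparison: how does Y
# vary among comparison units under the originally selected guideline,
# under alternative guidelines, and under no guideline ("control")?
import pandas as pd
from scipy import stats

monitoring = pd.DataFrame({
    "patch": range(1, 13),
    "guideline": ["selected"] * 4 + ["alternative"] * 4 + ["control"] * 4,
    "Y": [6.1, 5.8, 6.4, 5.5, 4.9, 5.2, 4.4, 5.0, 3.1, 3.8, 2.9, 3.5],
})

# Group summaries (and the graphs they accompany) come first; any formal
# test is secondary to reflection on the data themselves.
print(monitoring.groupby("guideline")["Y"].agg(["mean", "std", "count"]))

# One nonparametric option, an assumption of ours rather than a rule:
# a Kruskal-Wallis test across the three groups of comparison units.
groups = [g["Y"].to_numpy() for _, g in monitoring.groupby("guideline")]
print(stats.kruskal(*groups))
```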
The Matsés park guards were aware of the risks of using the results of a 3-month study (box 3) to implement a management guideline and then simply leaving it in place for the long term. They planned to design a second ApIC to begin at the moment of applying the guideline resulting from their first, 3-month ApIC, whether that guideline was to expand the scope of controlled harvesting of irapay fronds or to cease controlling it. For the long-term monitoring study, the guideline selected would be applied to the majority of the irapay patches, but other well-replicated treatments would simultaneously be applied to other patches thoroughly interspersed over the landscape involved.
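One simple way to achieve such interspersion is to randomize treatments within spatial blocks of neighboring patches, as in the sketch below. The patch labels, block size, and treatment names are invented, and this randomized-block approach is one option we offer for illustration, not the park guards' actual protocol.

```python
# A hypothetical sketch of interspersing well-replicated treatments:
# group neighboring patches into blocks and randomize treatments within
# each block, so that no treatment ends up spatially segregated.
import random

random.seed(42)  # reproducible assignment

treatments = ["selected_guideline", "alternative_guideline", "control"]
patches = [f"patch_{i:02d}" for i in range(1, 13)]  # ordered by location

assignment = {}
# Each consecutive trio of patches forms a spatial block; every block
# receives all three treatments in random order.
for start in range(0, len(patches), len(treatments)):
    block = patches[start:start + len(treatments)]
    shuffled = random.sample(treatments, k=len(treatments))
    assignment.update(zip(block, shuffled))

for patch, treatment in assignment.items():
    print(patch, "->", treatment)
```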
The park guards proposing and answering the research question in box 3 came not only from the same Matsés communities but also from the same families as the harvesters of irapay fronds. The harvesters would clearly benefit from whichever management guideline best ensured the long-term sustainability of that resource. In our experience, the ApIC often functions best when the persons conceiving and accomplishing solution-driven research are, or are companions of, those persons who will not only implement the management guideline selected but will also enforce it or benefit directly from it (supplemental materials file S4; see Feinsinger et al. 2010). This holds true whether the local concern involves natural resource management (e.g., the sustainable harvest of irapay fronds or of fish, the local concern of other Matsés park guards) or addresses themes in other environmental sciences, such as local socioecological or agroecological concerns.

In contrast to local solutions for local problems that ideally involve local researchers, multidisciplinary conservation or sustainable-use projects usually involve much larger scales and investigators coming from outside. As the quotation leading off this subsection points out, much of the applied research they perform is problem-driven, which actually follows the IC: it evaluates potential and actual effects of anthropogenic perturbation without the follow-up of field testing, implementing, and monitoring practical solutions as in the ApIC (supplemental materials file S4). In the extensive landscapes these projects address, the biological, ecological, and socioecological features (the last including land use and stakeholder groups) vary notably from one point to another. The spatial scale alone poses difficulties for achieving a rigorous, replicated study design for any solution-driven research (Davies and Gray 2015). Furthermore, these projects' efforts to facilitate decision-making on management guidelines encounter many challenges and complexities, in part because of the widely differing viewpoints among local actors and the resulting difficulties of productive interaction and communication among them, and in part because of powerful economic and political drivers that are incompatible with management for conservation or sustainability (Cash et al. 2006, Ostrom 2009, Arlettaz et al. 2010, Lang et al. 2012, Domptail et al. 2013, Catalano et al. 2019, Putz 2020, Zeng et al. 2020). That scale and its complexities (García 2011), however, are well outside the scope of the present article, just as the scale addressed by Sutherland and colleagues (2019) is well within it. As we stated at the outset, our goal has been to introduce researchers in the environmental sciences to the IC and ApIC as tools for place-based, local-scale field investigations, whether the intent is to know more or to manage better, respectively. As boxes 1 and 3 demonstrate, use of these tools is not by any means restricted to professional researchers.

Providing different coteries of local stakeholders with the same tools (the philosophy and complete framework of the ApIC) and then encouraging autonomous research on local-scale concerns has great potential for energizing and empowering each of those coteries while enhancing understanding and communication between the different groups. We propose that numerous local-scale investigations carried out by different stakeholders using the ApIC (Feinsinger et al. 2010) could contribute substantially to the success and sustainability of landscape-scale multidisciplinary efforts, helping to bridge the frustrating gaps between research, policy, and local practice (see Putz 2020). Exploring this possibility further, however, is also well outside the scope of this essay.

Conclusions

Most environmental scientists would probably agree with Noss (1996), Lindenmayer and Likens (2011), and Ríos-Saldaña and colleagues (2018) that the time for empirical, place-based field studies has not passed by any means, nor will it ever pass, even in the well-studied landscapes of first-world nations, let alone in the complex and diverse third-world landscapes where "knowledge gaps" (Elliott et al. 2016) are orders of magnitude larger and more common. If the sequence in figure 1 (research question, research design, data collection, data analysis, and critical review of the data followed by creative but careful extrapolation from them) adequately summarizes the crucial phases of place-based empirical field investigations, then we hope to have shown that the IC and ApIC provide coherent and detailed research frameworks for those investigations.

Language matters

We have also argued that environmental scientists and others should choose their words carefully in the various steps of figure 1. The fuzzy definition and inconsistent use of the term sampling unit appear to have led many researchers to carry out studies and publish papers with unintentional but serious flaws in research design, data analysis, and interpretation. In addition, inconsistent use of the terms experiment and experimental (experimental design, experimental treatment, experimental unit) may also have misled many environmental scientists and others. We suggest that field researchers restrict those terms, as we have done in the present article, to unquestionably controlled, randomized experiments as defined by Mead (1988). "Mensurative experiments" (Hurlbert 1984) and "natural experiments" (Shadish et al. 2002, Davies and Gray 2015), for example, are not experiments; they are observational studies. Avoiding the words experiment and experimental in nonexperimental field studies should reduce the risk that researchers will unconsciously assume that what they compare (X) is unquestionably the causal factor behind the observed variation in what they measure (Y). Even the term pseudoreplication should be used with more caution: comparison units that cannot be interspersed cease being pseudoreplicates once the researcher phrases the research question to recognize explicitly the spatial or temporal axes that segregate the units. Finally, the verbs employed in figure 1's fifth step (reflection in the IC and ApIC) should be chosen with great care so that readers (and authors) will not mistake speculations for affirmations.
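The confusion over sampling units arises most often when subsamples (evaluation units) within a comparison unit are analyzed as if each were an independent datum. A minimal sketch, with invented patch names and counts, of the aggregation that avoids this:

```python
# A hypothetical sketch of the distinction between evaluation units
# (subsamples) and comparison units: quadrat-level measurements within
# a patch must be reduced to one value of Y per patch before any
# analysis that treats observations as independent.
import pandas as pd

quadrats = pd.DataFrame({
    "patch": ["A", "A", "A", "B", "B", "B", "C", "C", "C", "D", "D", "D"],
    "harvest": ["controlled"] * 6 + ["open"] * 6,
    "count": [12, 9, 14, 11, 13, 10, 6, 8, 5, 7, 9, 6],  # per quadrat
})

# Wrong: testing the 12 quadrats as if they were 12 independent units.
# Right: one datum of Y per comparison unit (the patch).
per_patch = quadrats.groupby(["patch", "harvest"], as_index=False)["count"].mean()
print(per_patch)  # four comparison units, two per level of the design factor
```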
The IC, the ApIC, and data-intensive studies

Ríos-Saldaña and colleagues (2018) emphasize that local-scale investigations carried out by field scientists provide the raw material for the increasingly common broad-scale, data-intensive analyses carried out by others (Carmel et al. 2013, Elliott et al. 2016, Connolly et al. 2017, Gurevitch et al. 2018, McCallen et al. 2019). Minutely and critically examining the research design and data analysis of each and every field study incorporated into a synthetic analysis would be a difficult task for its authors. Still, we cannot help but wonder how many of the field studies already incorporated into data-intensive analyses have unintentionally involved segregated comparison units, have confused evaluation units with comparison units, or have applied statistical analyses to subsamples as if they were independent comparison units. Can the environmental sciences rely on the conclusions of data-intensive studies? Only if the authors of those studies can vouch for the quality of the research designs and analyses of the empirical investigations they incorporate. In short, at least some key elements of the IC and ApIC research frameworks might usefully be employed by colleagues engaged in data-intensive analyses that incorporate empirical studies that others have designed, carried out, analyzed, and published.

The IC and ApIC in other contexts

The present article stresses the utility of the IC and ApIC as frameworks for field research in the approaches where the authors have the most experience. The two frameworks can be applied with little or no modification, however, to socioecology and to the social sciences in general. Shadish and colleagues (2002), for example, thoroughly discuss designs of X (what to compare) in social science studies, whereas Colton and Covert (2007) and Saris and Gallhofer (2007) thoroughly discuss Y (what to measure and how to measure it) when people, not jaguars or forest patches, are the comparison units. For those conservation biologists and other environmental scientists who lack formal training in the social sciences but wish to survey perceptions, opinions, or knowledge among local people, we suggest first reading Colton and Covert (2007) from the preface onward and then Martin (2020). On the other hand, throughout Latin America, local researchers without formal academic training, such as park guards, peasant farmers, and K–12 schoolteachers and their students, successfully apply the IC and ApIC to place-based curiosity and concerns regarding all aspects of the environmental sciences (Feinsinger et al. 2010).

Why research questions of the IC and ApIC are nondirectional

"Scientists are interested in empirical fact regardless of its relationship to their preconceptions."

—Kimmel (1957, p. 352)

If we field investigators in the environmental sciences really want to know more about what goes on in our study systems, either for curiosity's sake or to seek solutions, we should be open to natural history's surprises on one hand and to scientific objectivity on the other. We should not consciously or unconsciously expect research results to conform to the current paradigm. As many have pointed out (e.g., Quinn and Dunham 1983, Loehle 1987, Glass 2010, 2014), environmental scientists and others have been misled by blind adherence to one interpretation of the Popperian hypothetico-deductive method (Popper 1959), in which directional, global-scale scientific hypotheses generate directional predictions to be tested at the local scale.
In principle, results that fail to confirm expectations serve only to falsify that prediction and, in turn, the scientific hypothesis; they play no role in generating new insights or new ideas. If the reproductive success of white ibis in monospecific nesting colonies greatly exceeds reproductive success in mixed-species colonies (box 1), or the density of vegetative propagules and seedlings is conspicuously lower in irapay patches with controlled harvesting of fronds than in patches without control (box 3), do these results indicate failure because they did not support preconceptions? We submit that, instead, they would be highly interesting and important, likely to generate entirely new research directions (box 1) or to steer management guidelines in a counterintuitive direction (box 3). Lombardi and Hurlbert (2009) point out that the use of one-tailed statistical tests has only exacerbated the inattention traditionally paid to results that run contrary to preconceptions. Fortunately, the use of one-tailed statistical tests is in decline (Lombardi and Hurlbert 2009), as is unquestioning obedience to the Popperian hypothetico-deductive method (e.g., Pickett et al. 2007, Elliott et al. 2016, Connolly et al. 2017). Still, the nonstatistical use of the word hypothesis is far from dead in the environmental sciences, where, to make matters worse, researchers in different specialties often define the word and the rules for its use quite differently (Donovan et al. 2015). Hypothetico-deductive approaches of any sort rarely if ever play a role in solution-driven science (Groom et al. 2006), where study results matter to decisions on local management guidelines whether or not the data conform to the expectations of scientific hypotheses or research scientists. Whether engaged in theory-, curiosity-, or solution-driven science, field environmental scientists will learn at least as much from unexpected answers to nondirectional research questions as from yes-or-no answers to directional predictions (cf. Glass 2010).
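The sketch below illustrates why nondirectional questions pair naturally with two-sided tests; the densities are invented to echo box 3, and the choice of a Mann-Whitney test is an assumption of ours, not a requirement of either framework.

```python
# A hypothetical sketch of the cost of directionality: a one-sided test
# in the "expected" direction discards exactly the surprising result
# that matters most. Densities per patch are invented, echoing box 3.
from scipy import stats

controlled = [3.1, 2.8, 3.4, 2.9, 3.3]    # density under controlled harvesting
open_harvest = [4.2, 4.6, 3.9, 4.4, 4.1]  # density under open harvesting

# Two-sided (nondirectional): sensitive to a difference in either direction.
print(stats.mannwhitneyu(controlled, open_harvest, alternative="two-sided"))

# One-sided in the direction preconception dictates ("controlled > open"):
# blind to the counterintuitive but management-relevant reverse pattern.
print(stats.mannwhitneyu(controlled, open_harvest, alternative="greater"))
```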
Box 3. The solution-driven research question proposed by local park guards in the Matsés National Reserve in Amazonian Perú (adapted from Feinsinger and Ventosa Rodríguez 2014). The results would contribute to the decision made by the Matsés communities on whether to continue investing their park guards' time and effort in promoting and overseeing controlled harvesting of the fronds of irapay (the understory palm Lepidocaryum gracile) or to allow open, unsupervised harvesting. Once that decision was made, a second, long-term study would continue evaluating the effects of the guideline chosen and those of alternative choices (see the text).

Observation: The Matsés National Reserve and its buffer zone support extensive patches of irapay, which propagates by rhizomes and by seeds. Members of Matsés communities harvest irapay fronds to thatch their homes and to sell in village markets. Reserve personnel manage some irapay patches for controlled harvesting but not others.

Conceptual construct: In general, indiscriminate harvesting of any palm's fronds negatively affects the health of individuals and, in consequence, fruit production (hence seedlings) and the production of vegetative propagules. Strictly enforced, controlled harvesting in protected areas can serve to lessen these negative effects.

Place-based concern: Does controlled harvesting of fronds in the Matsés National Reserve significantly improve the health of irapay stands with respect to that of irapay stands harvested indiscriminately?

Research question: Between May and July 2013, in the periphery of the Matsés National Reserve, how does the density of vegetative propagules and seedlings between 40 and 100 centimeters in height vary between irapay patches that have a long history of supervised, controlled harvesting and nearby patches with open, unsupervised harvesting?

Finally, directional hypotheses and predictions do not tend to cultivate humility (Loehle 1987, Farji-Brener 2009, Glass 2010, Nuzzo 2015). We agree with these authors that humility is a desirable and productive trait for all researchers, in and out of the environmental sciences. Schwartz (2008) put it best: "The crucial lesson was that the scope of things I didn't know wasn't merely vast; it was, for all practical purposes, infinite. That realization, instead of being discouraging, was liberating. If our ignorance is infinite, the only possible course of action is to muddle through as best we can." The IC and ApIC provide environmental scientists truly interested in empirical fact with the means to muddle through the best they can.

Acknowledgments

Colleagues too numerous to name, a significant proportion of the thousands of participants in our courses and workshops, have contributed to the birth, development, and maturation of the IC and ApIC. The Wildlife Conservation Society supported many of the courses taught by the first two authors and sponsored the publication of texts in Spanish. We thank Christopher B. Anderson, Robert K. Colwell, Harry W. Greene, Yrjö Haila, Marc Mangel, Francis E. Putz, Mary V. Price, Thomas G. Whitham, and two anonymous reviewers for their insightful comments on earlier drafts and Scott Collins for his encouragement.

Peter Feinsinger (peter.feinsinger@nau.edu) and Iralys Ventosa Rodríguez are affiliated with the Wildlife Conservation Society, in Bronx, New York. PF is also affiliated with the Department of Biological Sciences at Northern Arizona University, in Flagstaff, Arizona. Andrea E. Izquierdo is a researcher of CONICET affiliated with the Instituto de Ecología Regional of the Universidad Nacional de Tucumán, Argentina. Silvana Buzato is affiliated with the Departamento de Ecologia, Instituto de Biociências, at the Universidade de São Paulo, in São Paulo, Brazil. All of the authors are affiliated with the Centro de Estudios y Aplicación del Ciclo de Indagación, Facultad de Ciencias Naturales e Instituto Miguel Lillo, at the Universidad Nacional de Tucumán, in Tucumán, Argentina.

References cited

Altieri M. 1995. Agroecology: The Science of Sustainable Agriculture, 2nd ed. Westview Press.
Anderson CB, Pizarro JC, Estévez R, Sapoznikow A, Pauchard A, Barbosa O, Moreira-Muñoz A, Valenzuela AEJ. 2015. ¿Estamos avanzando hacia una socio-ecología? Reflexiones sobre la integración de las dimensiones "humanas" en la ecología en el sur de América. Ecología Austral 25: 263–272.
Arlettaz R, Schaub M, Fournier J, Reichlin TS, Sierro A, Watson JEM, Braunisch V. 2010. From publications to public actions: When conservation biologists bridge the gap between research and implementation. BioScience 60: 835–842.
Barlow J, Mestre LAM, Gardner T, Peres CA. 2007. The value of primary, secondary and plantation forests for Amazonian birds. Biological Conservation 136: 212–231.
Billick I, Price MV, eds. 2010. The Ecology of Place: Contributions of Place-Based Research to Ecological Understanding. University of Chicago Press.
Boitani L, Fuller T, eds. 2000. Research Techniques in Animal Ecology: Controversies and Consequences. Columbia University Press.
Carmel Y, Kenet R, Bar-Massada A, Blank L, Liberzon J, Nezer O, Sapir G, Federman R. 2013. Trends in ecological research during the last three decades: A systematic review. PLOS ONE 8: e59813.
Cash DW, Adger WN, Berkes F, Garden P, Lebel L, Olsson P, Pritchard L, Young O. 2006. Scale and cross-scale dynamics: Governance and information in a multilevel world. Ecology and Society 11: 8. www.ecologyandsociety.org/vol11/iss2/art8.
Catalano AS, Lyons-White J, Mills MM, Knight AT. 2019. Learning from published project failures in conservation. Biological Conservation 238: 108223.
Colegrave N, Ruxton GD. 2018. Using biological insight and pragmatism when thinking about pseudoreplication. Trends in Ecology and Evolution 33: 28–35.
Colton D, Covert RW. 2007. Designing and Constructing Instruments for Social Research and Evaluation. Wiley.
Connell JH. 1978. Diversity in tropical rain forests and coral reefs. Science 199: 1302–1310.
Connolly J, Keith SA, Colwell RK, Rahbek C. 2017. Process, mechanism, and modeling in macroecology. Trends in Ecology and Evolution 32: 835–844.
Davies GM, Gray A. 2015. Don't let spurious accusations of pseudoreplication limit our ability to learn from natural experiments (and other messy kinds of ecological monitoring). Ecology and Evolution 5: 5295–5304.
Domptail S, Easdale MH, Yuerlita. 2013. Managing socioecological systems to achieve sustainability: A study of resilience and robustness. Environmental Policy and Governance 23: 30–45.
Donovan SM, O'Rourke M, Looney C. 2015. Your hypothesis or mine? Terminological and conceptual variation across disciplines. Sage Open 5: 2158244015586237. https://doi.org/10.1177/2158244015586237.
Eigenbrode SD, et al. 2007. Employing philosophical dialogue in collaborative science. BioScience 57: 55–64.
Elliott KC, Cheruvelil KS, Montgomery GM, Soranno PA. 2016. Conceptions of good science in our data-rich world. BioScience 66: 880–889.
Farji-Brener AG. 2009. ¿Ecólogos o ególogos? Cuando las ideas someten a los datos. Ecología Austral 19: 167–172.
Feinsinger P. 2001. Designing Field Studies for Biodiversity Conservation. Island Press.
Feinsinger P. 2004. El Diseño de Estudios de Campo para la Conservación de la Biodiversidad. FAN-Bolivia.
Feinsinger P. 2012. Lo que es, lo que podría ser y el análisis e interpretación de los datos en un estudio de campo. Ecología en Bolivia 47: 1–6.
Feinsinger P. 2013. Metodologías de investigación en ecología aplicada y básica: ¿Cuál estoy siguiendo, y por qué? Revista Chilena de Historia Natural 86: 385–402.
Feinsinger P. 2014. Metodologías de investigación en ecología aplicada y básica en los "sitios de estudios socio-ecológicos a largo plazo" y mucho más allá: El Ciclo de Indagación. Bosque 35: 449–457.
Feinsinger P, Ventosa Rodríguez I. 2014. Suplemento Decenal al Texto "El Diseño de Estudios de Campo para la Conservación de la Biodiversidad." FAN-Bolivia.
Feinsinger P, Margutti L, Oviedo RD. 1997. School yards and nature trails: Ecology education outside the university. Trends in Ecology and Evolution 12: 115–120.
Feinsinger P, Pozzi C, Trucco C, Cuellar RL, Laina A, Cañizares M, Noss A. 2010. Investigación, conservación y los espacios protegidos de América Latina: Una historia incompleta. Revista Ecosistemas 19: 97–111.
Fischer J, Hanspach J, Hartel T. 2011. Continental-scale ecology versus landscape-scale case studies. Frontiers in Ecology and the Environment 9: 430.
Fischer J, Ritchie EG, Hanspach J. 2012. Academia's obsession with quantity. Trends in Ecology and Evolution 27: 473–474.
Ford ED. 2000. Scientific Method for Ecological Research. Cambridge University Press.
García R. 2011. Interdisciplinareidad y sistemas complejos. Revista Latinoamericana de Metodología de las Ciencias Sociales 1: 66–101.
Gilbert F. 2011. Book review: The Ecology of Place. Frontiers of Biogeography 3: 26–28.
Glass DJ. 2010. A critique of the hypothesis, and a defense of the question, as a framework for experimentation. Clinical Chemistry 56: 1080–1085.
Glass DJ. 2014. NIH grants: Focus on questions, not hypotheses. Nature 507: 306.
Graham MH, Dayton PK. 2002. On the evolution of ecological ideas: Paradigms and scientific progress. Ecology 83: 1481–1489.
Grant PR, Grant BR. 2010. Ecological insights into the causes of an adaptive radiation from long-term field studies of Darwin's finches. Pages 109–133 in Billick I, Price MV, eds. The Ecology of Place: Contributions of Place-Based Research to Ecological Understanding. University of Chicago Press.
Groom MJ, Meffe GK, Carroll CR. 2006. Principles of Conservation Biology, 3rd ed. Sinauer.
Guerrero AM, McAllister RRJ, Corcoran J, Wilson KA. 2013. Scale mismatches, conservation planning, and the value of social-network analyses. Conservation Biology 27: 35–44.
Gurevitch J, Koricheva J, Nakagawa S, Stewart G. 2018. Meta-analysis and the science of research synthesis. Nature 555: 175–182.
Holling CS. 1978. Adaptive Environmental Assessment and Management. Wiley.
Hurlbert SH. 1984. Pseudoreplication and the design of ecological field experiments. Ecological Monographs 54: 187–211.
Hurlbert SH. 1990. Pastor binocularis: Now we have no excuse. Ecology 71: 1222–1228.
Hurlbert SH. 2009. The ancient black art and transdisciplinary extent of pseudoreplication. Journal of Comparative Psychology 123: 434–443.
Hurlbert SH, Meikle WG. 2003. Pseudoreplication, fungi, and locusts. Journal of Economic Entomology 96: 533–535.
Hurlbert SH, White MD. 1993. Experiments with freshwater invertebrate zooplanktivores: Quality of statistical analyses. Bulletin of Marine Science 53: 128–153.
Johnson DH. 1999. The insignificance of statistical significance testing. Journal of Wildlife Management 63: 763–772.
Kimmel HD. 1957. Three criteria for the use of one-tailed tests. Psychological Bulletin 54: 351–353.
Krebs CJ. 1998. Ecological Methodology, 2nd ed. Benjamin Cummings.
Krebs CJ. 2000. Hypothesis testing in ecology. Pages 1–14 in Boitani L, Fuller T, eds. Research Techniques in Animal Ecology: Controversies and Consequences. Columbia University Press.
Lang DJ, Wiek A, Bergmann J, Stauffacher M, Martens P, Moll P, Swilling M, Thomas CJ. 2012. Transdisciplinary research in sustainability science: Practice, principles, and challenges. Sustainability Science 7 (supplement 1): 25–43.
Lawton JH. 1996. Corncrake pie and prediction in ecology. Oikos 76: 3–4.
Lawton JH. 1999. Are there general laws in ecology? Oikos 84: 177–192.
Lindenmayer DB, Likens GE. 2011. Losing the culture of ecology. Bulletin of the Ecological Society of America 92: 245–246.
Loehle C. 1987. Hypothesis testing in ecology: Psychological aspects and the importance of theory. Quarterly Review of Biology 62: 397–409.
Lombardi CM, Hurlbert SH. 2009. Misprescription and misuse of one-tailed tests. Austral Ecology 34: 447–468.
Manly BFJ. 1992. The Design and Analysis of Research Studies. Cambridge University Press.
Martin V. 2020. Four common problems in environmental social research undertaken by natural scientists. BioScience 70: 13–16.
Martínez ML, Manson RH, Balvanera P, Dirzo R, Soberón J, García-Barrios L, Martínez-Ramos M, Moreno-Casasola P, Rosenzweig L, Sarukhan J. 2006. The evolution of ecology in Mexico: Facing challenges and preparing for the future. Frontiers in Ecology and the Environment 4: 259–267.
Matejka J, Fitzmaurice G. 2017. Same stats, different graphs: Generating data sets with varied appearance and identical statistics through simulated annealing. Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems: 1290–1294.
McCallen E, Knott J, Núñez-Mir G, Taylor G, Jo I, Fei S. 2019. Trends in ecology: Shifts in ecological research themes over the past four decades. Frontiers in Ecology and the Environment 17: 109–116.
McDade LA, Bawa KS, Hespenheide HA, Hartshorn GS. 1994. La Selva: Ecology and Natural History of a Neotropical Rain Forest. University of Chicago Press.
Mead R. 1988. The Design of Experiments. Cambridge University Press.
Molino J-F, Sabatier D. 2001. Tree diversity in tropical rain forests: A validation of the intermediate disturbance hypothesis. Science 294: 1702–1704.
Monjeau A, Rau JR, Anderson CB. 2015. El síndrome del factor de impacto y la ética ambiental en América Latina: ¿Ha llegado el tiempo de la insurrección? Cuadernos de Ética 30: 1–23.
Morrison ML, Block WM, Strickland MD, Kendall WL. 2001. Wildlife Study Design. Springer.
Nadkarni NM, Wheelwright NT, eds. 2000. Monteverde: Ecology and Conservation of a Tropical Cloud Forest. Oxford University Press.
Noss RF. 1996. The naturalists are dying off. Conservation Biology 10: 1–3.
Nuzzo R. 2015. How scientists fool themselves: And how they can stop. Nature 526: 182–185.
Ostrom E. 2009. A general framework for analyzing sustainability of social–ecological systems. Science 325: 419–422.
Pedaste M, Mäeots M, Siiman LA, de Jong T, van Riesen SAN, Kamp ET, Manoli CC, Zacharia ZC, Tsourlidaki E. 2015. Phases of inquiry-based learning: Definitions and the inquiry cycle. Educational Research Review 14: 47–61.
Peters SL, Malcolm JR, Zimmerman BL. 2006. Effects of selective logging on bat communities in the southeastern Amazon. Conservation Biology 20: 1410–1421.
Pickett STA, Kolasa J, Jones CG. 2007. Ecological Understanding: The Nature of Theory and the Theory of Nature, 2nd ed. Elsevier.
Pinker S. 2014. The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century. Penguin.
Popper K. 1959. The Logic of Scientific Discovery. Basic Books.
Price MV, Billick I. 2010. The imprint of place on ecology and ecologists. Pages 11–14 in Billick I, Price MV, eds. The Ecology of Place: Contributions of Place-Based Research to Ecological Understanding. University of Chicago Press.
Putz FE. 2020. Science-to-conservation disconnections in Borneo and British Columbia. Forestry Chronicle 96: 20–26.
Quinn GP, Keough MJ. 2002. Experimental Design and Data Analysis for Biologists. Cambridge University Press.
Quinn JF, Dunham AE. 1983. On hypothesis testing in ecology and evolution. American Naturalist 122: 602–617.
Ramage BS, et al. 2012. Pseudoreplication in tropical forests and the resulting effects on biodiversity conservation. Conservation Biology 27: 364–372.
Ricklefs RE. 2012. Naturalists, natural history and the nature of biological diversity. American Naturalist 179: 423–435.
Ríos-Saldaña CA, Delibes-Mateos M, Ferreira CC. 2018. Are fieldwork studies being relegated to second place in conservation science? Global Ecology and Conservation 14: e00389.
Ripple WJ, Wolf C, Newsome TM, Galetti M, Alamgir M, Crist E, Mahmoud MI, Laurance WF. 2017. World scientists' warning to humanity: A second notice. BioScience 67: 1026–1028.
Rodríguez S. 1840. Sociedades Americanas en 1828. Primera Parte. Luces y Virtudes Sociales. Imprenta del Mercurio.
Saris WE, Gallhofer IN. 2007. Design, Evaluation, and Analysis of Questionnaires for Survey Research. Wiley.
Scheiner S, Gurevitch J, eds. 2001. Design and Analysis of Ecological Experiments, 2nd ed. Oxford University Press.
Schwartz MA. 2008. The importance of stupidity in scientific research. Journal of Cell Science 121: 1771.
Sevillano L, Horvitz CC, Pratt PD. 2010. Natural enemy density and soil type influence growth and survival of Melaleuca quinquenervia seedlings. Biological Control 53: 168–177.
Shadish W, Cook T, Campbell D. 2002. Experimental and Quasi-Experimental Designs for Generalized Causal Inference. Houghton Mifflin.
Spurgeon DW. 2019. Common statistical mistakes in entomology: Pseudoreplication. American Entomologist 65: 16–18.
Stokes DE. 1997. Pasteur's Quadrant: Basic Science and Technological Innovation. Brookings Institution Press.
Sutherland WJ, ed. 2006. Ecological Census Techniques: A Handbook. Cambridge University Press.
Sutherland WJ, Dicks LV, Ockendon N, Petrovan SO, Smith RK, eds. 2019. What Works in Conservation. Open Book Publishers.
Tewksbury JJ, et al. 2014. Natural history's place in science and society. BioScience 64: 300–310.
Townsend CR, Scarsbrook MR, Dolédec S. 1997. The intermediate disturbance hypothesis, refugia, and biodiversity in streams. Limnology and Oceanography 42: 938–949.
Valiela I. 2001. Doing Science: Design, Analysis, and Communication of Scientific Research. Oxford University Press.
Weissgerber TL, Milic NM, Winham SJ, Garovic VD. 2015. Beyond bar and line graphs: Time for a new data presentation paradigm. PLOS Biology 13: e1002128.
Werling BP, et al. 2014. Perennial grasslands enhance biodiversity and multiple ecosystem services in bioenergy landscapes. Proceedings of the National Academy of Sciences 111: 1652–1657.
Willson MF, Armesto JJ. 2006. Is natural history really dead? Toward the rebirth of natural history. Revista Chilena de Historia Natural 79: 279–283.
Zeng Y, Maxwell S, Runting RK, Venter O, Watson JEM, Carrasco LM. 2020. Environmental destruction not avoided with the sustainable development goals. Nature Sustainability. https://doi.org/10.1038/s41893-020-0555-0.