On the Possibility of Crucial Experiments in Biology

Abstract

The article analyses in detail the Meselson–Stahl experiment, identifying two novel difficulties for the crucial experiment account, namely, the fragility of the experimental results and the fact that the hypotheses under scrutiny were not mutually exclusive. The crucial experiment account is rejected in favour of an experimental-mechanistic account of the historical significance of the experiment, emphasizing that the experiment generated data about the biochemistry of DNA replication that is independent of the testing of the semi-conservative, conservative, and dispersive hypotheses.

1 Introduction
2 The Meselson–Stahl Experiment
3 Some Difficulties for the Crucial Experiment Account
  3.1 Additional experiments were required
  3.2 Simplicity considerations are unconvincing
  3.3 The fragility of the experimental results
  3.4 The problematic interpretation of falsifying results
  3.5 Not all possible hypotheses considered or tested
  3.6 The tested hypotheses were not mutually exclusive
4 The Historical and Scientific Significance of the Meselson–Stahl Experiment
  4.1 The challenge for the crucial experiment account
  4.2 A mechanistic perspective on the experiment
  4.3 The experimental design and its independence from theoretical speculations
  4.4 The advantages of a ‘bottom-up’ mechanistic account
5 Conclusion

1 Introduction

The modern notion of the crucial experiment emerged from the analysis of historical episodes where a single experiment seems to have conclusively sealed the fate of two or more competing hypotheses. The strategy behind such experiments hinges on the testing of the hypotheses under scrutiny relative to an aspect of empirical reality about which each of the competing parties makes a different prediction. The result can shift the balance in favour of the hypothesis making the correct prediction and against rivals that fail to do so. What differentiates crucial experiments from other kinds of experiments is that the former not only conclusively support a hypothesis, but also provide good reasons to reject rival hypotheses.

As philosophers of science like to point out, things are not quite as simple. The most famous challenge to a straightforward interpretation of the results of a crucial experiment is the under-determination of scientific theory by evidence. One argument from under-determination is the problem of confirmation holism: Since the derivation of testable predictions usually requires additional auxiliary hypotheses, it is seldom possible to test a scientific hypothesis in isolation. As a result, falsifying results give no clear information about whether the tested or the auxiliary hypotheses are false (Duhem [1906], pp. 303–4). Moreover, by choosing a different set of auxiliary assumptions, it may be possible to save a hypothesis despite falsifying results (Quine [1951]; Putnam [1991]). A second argument from under-determination is the problem of unconceived alternatives: Even if the falsification of rival hypotheses is conclusive, this does not allow us to infer that the remaining hypothesis is true. A hypothesis cannot be confirmed against its rivals by means of crucial experiments because it cannot be ascertained that all possible alternatives have been considered (Duhem [1906], p. 311).
A similar shortcoming plagues abductive attempts to infer that the explanation that best responds to a set of epistemic virtues is true or the most likely to be true: the best explanation may simply be the best of a bad lot of false explanations (van Fraassen [1989], p. 143). In response to these challenges, it has been argued that good science cannot rely on ad hoc assumptions whose sole purpose is to save hypotheses from being falsified (Lakatos [1970]). The argument from unconceived possibilities can be weakened by showing that it is sometimes possible to consider all possibilities, for instance, because only a very limited number of hypotheses can explain all empirical data while remaining compatible with accepted theoretical frameworks (Roush [2005], p. 15; Franklin [2007]). This also undermines the ‘bad lot’ argument, since these hypotheses are good hypotheses capable of explaining available data (Weber [2009]). Even if there were other explanations that had not been taken into consideration, they would not be more theoretically accurate or plausible than the ones already under consideration. Furthermore, a combination of evidence and simplicity considerations can favour a hypothesis against its rivals. For instance, if the hypothesis favoured by the experimental results is also sufficient to explain a phenomenon without introducing additional assumptions, then we have no reason to prefer a rival that by itself is incapable of accounting for the same results without adding further ‘epicycles’ to the explanatory story (Weber [2009]). In light of such considerations, some authors—most notably Allan Franklin ([2007]), Sherrilyn Roush ([2005]), and Marcel Weber ([2009])—proceed to argue that there are at least some unquestionable examples of successful crucial experiments in the history of science, of which the Meselson–Stahl experiment stands out as one of the most striking illustrations.

In this article, I re-examine the Meselson–Stahl experiment (described in Section 2), identifying several difficulties for the crucial experiment account (Section 3). My critique is not based on the unconceived possibilities and the bad lot arguments, which I take to be successfully defused by Franklin, Roush, and Weber. I do think that Duhem was right in worrying about confirmation holism, and I identify several auxiliary assumptions that troubled scientists at the time the experiment was conducted. However, my analysis also shows that most auxiliary hypotheses have been independently tested, meaning that the difficulties raised by confirmation holism are not insurmountable. Hence, my arguments are not based on the premise that issues related to confirmation holism are intractable or that they have been left unsolved. I am instead concerned with the retrospective reconstruction objection (Lakatos [1970]; Hacking [1983]): in this case, the fact that distinct auxiliary hypotheses required testing over an extended period of time undermines both the epistemic claim that a single set of experimental results supported one hypothesis while rejecting all others and the historical claim that the fate of the tested hypotheses was conclusively decided at the time when the results of the experiment were published. I elaborate in more detail on the issue of auxiliary assumptions, making a case for the fragility of the experimental results by showing how slight variations of the experimental procedures would have failed to replicate the results.
Another set of difficulties targets the possibility of designing crucial experiments in biology. I argue that even if it is plausible to assume that all possible hypotheses could have been listed, this does not mean that all these possibilities could have been simultaneously tested. Moreover, there were (and still are) no reasons to believe that alternate mechanistic explanations in biology are mutually exclusive. This leads to a combinatorial increase in the number of possible explanations, as well as to an inability to leverage support for the leading hypothesis into a rejection of its rivals. In these respects, the Meselson–Stahl experiment was quite different from paradigmatic examples of crucial experiments from physics. The critical assessment of the crucial experiment account is followed by an attempt to explain the historical significance of the experiment based on the interpretation of the experimental results Meselson and Stahl presented to the scientific community in the conclusion of their 1958 paper (Section 4). I propose that the experiment was part of a broader research project aiming to elucidate the mechanisms of DNA replication. From the perspective of this project, the historical significance of the Meselson–Stahl experiment had little to do with the expectations raised by the crucial experiment account. The primary goal of the experiment was to provide information about the transfer of atoms from parent to daughter DNA, information in turn required to gain further insights into the chemical reactions involved in DNA synthesis. This objective could have been achieved irrespective of the rejection of rival hypotheses, and some information would have been gained even if none of the candidate hypotheses had been conclusively supported. Finally, in the conclusion of the article (Section 5), I summarize my findings, drawing some general implications for philosophy of science.

2 The Meselson–Stahl Experiment

The elucidation of the structure of DNA suggested the possibility of a semi-conservative mechanism of DNA replication (Watson and Crick [1953]): each of the two strands of the original DNA duplex serves as a template for the synthesis of a new complementary strand (Figure 1, left). Alternatively, the conservative hypothesis (Figure 1, centre) postulated that proteins bound to the DNA duplex distort it in such a way that both strands are exposed for hydrogen bonding and a new DNA duplex is synthesized (Bloch [1955]). According to the dispersive replication hypothesis (Figure 1, right), DNA is unwound into single strands, breaks are induced to eliminate supercoiling, complementary strands are then synthesized, and the segments are joined back together in two molecules of ‘patched’ duplex DNA, containing alternating pieces from the original and the newly synthesized DNA (Delbrück [1954]).

Figure 1. The Meselson–Stahl experiment.

Meselson and Stahl ([1958]) devised an experimental set-up that would allow them to distinguish between pre-existing parental DNA molecules and newly synthesized ones. They took advantage of the fact that it is possible to generate denser DNA species by feeding bacteria a nitrogen source containing the heavy 15N isotope, which is gradually incorporated into newly synthesized nitrogen-containing macromolecules such as DNA.
Denser DNA containing the 15N isotope could then be separated from lighter DNA containing 14N by means of a density gradient sedimentation technique known as isopycnic centrifugation. Bacteria were grown in a heavy 15N medium for several generations. As a result, the DNA extracted from these bacteria was denser than light DNA containing only 14N and hybrid DNA containing a mixture of 14N and 15N. After transferring the bacteria to a light 14N growth medium, the following approximate ratios of light, hybrid, and heavy DNA were observed: 0:100:0 for the first generation; 50:50:0 for the second generation; 75:25:0 for the third generation; and 90:10:0 for the fourth generation. The observed pattern was consistent with a primarily semi-conservative mode of replication. If replication had been primarily conservative, only light and heavy DNA should have been expected (50:0:50 for the first generation, 75:0:25 for the second generation, and so on). If replication had been primarily dispersive, all of the DNA would have been of an intermediate density, becoming lighter and lighter with each generation.

The above rendering is compatible with the claim that the experimental results supported the semi-conservative hypothesis against the conservative and dispersive alternatives (Roush [2005], pp. 14–6; Franklin [2007], pp. 236–42). A more complex version of this argument states that the semi-conservative hypothesis was positively selected as the best explanation via abductive reasoning favouring the simplest explanation that would not require the introduction of additional assumptions about what happens during the experimental procedure (Weber [2009]).
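The arithmetic behind these expected banding proportions can be made explicit. The sketch below is my own illustration rather than part of the original analysis: under idealized assumptions (synchronous divisions, fully labelled starting DNA, no recombination or fragmentation effects), it computes the expected light:hybrid:heavy fractions per generation for the semi-conservative and conservative hypotheses, and the fraction of parental 15N material per molecule for the dispersive hypothesis. The function names and idealizations are assumptions of the sketch, not of the 1958 paper.

```python
from fractions import Fraction

def semi_conservative(n):
    """Light:hybrid:heavy fractions after n generations of growth in 14N medium.
    Heavy parental strands are never destroyed; each ends up in a separate hybrid
    duplex, so 2*N0 hybrid molecules persist out of N0 * 2**n total molecules."""
    hybrid = Fraction(1, 2 ** (n - 1))
    return (1 - hybrid, hybrid, Fraction(0))

def conservative(n):
    """Light:hybrid:heavy fractions after n generations: the N0 parental duplexes
    stay fully heavy, while every newly synthesized duplex is fully light."""
    heavy = Fraction(1, 2 ** n)
    return (1 - heavy, Fraction(0), heavy)

def dispersive(n):
    """Fraction of parental (15N) material in every molecule after n generations:
    parental material is diluted evenly, so all DNA sits at one intermediate density."""
    return Fraction(1, 2 ** n)

for n in range(1, 5):
    sc = tuple(float(x) for x in semi_conservative(n))
    co = tuple(float(x) for x in conservative(n))
    print(f"generation {n}: semi-conservative {sc}, conservative {co}, "
          f"dispersive 15N fraction per molecule {float(dispersive(n)):.3f}")
```

For the semi-conservative case the sketch reproduces the 0:100:0, 50:50:0, and 75:25:0 pattern reported above, with 87.5:12.5:0 at the fourth generation, consistent with the approximate 90:10:0 figure.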
3 Some Difficulties for the Crucial Experiment Account

There are, however, reasons to doubt that the Meselson–Stahl experiment was a crucial experiment as typically understood by philosophers of science. There are reasons to doubt that the experimental results published in 1958 were sufficient to conclusively support the semi-conservative hypothesis (Sections 3.1–3.3) and to definitively refute the conservative and dispersive hypotheses (Section 3.4); in both cases, additional experiments were required. There are also reasons to doubt that the experimental design adopted by Meselson and Stahl aligns with typical examples of crucial experiments from physics, given that not all hypotheses available at the time were tested, and that the hypotheses considered were not mutually exclusive (Sections 3.5–3.6).

3.1 Additional experiments were required

One concern amply discussed in the scientific literature at the time was that, depending on the configuration of the DNA molecules during the centrifugation stage of the experiment, the results were compatible with several hypotheses. Additional experiments by Meselson and Stahl showed that heat-denatured hybrid DNA (where the H-bonds between two strands of DNA are broken by heat) separates into two subunits, each with half the molecular weight of the whole molecule, but different densities. This result ruled out a primarily dispersive mechanism of replication. However, it did not rule out a conservative mechanism. Meselson and Stahl explicitly acknowledge that the interpretation of the results depicted in Figure 1 relies on the assumption that the observed bands do not consist of end-to-end or laterally associated linear DNA fragments, as suggested by some proponents of conservative replication (Cavalieri et al. [1959]).1 It is only in conjunction with a series of subsequent experiments that the Meselson–Stahl experiment can conclusively show that DNA is primarily replicated in a semi-conservative fashion. In particular, double-labelling 13C/15N experiments showed that DNA fragments do not reassociate in an end-to-end configuration (Rolfe [1962]), while studies using 5-bromouracil (a nucleotide-like compound that is integrated in DNA and alters its pH) provided evidence that DNA double helices do not associate laterally in order to form quadruplex DNA (Baldwin and Shooter [1963]). Thus, despite its indisputable impact, the Meselson–Stahl experiment did not singlehandedly support the semi-conservative hypothesis, but required the assistance of additional experiments in order to do so. On the one hand, the episode illustrates Duhem’s concern that the interpretation of the experimental results requires additional auxiliary assumptions about what happens during the experimental procedure. On the other hand, it also shows that it is possible to adequately address these issues related to confirmation holism by systematically testing auxiliary assumptions, providing conclusive evidence for or against a given hypothesis. However, two problems remain for the crucial experiment account. First, distinct hypotheses were independently tested, meaning that there was no single experimental result supporting one hypothesis against its rivals. Second, the testing of various auxiliary assumptions spanned another decade of research, undermining the historical significance of the results as understood on the crucial experiment account, namely, that the fate of the three tested hypotheses was conclusively decided in 1958, when the results were published.

3.2 Simplicity considerations are unconvincing

In an attempt to solve the problem, Weber ([2009]) argues that the semi-conservative hypothesis was positively selected as the best explanation from the very beginning because it relied on the simplest interpretation of the results yielded by the experiment.2 Simplicity is defined here as the ability to account for a phenomenon or experimental result by relying on fewer auxiliary assumptions about what goes on during the experiment. As pointed out earlier, the conservative mechanism also could have explained the observed banding pattern in the Meselson–Stahl experiment. However, Weber argues, the latter is adequate only if we further assume, first, that DNA is sheared during the experiment and, second, that the resulting fragments reassociate. In contrast, the semi-conservative hypothesis is simpler because it can account for the observed results without any need for additional assumptions.3 Perhaps the most immediate objection to this line of argument is that there is no record in the primary scientific literature supporting the claim that simplicity played any role in the acceptance of the semi-conservative hypothesis. On the contrary, a series of additional experiments aimed at testing auxiliary assumptions strongly indicates that simplicity itself was not taken to provide any substantial justification. Philip Hanawalt ([2004]) attributes the preference for the semi-conservative hypothesis to previous studies suggesting that each chromosome inherits one parental chromatid (Taylor et al. [1957]).
However, the detection technique had insufficient resolution to discriminate between a partially dispersive and a fully semi-conservative mode of replication, or to conclusively demonstrate that DNA (rather than proteins or other molecular components) was labelled in the experiment. If there is such a historical connection, then it is quite plausible that the unsolved issues of Taylor and colleagues' initial study were what motivated Meselson and Stahl to develop higher resolution techniques, thus engaging in what James Marcum ([2007]) calls an ‘experimental series’, whereby each experiment raises research questions, anomalies, data interpretation issues, or experimental difficulties that are then addressed by subsequent experiments. A second concern hinges on the fact that Meselson and Stahl did make further assumptions about what happened during the experiment, thus undermining the notion of simplicity on which Weber relies. Among other things, they assumed that the bands in the CsCl gradient consisted of whole linear DNA. The point can be further pressed given that this assumption was, in fact, false. DNA was indeed fragmented during the experiment, as proposed by Cavalieri and colleagues, but the fragments did not reassociate. Weber ([2009], p. 29) attempts to defuse the difficulty by arguing that this false assumption didn’t make any difference for the interpretation of the results, because isopycnic centrifugation separates molecules by density, not length. However, it is not the length of the fragments that causes the problem here. It was later shown that bacterial chromosomes are circular, not linear as assumed by Meselson and Stahl. Parent–daughter circular DNA molecules remain temporarily interlocked (catenated), thus altering the proportion of hybrid DNA. Furthermore, in many experimental set-ups, linear, circular, and supercoiled DNAs have different densities, and therefore appear as distinct bands instead of the single band associated with linear DNA (Ts'o [1974]).

3.3 The fragility of the experimental results

The issue of circular topologies demonstrates that the apparent simplicity of auxiliary assumptions does not guarantee that these assumptions are true, which is what is ultimately required in order to circumvent under-determination from confirmation holism. The issue also illustrates another complication. Fragmentation—which in the Meselson–Stahl experimental design was the unplanned side effect of the injection of DNA samples through a syringe—turned out to be essential for the success of the experiment. The flip side of this happy accident is that the experiment could just as easily have been inconclusive. In the absence of thorough fragmentation, an unpredicted and less clean-cut pattern would have been observed due to the presence of many new DNA species, including catenated parent–daughter DNA molecules, circular and supercoiled species, as well as more exotic bits such as replication forks (Davis [2004]; Hanawalt [2004]). This does not mean that researchers would not have learned something about the distributions of atoms from parent to daughter DNA, or that they would not have figured out the source of the problem eventually. Nevertheless, the episode demonstrates the fragility of the experimental design. Slight and seemingly irrelevant variations of the experimental design would have resulted in a failure to replicate the experimental results of Meselson and Stahl.
In fact, many similar experiments did not replicate the results, but by that time enough was known about DNA topology, synthesis, and replication to make sense of the divergent results. The implication is that the extent to which experimental results robustly support a hypothesis depends in large measure on the ability to identify and test auxiliary assumptions.

3.4 The problematic interpretation of falsifying results

A similar set of difficulties faces the interpretation of the falsifying results. It is certainly true that the Meselson–Stahl and satellite experiments did not support the conservative and dispersive hypotheses. Nevertheless, these experiments were geared towards the detection of large amounts of DNA. The experiments conclusively favoured the semi-conservative hypothesis in the sense that DNA structures compatible with a semi-conservative mode of replication were detected. The non-detection of other kinds of structures was less conclusive, and this could have been attributed either to the fact that such structures do not exist or to the fact that they are too rare to be picked up by these experiments. The converse complication is also relevant. Whereas the limited sensitivity of an experimental set-up entails uncertainty about the correct interpretation of negative results, the detection of rare molecular structures is plagued by uncertainties about the biological relevance of these structures.4 The fact that large amounts of DNA structures compatible with the semi-conservative hypothesis were detected in dividing bacteria was taken as a clear indication that these structures are involved in DNA replication. This greatly simplified the interpretation of the experimental results, since there was no need to demonstrate the biological relevance of the detected DNA structures. That is, there was no need to show that the results were not experimental artefacts, and that the detected structures occur under physiological conditions and are specifically involved in DNA replication (as opposed to other biological activities, such as recombination and DNA repair). Yet such fortunate circumstances are quite exceptional in biology. Meselson and Stahl came close to facing a biological relevance problem when they considered the possibility that high recombination rates might complicate the interpretation of their experimental results by producing dispersive-like structures, which in turn would require further experiments to disentangle the relative contributions of replication and recombination. This realization played a role in their decision to work with Escherichia coli, rather than phages or eukaryotic cells, which proved to be an inspired choice (Holmes [2001]). Both difficulties are nonetheless illustrated by the fact that subsequent studies showed that recombination does in fact occur in E. coli and, as a result, dispersive-like DNA is produced in these cells. Furthermore, these DNA structures were discovered using techniques and an overall experimental design similar to that employed by Meselson and Stahl (Pettijohn and Hanawalt [1964]). To make the story even more complicated, ‘bacterial recombination’ turned out to be the result of a DNA repair mechanism involving DNA synthesis (rather than swapping of pre-existing strands).5 These problems target an important aspect of the crucial experiment account, namely, the claim that the Meselson–Stahl experiment provided conclusive evidence against the rival conservative and dispersive hypotheses.
The experiment did not definitively rule out these hypotheses in the case of E. coli and it certainly did not rule them out simpliciter. Evidence suggesting that it is unlikely that DNA is ever replicated via alternate mechanisms came much later, and is ultimately dependent on comparative studies showing that most of the DNA replication machinery is highly conserved. However, to this day no one can guarantee that alternate mechanisms are impossible or non-existent.

3.5 Not all possible hypotheses were considered and tested

One argument for the crucial experiment account is that the set of possible mechanisms is restricted to a manageable number because any good explanation has to take into account topological constraints imposed by the chemical structure of the genetic material (Roush [2005], p. 15). In more general terms, it seems reasonable to assume that as the number of empirical and theoretical constraints increases, the number of explanations capable of satisfying them is bound to decrease—in some cases, down to only a handful of possibilities. However, just because it is possible to enumerate all possibilities does not mean that it is possible to design a crucial experiment capable of discriminating among all candidates simultaneously. Even if the total number of possibilities is relatively small, the required experimental design may still be exceedingly complex, to the point where practical difficulties outweigh the potential payoffs of a crucial experiment approach. In the case of DNA replication, there were more possibilities than those tested by the Meselson–Stahl experiment, both in terms of hypotheses about the distributions of atoms from parent to daughter DNA, and in terms of topological configurations. With respect to the latter, circular and supercoiled configurations were not taken into account. With respect to the former, it is important to keep in mind that all three hypotheses considered by Meselson and Stahl shared the assumption that multi-nucleotide stretches of the parental DNA strands are passed on to the progeny. They differed in that the semi-conservative hypothesis postulated that parental DNA is preserved as whole single strands, the conservative hypothesis postulated a complete preservation of the parental double-stranded DNA, while the dispersive hypothesis postulated that multi-nucleotide fragments of single-stranded parental DNA are preserved. What was neglected was the possibility that parental DNA is degraded and the resulting pieces are used to synthesize new DNA. This would have opened the door to a completely new type of replication mechanism relying on non-DNA molecular structures as intermediary templates. Furthermore, within the general class of semi-conservative mechanisms, it was not necessarily the case that both parental strands serve as a template for the synthesis of new strands. Alternative, ‘master strand’ hypotheses proposed that only one parental strand is used as a template for the synthesis of both daughter strands, while the other is degraded (Kubitschek and Henderson [1966]). A crucial experiment capable of simultaneously differentiating between all of these possibilities would have required that each hypothesis make a distinct prediction. It seems doubtful that there are any tangible benefits to compensate for such a difficult-to-design and unlikely-to-succeed experiment.
3.6 The tested hypotheses were not mutually exclusive

The number of candidate hypotheses is not the only difficulty standing in the way of a crucial experiment. A second complication arises when more than one hypothesis can be true. This is far from trivial. If hypotheses are not mutually exclusive, a modest increase in the number of candidate hypotheses results in a combinatorial increase in the number of possible outcomes, while effectively invalidating any inferences about the truth or falsity of a given hypothesis from evidence for the truth or falsity of alternative hypotheses. In order to better understand the issue, it is useful to consider the classical example of the corpuscular and wave theories of light (Duhem [1906], pp. 305–11). Since each theory entails different predictions about the speed of light in different media, it seemed possible to select the explanation that accounted for the observed results and at the same time to eliminate the one that failed to do so, as attempted by Foucault in his famous 1850 experiment. Duhem explicitly assumed that behind the two hypotheses tested by Foucault were two distinct models of light, each built on different premises (laws of motion applied to corpuscles versus the model of a vibrating object), relying on contradictory assumptions about key aspects of reality (light has mass under corpuscular treatment, but has no mass if treated as a wave), and postulating seemingly irreconcilable ontologies (light either consists of corpuscles or is a wave travelling through a medium). These incompatibilities justified the mutual exclusivity of the wave and corpuscular theories of light. Hence, it seemed reasonable to assume that there were only two possible experimental outcomes, and that evidence for one theory automatically counted as evidence against its rival, while lack of evidence for one theory positively supported the other. In more general terms, if candidate hypotheses are mutually exclusive, this means, first, that the number of possible outcomes is significantly reduced because it cannot be the case that more than one hypothesis is true. Second, the interpretation of the results can benefit from an epistemic leveraging whereby the same evidence supports one hypothesis while providing reasons to reject other candidates. Granted, the effectiveness of such leveraging depends in no small measure on whether all the possibilities have been considered (van Fraassen [1989], p. 143; Duhem [1906], p. 311). However, inasmuch as the possible hypotheses are mutually exclusive, scientists may still rely on a weaker form of disjunctive elimination compatible with probabilistic and abductive approaches to confirmation: failure to confirm one hypothesis is expected to increase the probability that the remaining conceived or unconceived alternatives are correct, while the corroboration of a hypothesis is expected to decrease the probability that other hypotheses are correct. Unfortunately, in the case of DNA replication, there were no prior reasons to believe that the three tested hypotheses were mutually exclusive. Figuring out possible distributions of atoms and topological configurations was just the physical chemistry side of the problem. However, there was still the biology part to be factored in: just because there is a limited number of possibilities in terms of replication mechanisms, this does not entail that any given organism cannot use various combinations of these mechanisms, nor does it guarantee that all living organisms will use the same mechanisms.
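The combinatorial point made at the beginning of this section can be illustrated with a toy enumeration, which is my own illustration and not drawn from the historical debate: if the three candidate mechanisms were mutually exclusive, an experiment would face only three possible states of affairs, but if any combination of mechanisms may be operative in a given organism, the number of qualitatively distinct possibilities grows to 2^3 − 1 = 7, before even considering quantitative mixtures or variation across organisms.

```python
from itertools import combinations

mechanisms = ["semi-conservative", "conservative", "dispersive"]

# Mutually exclusive case: exactly one mechanism is operative, so an experiment
# faces at most len(mechanisms) possible states of affairs.
exclusive_outcomes = [(m,) for m in mechanisms]

# Non-exclusive case: any non-empty subset of mechanisms may be operative in a
# given organism (still ignoring quantitative mixtures and inter-organism variation).
combined_outcomes = [
    combo
    for r in range(1, len(mechanisms) + 1)
    for combo in combinations(mechanisms, r)
]

print(len(exclusive_outcomes))   # 3
print(len(combined_outcomes))    # 2**3 - 1 = 7
for combo in combined_outcomes:
    print(" + ".join(combo))
```

Already with three candidates, evidence that one mechanism operates no longer licenses the elimination of the other two, since three of the seven possibilities combine the supported mechanism with at least one rival.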
The issue is obvious in Meselson and Stahl’s discussion of their inconclusive results with salmon sperm DNA. Facing a failure to obtain similar results with eukaryotic DNA, they speculated not only about the possibility that the mechanisms of DNA replication may vary significantly between prokaryotes and eukaryotes, but also about potential structural differences in the genetic material itself.6 The upshot of these considerations is that the positive data for the semi-conservative explanation generated in the Meselson–Stahl experiment were insufficient for a conclusive refutation of the conservative and dispersive explanations. Rather, the experiment only demonstrated that the semi-conservative hypothesis accounts for the observed results within the limits of detection of a particular experimental set-up, involving a particular organism. Beyond these limits, or in different organisms, alternate hypotheses could have still turned out to be true.

The above is just an illustration of a common, yet often overlooked, obstacle limiting the possibility of designing and conducting crucial experiments in biology. The fact that a mechanism produces a certain phenomenon in one biological system offers no firm guarantees that the same or similar mechanisms produce the same or similar phenomena in other biological systems (Bechtel [2009]; Baetu [2016a]). Subtle and not so subtle differences are omnipresent, from variations in the highly conserved mechanisms that are shared by all living things (for example, slight variations of the genetic code, or the more significant differences between prokaryotic and eukaryotic genome expression mechanisms, which make possible the use of antibiotics), to differences in the mechanisms underpinning similar biological functions (for example, acquired immunity is a shared characteristic of all jawed vertebrates, yet the mechanisms underpinning it vary across species), to differences between individuals of the same species (for example, AIDS pathogenesis in humans is variable due, in part, to resistance mediated by truncated receptors in human cells and the presence or absence of specific mechanisms of defence). The limited potential for generalizability of mechanistic explanations is compounded by a second problem, namely, that even within the same biological system, the same phenomenon may be simultaneously generated by more than one mechanism, with some mechanisms having a higher biological relevance in one situation, but not another. Again, examples abound: Selection of the same phenotype may occur by means of different mechanisms in different populations of the same species; there is more than one DNA repair mechanism and more than one signal transduction pathway leading to the expression of a particular gene product; and more than one chemical mechanism may underlie any given biochemical reaction, and any given chemical reaction may be catalysed by different enzymes with distinct sequences, structures, and modes of operation. Redundancy in and duplications of the components of mechanisms are also common. For example, there are several DNA and RNA polymerases in both prokaryotes and eukaryotes, all of which perform the same basic polymerase function, but in the context of different mechanisms (DNA repair versus replication) and substrates (ribosomal RNA, mitochondrial versus nuclear genes).
The heterogeneity of biological systems hampers the ability to make inferences about the probable truth or falsity of one hypothesis given evidence supporting the truth or falsity of rival hypotheses. Evidence supporting a particular mechanistic hypothesis does not automatically demonstrate that the mechanism in question is the only possible mechanism, the mechanism uniquely responsible for producing a phenomenon in a biological system, or the most prevalent mechanism in a physiologically relevant context. The evidence only supports the claim that the mechanism explains a phenomenon in the context of a specific experimental set-up.

4 The Historical and Scientific Significance of the Meselson–Stahl Experiment

4.1 The challenge for the crucial experiment account

Given the above complications and difficulties, the challenge for the crucial experiment account is to show that the interpretation of the experimental results we now take for granted was justified in 1958, when Meselson and Stahl published their results. It might be argued that the additional experiments testing auxiliary assumptions are extensions of the Meselson–Stahl experiment aimed at enhancing the original experimental design in order to eliminate errors; variations of this argument can be found, in a more modest sense, in Mayo’s ([1996]; Mayo and Spanos [2009]) error-statistical account of scientific reasoning and, in a more ambitious sense, in Marcum’s ([2007]) notion of an experimental series culminating in the elucidation of the mechanisms of DNA replication. The proposal is perfectly legitimate. After all, it is often the case that an experiment builds upon previous experiments, so that its results are interpreted in light of previous experimental results. The Meselson–Stahl experiment is no exception to this rule. The 1958 paper includes data from the core experiment described in Section 2, plus a series of preparatory and auxiliary experiments required for the interpretation of the main experiment’s results. Thus understood, the experiment extended over a decade or more. However, the crucial experiment account attributes all or most of the impact of the whole series of experiments to a single set of experimental results published in 1958. This kind of discrepancy led Lakatos ([1970]) and Hacking ([1983]) to argue that crucial experiment accounts are the result of retrospective reconstructions that may explain the importance we attribute to an experimental finding today, but fail to account for the significance of the finding at the time of publication. In response, Weber ([2009]) argues that there are sometimes reasons other than experimental confirmation to accept some assumptions as likely to be correct. As discussed in Section 3.2, his proposal hinges on the notion that among a given set of explanations that satisfy the relevant empirical and theoretical constraints, some explanations will be preferable to others because of their epistemic virtues, such as simplicity. My concern is that even though such epistemic virtues guide scientific research, they are not adequate surrogates for epistemic justification. If anything, the case study demonstrates that simplicity can favour false assumptions.
4.2 A mechanistic perspective on the experiment

In the remaining sections of this article, I will attempt to provide an account of the historical importance of the Meselson–Stahl experiment despite the uncertainty about many of the background assumptions required for the interpretation of the results and the overall fragility of the experimental circumstances. This account takes at face value the ‘official’ assessment of the significance of the experimental results, as stated by Meselson and Stahl ([1958], p. 681) in the concluding lines of their paper. Instead of making claims about how the semi-conservative explanation was favoured against its rivals, they carefully describe the experiment in neutral terms, as providing evidence about the distribution of atoms in newly synthesized DNA, treating this evidence as one additional piece of information about the molecular mechanisms underpinning DNA replication in E. coli:

The results presented here give a detailed answer to the question of this distribution [of parental atoms among progeny molecules] and simultaneously direct our attention to other problems whose solution must be the next step in progress toward a complete understanding of the molecular basis of DNA duplication. What are the molecular structures of the subunits of E. coli DNA which are passed on intact to each daughter molecule? What is the relationship of these subunits to each other in a DNA molecule? What is the mechanism of the synthesis and dissociation of the subunits in vivo? ([1958], p. 681)

Two important points are explicitly made in the above quotation. First, the experiment was part of a broader research project aiming to provide a detailed, step-by-step description of the mechanism or mechanisms underpinning DNA replication (Figure 2, top).7 From the perspective of biochemistry and molecular biology, the ultimate goal was not to determine if DNA replication is semi-conservative, conservative, or dispersive. These terms refer to preliminary mechanism sketches, as understood by Machamer et al. ([2000])—that is, to incomplete descriptions of putative mechanisms. The immediate debate was about the chemical reactions involved in DNA replication. The ultimate goal was an elucidation of the mechanisms of DNA replication. The three main hypotheses considered in the scientific literature at the time had direct implications for the reactants involved in DNA synthesis, postulating a role for parental DNA as a direct template for the synthesis of new DNA. They disagreed about how it would act as a template, namely, whether it would act as double-stranded DNA, single-stranded DNA, or fragments of single-stranded DNA (Figure 2, middle). Knowledge about how parent DNA nitrogen atoms are redistributed in the progeny DNA (which proportion of them, and to which of the two strands of the progeny DNA) was meant to provide the first clues about the chemical reactions involved in DNA replication. In turn, information about these chemical reactions was needed to gain some insights into the chemical building blocks, metabolic requirements, and enzymatic machinery required to carry out the reactions. These details were essential to the elucidation of the mechanism or mechanisms of DNA replication (Figure 2, bottom).

Figure 2. Hypotheses guiding research and the subsequent elucidation of the mechanistic details.
From the perspective of a mechanistic project, positive or corroborating data are much more valuable than negative or falsifying data. Given that mechanistic explanations are seldom mutually exclusive (Section 3.6), showing that a phenomenon is not produced by a given mechanism in one experimental set-up does not automatically tell us that the phenomenon cannot be produced by that mechanism in some other set-up. Moreover, it usually doesn’t reveal anything about how the phenomenon is actually produced. On the other hand, positive evidence for one, two, or all three hypotheses considered by Meselson and Stahl would have provided data needed to elucidate the biochemistry of DNA replication. As argued earlier, Meselson and Stahl relied on a semi-quantitative detection technique that did not demonstrate that replication is exclusively semi-conservative (Section 3.4). It is interesting to note that neither Meselson and Stahl nor other research groups, including rivals, felt the need to conduct more sensitive experiments aimed at demonstrating that DNA is never replicated via a conservative or dispersive mechanism. I take this to be an indication that the relevant result was not the definitive rejection of alternate hypotheses, but the positive evidence that the genomic material is replicated in biologically significant proportions via a semi-conservative mechanism. That was all that was needed to proceed to the next step of the investigation, namely, the elucidation of the chemistry underlying DNA synthesis and the molecular machinery responsible for replicating DNA. From this perspective, we wouldn’t think any less of the Meselson–Stahl experiment had it turned out that DNA is also replicated via a dispersive, conservative, or some other kind of mechanism.

4.3 The experimental design and its independence from theoretical speculations

The second point to note in the conclusion of the 1958 paper is that, technically speaking, the experiment tracked what went into and what came out of a series of chemical reactions. One of the most important scientific achievements of Meselson and Stahl was the development of experimental techniques capable of distinguishing between the inputs and outputs of reactions involving DNA, thus making it possible to investigate not only DNA replication, but many other mechanisms involving nucleic acids, such as DNA repair, recombination, and transcription. More specifically, the experiment provided information about the ‘distribution of parental atoms among progeny molecules’ by comparing the relative densities of isotope-labelled and -unlabelled DNA species extracted from dividing bacteria. Thus, the experimental results were statements of the form ‘approximately x% of parental N atoms are passed to one or both daughter DNA strands in the nth cycle of replication’. The testing of the three main hypotheses circulating in the literature at the time was incorporated into the experimental design in a second step. Meselson and Stahl took advantage of the fact that for select values of the variables in the general form of the experimental results, the results could match predictions made by these hypotheses. However, an unambiguous one-to-one mapping of experimental results to predictions would obtain only if certain assumptions were correct. The most significant of these was the simplifying assumption that DNA replication proceeds primarily via a single mechanism.
This assumption served a very important purpose, namely, it created a disjunction in the space of possible explanations, analogous to that between the corpuscular and wave theories of light in Duhem’s example. Ignoring for a moment the additional foreseen and unforeseen complications that arise from experimental procedures, DNA topologies, and the possibility of additional replication mechanisms, the Meselson–Stahl experiment comes closest to the ideal of a crucial experiment with respect to a rather interesting strategy of ‘holistic confirmation’ (Baetu [2013], [2016a]). The general idea goes as follows: if three possible outcomes are predicted in conjunction with the auxiliary assumption that replication proceeds via only one mechanism, then the fact that one of the three predicted outcomes was observed (as opposed to an unpredicted banding pattern) supports not only the hypothesis that predicted it, but also the auxiliary assumption necessary for its prediction.8 As discussed in Section 3, a number of complications blurred an unambiguous one-to-one mapping of results and predictions, thus mitigating the success of the strategy and undermining the crucial experiment account. Notwithstanding this, holistic confirmation did pay off in the sense that it succeeded in establishing a link between theoretically motivated hypotheses about DNA replication and the data provided by a labelling experiment. This link was eventually strengthened by additional experiments, until the gap between the results of the Meselson–Stahl experiment and the prediction of the semi-conservative hypothesis was finally bridged.

The crucial experiment account focuses on the link between the results of the Meselson–Stahl experiment and previously proposed hypotheses about DNA replication. In doing so, it emphasizes a ‘top-down’ scientific methodology whereby researchers begin by theorizing, then derive predictions in order to test their hypotheses. It is not my intention to deny the importance of this link for the overall mechanistic project. Rather, my complaint is that the crucial experiment account downplays the fragility of the top-down link and its dependence on many unproven assumptions, while obscuring the fact that the experiment would have produced data about the biochemistry of DNA replication independently of the testing of any pre-existing hypotheses about DNA replication. In parallel with questions about what predictions would be entailed by theoretical speculations, Meselson and Stahl engaged in another, more general and robust ‘bottom-up’ experimental project: they asked whether it would be possible to distinguish experimentally between the inputs and outputs of the chemical reactions underlying DNA replication. As pointed out earlier, they succeeded in designing an experiment yielding results of the form ‘approximately x% of parental N atoms are passed to one or both daughter DNA strands in the nth cycle of replication’. Such statements already amounted to meaningful information about the biochemistry of DNA replication. Thus, the scope of the experiment was much wider than the testing of the three main hypotheses available at the time, and it could have succeeded in revealing something about the reactants involved in DNA replication, even if none of these hypotheses had turned out to be true. As for the scope of the general experimental design, it extended even further.
Not only were higher-resolution versions of the same experimental design used to test some of the auxiliary assumptions required to conclusively support the semi-conservative hypothesis, they were also used to investigate other mechanisms involving nucleic acids (Davis [2004]; Hanawalt [2004]).

4.4 The advantages of a bottom-up mechanistic account

If my reconstruction of the experimental design is correct, then it is more accurate to describe the Meselson–Stahl experiment as generating data about the biochemistry of DNA replication. The data were compatible with, and in conjunction with other experiments conclusively supported, the semi-conservative hypothesis put forward by Watson and Crick. This account offers some important advantages over the crucial experiment account. First, it can explain the historical and scientific significance of the Meselson–Stahl experiment without having to argue that the assumptions required by the crucial experiment account were somehow justified in 1958, despite the fact that they only later received experimental support. The significance of the experiment lies primarily in the techniques it employed and their discovery potential, as is often emphasized in scientific appraisals of the experiment. Second, my account has the advantage of aligning the Meselson–Stahl experiment with the general practice of biochemistry and molecular biology, instead of treating it as a peculiar experiment more akin to those conducted in physics. A discovery process involving a progression from the description of a phenomenon to an initial proposal of a mechanistic sketch, and then to the elucidation of the missing mechanistic details, as depicted in Figure 2, is characteristic of research in the life sciences (Bechtel [2006]; Darden [2006]; Craver [2007]; Bechtel and Richardson [2010]; Craver and Darden [2013]). Third, this approach takes into account the end-point targeted by the scientists involved in the debate. The crucial experiment account places an undue emphasis on the proposal and testing of mechanistic sketches, while obscuring both the experimental achievements and the fact that the debate was about the biochemistry of DNA synthesis. Finally, a bottom-up mechanistic account accurately matches the interpretation of the experimental results Meselson and Stahl themselves felt confident and justified in presenting to the scientific community.

5 Conclusion

There are undoubtedly limitations to what can be concluded about the role of crucial experiments in science at large based on the particular case of the Meselson–Stahl experiment. I certainly do not mean to suggest that there are no examples of reasonably tight crucial experiments or that such experiments cannot be conducted as a matter of principle. On the contrary, what emerged from my analysis is that under-determination issues can be kept under control. The analysis also revealed that certain factors are likely to facilitate or hinder attempts to design and conduct crucial experiments. One such factor is the presence (or absence) of theoretical rationales justifying the partitioning of the space of possible explanations into mutually exclusive scenarios. Such a partitioning is expected to have a rather sizeable positive (or negative) impact in terms of pruning down the number of possible experimental outcomes to a minimum.
It is also expected to provide some measure of disjunctive leveraging, whereby evidence for one hypothesis counts as evidence against its rivals, while lack of evidence for rivals counts as evidence for the remaining hypotheses. Unfortunately, research in the biological sciences is more likely to fall under the negative scenario, given that mechanistic explanations are not mutually exclusive. A second factor hinges on the possibility of harnessing positive results in order to holistically confirm a hypothesis along with the auxiliary assumptions that are jointly required to make a correct prediction. The Meselson–Stahl experiment demonstrates that such harnessing can be put to good use, although some caveats are easily foreseeable. The strategy is unlikely to work when there are a large number of auxiliary assumptions, and it is likely to fail if distinct sets of auxiliary assumptions yield the same prediction in conjunction with the tested hypothesis. Finally, crucial experiments are associated with a top-down, theory-guiding-experiment methodological approach. However, biological mechanisms are often elucidated in a bottom-up fashion, by building causal networks and then possible mechanisms from experimental results about causally relevant factors (Craver [2007]; Baetu [2012], [2016b]; Woodward [2013]). In such a scenario, there is neither much opportunity nor any genuine need to carry out crucial experiments.

Footnotes

1 ‘The results of the present experiment are in exact accord with the expectations of the Watson–Crick model for DNA duplication. However, it must be emphasized that it has not been shown that the molecular subunits found in the present experiment are single polynucleotide chains or even that the DNA molecules studied here correspond to single DNA molecules possessing the structure proposed by Watson and Crick’ (Meselson and Stahl [1958], p. 678).

2 Delbrück ([1954]) also attempted to provide a non-empirical justification based on topological considerations. He correctly predicted the supercoiling of the unwinding DNA helix as it is replicated and argued that a dispersive mechanism of DNA replication is necessary in order to release the tension building up in the replicating DNA—postulating, among other things, the existence of ‘nicks’ in the DNA that allow it to untwist. In the end, Delbrück’s dispersive mechanism was abandoned, yet key elements of his account were incorporated in the mechanism of DNA replication as we understand it today, most notably the coiling of the DNA during replication and the cleavage of DNA, as well as other means of releasing the coiling tension that builds up (see details in Figure 2, bottom).

3 More precisely, Weber ([2009], p. 38) frames simplicity in terms of an absence of mechanisms other than the ones postulated by the hypothesis in question: ‘alternative mechanisms would require add-on mechanisms or “epicycles” in order to explain the Meselson–Stahl data’.

4 In such cases, researchers rely on experiments demonstrating that interventions on molecular structures have an effect on biological activities (Baetu [2012]; Craver [2007]; Woodward [2013]). Perhaps the best known varieties of this kind of experiment are the ‘knock-out’ and the ‘transgenic’ experiments, whereby the elimination or addition of a particular molecular component results in a loss or gain of biological activity.
5 Experiments relying on different techniques and experimental set-ups also revealed the existence of a variety of inter- and intra-molecular DNA secondary structures, including quadruplex DNA (Keniry [2001]). The physiological relevance of quadruplex DNA is still debated, although it may play a role in regulating and facilitating DNA replication (Bochman et al. [2012]).

6 ‘On the one hand, if we assume that salmon DNA contains subunits analogous to those found in E. coli DNA, then we must suppose that the subunits of salmon DNA are bound together more tightly than those of the bacterial DNA. On the other hand, if we assume that the molecules of salmon DNA do not contain these subunits, then we must concede that the bacterial DNA molecule is a more complex structure than is the molecule of salmon DNA. The latter interpretation challenges the sufficiency of the Watson–Crick DNA model to explain the observed distribution of parental nitrogen atoms among progeny molecules’ (Meselson and Stahl [1958], p. 681).

7 It has been argued that some of the most successful explanations in the life sciences amount to descriptions of mechanisms (Bechtel [2006], [2008]; Craver [2007]; Darden [2006]; Wimsatt [1972]). Machamer et al. ([2000], p. 3) define mechanisms as ‘entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions’.

8 A more ambitious version manages the holistic confirmation of two auxiliary assumptions: if three possible outcomes are predicted, in conjunction with the auxiliary assumptions that parental DNA is not degraded during replication and that replication proceeds via only one mechanism, then the fact that a semi-conservative outcome was observed supported not only the hypothesis that predicted it, but also the two auxiliary assumptions necessary for the prediction. In contrast, if a dispersive pattern had been observed, then the results would have been inconclusive since it was possible that the banding pattern was produced by dispersive replication and/or the alternate mechanisms involving the degradation of the parental DNA.

Acknowledgements

I would like to thank Richard Dawid, Laura Nuño de la Rosa, Dan Nicholson, Katinka Quintelier, as well as two anonymous reviewers for helpful discussion and comments on earlier drafts. This work was in part supported by a generous fellowship from the Konrad Lorenz Institute for Evolution and Cognition Research.

References

Baetu T. M. [2012]: ‘Filling in the Mechanistic Details: Two-Variable Experiments as Tests for Constitutive Relevance’, European Journal for Philosophy of Science, 2, pp. 337–53.
Baetu T. M. [2013]: ‘Chance, Experimental Reproducibility, and Mechanistic Regularity’, International Studies in History and Philosophy of Science, 27, pp. 255–73.
Baetu T. M. [2016a]: ‘The “Big Picture”: The Problem of Extrapolation in Basic Research’, British Journal for the Philosophy of Science, 67, pp. 941–64.
Baetu T. M. [2016b]: ‘From Interventions to Mechanistic Explanations’, Synthese, 193, pp. 3311–27.
Baldwin R. L., Shooter E. M. [1963]: ‘The Alkaline Transition of BU-Containing DNA and Its Bearing on the Replication of DNA’, Journal of Molecular Biology, 7, pp. 511–26.
[ 2006 ]: Discovering Cell Mechanisms: The Creation of Modern Cell Biology , Cambridge : Cambridge University Press . Bechtel W. [ 2008 ]: Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience , New York : Routledge . Bechtel W. [ 2009 ]: ‘ Generalization and Discovery by Assuming Conserved Mechanisms: Cross-Species Research on Circadian Oscillators ’, Philosophy of Science , 76 , pp. 762 – 73 . Google Scholar CrossRef Search ADS Bechtel W. , Richardson R. [ 2010 ]: Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research , Cambridge, MA : MIT Press . Bloch D. P. [ 1955 ]: ‘ A Possible Mechanism for the Replication of the Helical Structure of Desoxyribonucleic Acid ’, Proceedings of the National Academy of Science , 41 , pp. 1058 – 64 . Google Scholar CrossRef Search ADS Bochman M. L. , Paeschke K. , Zakian V. A. [ 2012 ]: ‘ DNA Secondary Structures: Stability and Function of G-Quadruplex Structures ’, Nature Reviews Genetics , 13 , pp. 770 – 80 . Google Scholar CrossRef Search ADS PubMed Cavalieri L. F. , Rosenberg B. H. , Deutsch J. F. [ 1959 ]: ‘ The Subunit of Deoxyribonucleic Acid ’, Biochemical and Biophysical Research Communications , 1 , pp. 124 – 8 . Google Scholar CrossRef Search ADS Craver C. [ 2007 ]: Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience , Oxford : Oxford University Press . Craver C. , Darden L. [ 2013 ]: In Search of Biological Mechanisms: Discoveries across the Life Sciences , Chicago, IL : University of Chicago Press . Google Scholar CrossRef Search ADS Darden L. [ 2006 ]: Reasoning in Biological Discoveries: Essays on Mechanisms, Interfield Relations, and Anomaly Resolution , Cambridge : Cambridge University Press . Google Scholar CrossRef Search ADS Davis T. H. [ 2004 ]: ‘ Meselson and Stahl: The Art of DNA Replication ’, Proceedings of the National Academy of Science , 101 , pp. 17895 – 96 . Google Scholar CrossRef Search ADS Delbrück M. [ 1954 ]: ‘ On the Replication of Deoxyribonucleic Acid ’, Proceedings of the National Academy of Science , 40 , pp. 783 – 8 . Google Scholar CrossRef Search ADS Duhem P. [ 1906 ]: The Aim and Structure of Physical Theory , Princeton, NJ : Princeton University Press . Franklin A. [ 2007 ]. ‘The Role of Experiments in the Natural Sciences: Examples from Physics and Biology’, in Kuipers T. (ed.), General Philosophy of Science: Focal Issues , Amsterdam : Elsevier , pp. 275 – 302 . Hacking I. [ 1983 ]: Representing and Intervening , Cambridge : Cambridge University Press . Google Scholar CrossRef Search ADS Hanawalt P. C. [ 2004 ]: ‘ Density Matters: The Semiconservative Replication of DNA ’, Proceedings of the National Academy of Science , 101 , pp. 17889 – 94 . Google Scholar CrossRef Search ADS Holmes F. L. [ 2001 ]: Meselson, Stahl, and the Replication of DNA: A History of the Most Beautiful Experiment in Biology , New Haven, CT : Yale University Press . Google Scholar CrossRef Search ADS Keniry M. A. [ 2001 ]: ‘ Quadruplex Structures in Nucleic Acids ’, Biopolymers , 56 , pp. 123 – 46 . Google Scholar CrossRef Search ADS Kubitschek H. E. , Henderson T. R. [ 1966 ]: ‘ DNA Replication ’, Proceedings of the National Academy of Science , 55 , pp. 512 – 19 . Google Scholar CrossRef Search ADS Lakatos I. [ 1970 ]: ‘Falsification and the Methodology of Scientific Research Programmes’, in Lakatos I. , Musgrave A. (eds), Criticism and the Growth of Knowledge , Cambridge : Cambridge University Press . Google Scholar CrossRef Search ADS Machamer P. , Darden L. 
, Craver C. [ 2000 ]: ‘ Thinking about Mechanisms ’, Philosophy of Science , 67 , pp. 1 – 25 . Google Scholar CrossRef Search ADS Marcum J. A. [ 2007 ]: ‘ Experimental Series and the Justification of Temin’s DNA Provirus Hypothesis ’, Synthese , 154 , pp. 259 – 92 . Google Scholar CrossRef Search ADS Mayo D. G. [ 1996 ]: Error and the Growth of Experimental Knowledge , Chicago, IL : University of Chicago Press . Google Scholar CrossRef Search ADS Mayo D. G. , Spanos A. [ 2009 ]: Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science , Cambridge : Cambridge University Press . Google Scholar CrossRef Search ADS Meselson M. , Stahl F. W. [ 1958 ]: ‘ The Replication of DNA in Escherichia coli ’, Proceedings of the National Academy of Science , 44 , pp. 671 – 82 . Google Scholar CrossRef Search ADS Pettijohn D. , Hanawalt P. C. [ 1964 ]: ‘ Evidence for Repair-Replication of Ultraviolet Damaged DNA in Bacteria ’, Journal of Molecular Biology , 9 , pp. 395 – 410 . Google Scholar CrossRef Search ADS PubMed Putnam H. [ 1991 ]: ‘The “Corroboration” of Theories’, in Boyd R. , Gasper P. , Trout J. D. (eds), The Philosophy of Science , Cambridge, MA : MIT Press . Quine W. V. [ 1951 ]: ‘ Two Dogmas of Empiricism ’, Philosophical Review , 60 , pp. 20 – 43 . Google Scholar CrossRef Search ADS Rolfe R. [ 1962 ]: ‘ The Molecular Arrangement of the Conserved Subunits of DNA ’, Journal of Molecular Biology , 4 , pp. 22 – 30 . Google Scholar CrossRef Search ADS PubMed Roush S. [ 2005 ]: Tracking Truth: Knowledge, Evidence, and Science , Oxford : Oxford University Press . Google Scholar CrossRef Search ADS Taylor J. H. , Woods P. S. , Hughes W. L. [ 1957 ]: ‘ The Organization and Duplication of Chromosomes as Revealed by Autoradiographic Studies Using Tritium-Labeled Thymidine ’, Proceedings of the National Academy of Science , 43 , pp. 122 – 7 . Google Scholar CrossRef Search ADS Ts'o P. O. P. [ 1974 ]: Basic Principles in Nucleic Acid Chemistry , Volume 2 , London : Academic Press . van Fraassen B. C. [ 1989 ]: Laws and Symmetry , New York : Oxford University Press . Google Scholar CrossRef Search ADS Watson J. D. , Crick F. H. [ 1953 ]: ‘ Genetical Implications of the Structure of Deoxyribonucleic Acid ’, Nature , 171 , pp. 964 – 7 . Google Scholar CrossRef Search ADS PubMed Weber M. [ 2009 ]: ‘ The Crux of Crucial Experiments: Duhem’s Problems and Inference to the Best Explanation ’, British Journal for the Philosophy of Science , 60 , pp. 19 – 49 . Google Scholar CrossRef Search ADS Wimsatt W. C. [ 1972 ]. ‘Complexity and Organization’, in Schaffner K. F. , Cohen R. S. (eds), Proceedings of the Philosophy of Science Association , Dordrecht : Reidel , pp. 67 – 86 . Woodward J. [ 2013 ]: ‘ Mechanistic Explanation: Its Scope and Limits ’, Proceedings of the Aristotelian Society , 87 , pp. 39 – 65 . Google Scholar CrossRef Search ADS © The Author 2017. Published by Oxford University Press on behalf of British Society for the Philosophy of Science. All rights reserved. For Permissions, please email: journals.permissions@oup.com http://www.deepdyve.com/assets/images/DeepDyve-Logo-lg.png The British Journal for the Philosophy of Science Oxford University Press

On the Possibility of Crucial Experiments in Biology

Loading next page...
 
/lp/ou_press/on-the-possibility-of-crucial-experiments-in-biology-EzdpK8A0w7
Publisher
Oxford University Press
Copyright
© The Author 2017. Published by Oxford University Press on behalf of British Society for the Philosophy of Science. All rights reserved. For Permissions, please email: journals.permissions@oup.com
ISSN
0007-0882
eISSN
1464-3537
D.O.I.
10.1093/bjps/axx013
Publisher site
See Article on Publisher Site

Abstract

Abstract The article analyses in detail the Meselson–Stahl experiment, identifying two novel difficulties for the crucial experiment account, namely, the fragility of the experimental results and the fact that the hypotheses under scrutiny were not mutually exclusive. The crucial experiment account is rejected in favour of an experimental-mechanistic account of the historical significance of the experiment, emphasizing that the experiment generated data about the biochemistry of DNA replication that is independent of the testing of the semi-conservative, conservative, and dispersive hypotheses. 1 Introduction 2 The Meselson–Stahl Experiment 3 Some Difficulties for the Crucial Experiment Account 3.1 Additional experiments were required 3.2 Simplicity considerations are unconvincing 3.3 The fragility of the experimental results 3.4 The problematic interpretation of falsifyng results 3.5 Not all possible hypotheses considered or tested 3.6 The tested hypotheses were not mutually exclusive 4 The Historical and Scientific Significance of the Meselson–Stahl Experiment 4.1 The challenge for the crucial experiment account 4.2 A mechanistic perspective on the experiment 4.3 The experimental design and its independence from theoretical speculations 4.4 The advantages of a ‘bottom-up’ mechanistic account 5 Conclusion 1 Introduction The modern notion of the crucial experiment emerged from the analysis of historical episodes where a single experiment seems to have conclusively sealed the fate of two or more competing hypotheses. The strategy behind such experiments hinges on the testing of the hypotheses under scrutiny relative to an aspect of empirical reality about which each of the competing parties makes a different prediction. The result can shift the balance in favour of the hypothesis making the correct prediction and against rivals that fail to do so. What differentiates crucial experiments from other kinds of experiments is that the former don’t only conclusively support a hypothesis, they also provide good reasons to reject rival hypotheses. As philosophers of science like to point out, things are not quite as simple. The most famous challenge to a straightforward interpretation of the results of a crucial experiment is the under-determination of scientific theory by evidence. One argument from under-determination is the problem of confirmation holism: Since the derivation of testable predictions usually requires additional auxiliary hypotheses, it is seldom possible to test a scientific hypothesis in isolation. As a result, falsifying results give no clear information about whether the tested or the auxiliary hypotheses are false (Duhem [1906], pp. 303–4). Moreover, by choosing a different set of auxiliary assumptions, it may be possible to save a hypothesis despite falsifying results (Quine [1951]; Putnam [1991]). A second argument from under-determination is the problem of unconceived alternatives: Even if the falsification of rival hypotheses is conclusive, this does not allow us to infer that the remaining hypothesis is true. A hypothesis cannot be confirmed against its rivals by means of crucial experiments because it cannot be ascertained that all possible alternatives have been considered (Duhem [1906], p. 311). A similar shortcoming plagues abductive attempts to infer that the explanation that best responds to a set of epistemic virtues is true or the most likely to be true: the best explanation may simply be the best of a bad lot of false explanations (van Fraassen [1989], p. 143). 
Another set of difficulties targets the possibility of designing crucial experiments in biology. I argue that even if it is plausible to assume that all possible hypotheses could have been listed, this does not mean that all these possibilities could have been simultaneously tested.
Moreover, there were (and still are) no reasons to believe that alternate mechanistic explanations in biology are mutually exclusive. This leads to a combinatorial increase in the number of possible explanations, as well as to an inability to leverage a rejection of rival hypotheses given support for the leading hypothesis. In these respects, the Meselson–Stahl experiment was quite different from paradigmatic examples of crucial experiments from physics. The critical assessment of the crucial experiment account is followed by an attempt to explain the historical significance of the experiment based on the interpretation of the experimental results Meselson and Stahl presented to the scientific community in the conclusion of their 1958 paper (Section 4). I propose that the experiment was part of a broader research project aiming to elucidate the mechanisms of DNA replication. From the perspective of this project, the historical significance of the Meselson–Stahl experiment had little to do with the expectations raised by the crucial experiment account. The primary goal of the experiment was to provide information about the transfer of atoms from parent to daughter DNA, information in turn required to gain further insights into the chemical reactions involved in DNA synthesis. This objective could have been achieved irrespective of the rejection of rival hypotheses, and some information would have been gained even if none of the candidate hypotheses had been conclusively supported. Finally, in the conclusion of the article (Section 5), I summarize my findings, drawing some general implications for philosophy of science. 2 The Meselson–Stahl Experiment The elucidation of the structure of DNA suggested the possibility of a semi-conservative mechanism of DNA replication (Watson and Crick [1953]): each of the two strands of the original DNA duplex serves as a template for the synthesis of a new complementary strand (Figure 1, left). Alternatively, the conservative hypothesis (Figure 1, centre) postulated that proteins bound to the DNA duplex distort it in such a way that both strands are exposed for hydrogen bonding and a new DNA duplex is synthesized (Bloch [1955]). According to the dispersive replication hypothesis (Figure 1, right), DNA is unwound into single strands, breaks are induced to eliminate supercoiling, complementary strands are then synthesized, and the segments are joined back together in two molecules of ‘patched’ duplex DNA, containing alternating pieces from the original and the newly synthesized DNA (Delbrück [1954]). Figure 1. The Meselson–Stahl experiment. Meselson and Stahl ([1958]) devised an experimental set-up that would allow them to distinguish between pre-existing parental DNA molecules and newly synthesized ones. They took advantage of the fact that it is possible to generate denser DNA species by feeding bacteria a nitrogen source containing the heavy 15N isotope, which is gradually incorporated into newly synthesized nitrogen-containing macromolecules such as DNA. Denser DNA containing the 15N isotope could then be separated from lighter DNA containing 14N by means of a density gradient sedimentation technique known as isopycnic centrifugation. Bacteria were grown in a heavy 15N medium for several generations. As a result, the DNA extracted from these bacteria was denser than light DNA containing only 14N and hybrid DNA containing a mixture of 14N and 15N.
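Before turning to the observed results, it may help to make the arithmetic behind the three predictions explicit. The following short Python sketch is purely illustrative and is not part of Meselson and Stahl's analysis; it assumes an initial population of fully heavy duplexes, synchronous divisions, no degradation of parental material, and a single mode of replication, and computes the expected proportions of light, hybrid, and heavy DNA after n generations under each hypothesis.

```python
from fractions import Fraction

def semi_conservative(n):
    # The two parental strands survive, each in a hybrid duplex, out of 2**n duplexes.
    hybrid = Fraction(2, 2 ** n)
    return {"light": 1 - hybrid, "hybrid": hybrid, "heavy": Fraction(0)}

def conservative(n):
    # The parental duplex is passed on intact; every other duplex is fully light.
    heavy = Fraction(1, 2 ** n)
    return {"light": 1 - heavy, "hybrid": Fraction(0), "heavy": heavy}

def dispersive(n):
    # Every duplex carries the same, ever-diluted share of parental material,
    # so all of the DNA bands at a single intermediate density.
    return {"parental fraction in every duplex": Fraction(1, 2 ** n)}

for n in range(1, 5):
    print(n, semi_conservative(n), conservative(n), dispersive(n))
```

On these idealizing assumptions, the semi-conservative hypothesis predicts 0:100:0, 50:50:0, 75:25:0, and 87.5:12.5:0 for the first four generations, which the observed ratios reported below approximate; the conservative hypothesis predicts a persistent heavy band and no hybrid band; and the dispersive hypothesis predicts a single band drifting towards the light position.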
After transferring bacteria to a light 14N growth medium, the following approximate ratios between the percentages of light, hybrid, and heavy DNA were observed: 0:100:0 for the first generation; 50:50:0 for the second generation; 75:25:0 for the third generation; and 90:10:0 for the fourth generation. The observed pattern was consistent with a primarily semi-conservative mode of replication. If replication had been primarily conservative, only light and heavy DNA should have been expected (50:0:50 for the first generation, 75:0:25 for the second generation, and so on). If replication had been primarily dispersive, all of the DNA would have been of an intermediate density, becoming lighter and lighter with each generation. The above rendering is compatible with the claim that the experimental results supported the semi-conservative hypothesis against the conservative and dispersive alternatives (Roush [2005], pp. 14–6; Franklin [2007], pp. 236–42). A more complex version of this argument states that the semi-conservative hypothesis was positively selected as the best explanation via abductive reasoning favouring the simplest explanation that would not require the introduction of additional assumptions about what happens during the experimental procedure (Weber [2009]). 3 Some Difficulties for the Crucial Experiment Account There are, however, reasons to doubt that the Meselson–Stahl experiment was a crucial experiment as typically understood by philosophers of science. In particular, there are reasons to doubt that the experimental results published in 1958 were sufficient to conclusively support the semi-conservative hypothesis (Sections 3.1–3.3) and to definitively refute the conservative and dispersive hypotheses (Section 3.4); in both cases, additional experiments were required. There are also reasons to doubt that the experimental design adopted by Meselson and Stahl aligns with typical examples of crucial experiments from physics, given that not all hypotheses available at the time were tested, and that the hypotheses considered were not mutually exclusive (Sections 3.5–3.6). 3.1 Additional experiments were required One concern amply discussed in the scientific literature at the time was that, depending on the configuration of the DNA molecules during the centrifugation stage of the experiment, the results were compatible with several hypotheses. Additional experiments by Meselson and Stahl showed that heat-denatured hybrid DNA (where the H-bonds between two strands of DNA are broken by heat) separates into two subunits, each with half the molecular weight of the whole molecule, but different densities. This result ruled out a primarily dispersive mechanism of replication. However, this result did not rule out a conservative mechanism. Meselson and Stahl explicitly acknowledge that the interpretation of the results depicted in Figure 1 relies on the assumption that the observed bands do not consist of end-to-end or laterally associated linear DNA fragments, as suggested by some proponents of conservative replication (Cavalieri et al. [1959]).1 It is only in conjunction with a series of subsequent experiments that the Meselson–Stahl experiment can conclusively show that DNA is primarily replicated in a semi-conservative fashion.
In particular, double-labelling 13C/15N experiments showed that DNA fragments do not reassociate in an end-to-end configuration (Rolfe [1962]), while studies using 5-bromouracil (a nucleotide-like compound that is incorporated into DNA and alters its density) provided evidence that DNA double helices do not associate laterally in order to form quadruplex DNA (Baldwin and Shooter [1963]). Thus, despite its indisputable impact, the Meselson–Stahl experiment did not singlehandedly support the semi-conservative hypothesis, but required the assistance of additional experiments in order to do so. On the one hand, the episode illustrates Duhem’s concern that the interpretation of the experimental results requires additional auxiliary assumptions about what happens during the experimental procedure. On the other hand, it also shows that it is possible to adequately address these issues related to confirmation holism by systematically testing auxiliary assumptions, providing conclusive evidence for or against a given hypothesis. However, two problems remain for the crucial experiment account. First, distinct auxiliary hypotheses were tested in separate experiments, meaning that there was no single experimental result supporting one hypothesis against its rivals. Second, the testing of various auxiliary assumptions spanned another decade of research, undermining the historical significance of the results as understood on the crucial experiment account, namely, that the fate of the three tested hypotheses was conclusively decided in 1958, when the results were published. 3.2 Simplicity considerations are unconvincing In an attempt to solve the problem, Weber ([2009]) argues that the semi-conservative hypothesis was positively selected as the best explanation from the very beginning because it relied on the simplest interpretation of the results yielded by the experiment.2 Simplicity is defined here as the ability to account for a phenomenon or experimental result by relying on fewer auxiliary assumptions about what goes on during the experiment. As pointed out earlier, the conservative mechanism also could have explained the observed banding pattern in the Meselson–Stahl experiment. However, Weber argues, the latter is adequate only if we further assume, first, that DNA is sheared during the experiment and, second, that the resulting fragments reassociate. In contrast, the semi-conservative hypothesis is simpler because it can account for the observed results without any need for additional assumptions.3 Perhaps the immediate objection that comes to mind against this line of argument is that there is no record in the primary scientific literature supporting the claim that simplicity played any role in the acceptance of the semi-conservative hypothesis. On the contrary, a series of additional experiments aimed at testing auxiliary assumptions strongly indicates that simplicity itself was not taken to provide any substantial justification. Philip Hanawalt ([2004]) attributes the preference for the semi-conservative hypothesis to previous studies suggesting that each chromosome inherits one parental chromatid (Taylor et al. [1957]). However, the detection technique had insufficient resolution to discriminate between a partially dispersive and a fully semi-conservative mode of replication, or to conclusively demonstrate that DNA (rather than proteins or other molecular components) was labelled in the experiment.
If there is such a historical connection, then it is quite plausible that the unsolved issues of Taylor and colleagues' initial study were what motivated Meselson and Stahl to develop higher-resolution techniques, thus engaging in what James Marcum ([2007]) calls an ‘experimental series’, whereby each experiment raises research questions, anomalies, data interpretation issues, or experimental difficulties that are then addressed by subsequent experiments. A second concern hinges on the fact that Meselson and Stahl did make further assumptions about what happened during the experiment, thus undermining the notion of simplicity on which Weber relies. Among other things, they assumed that the bands in the CsCl gradient consisted of whole linear DNA. The point can be further pressed given that this assumption was, in fact, false. DNA was indeed fragmented during the experiment, as proposed by Cavalieri and colleagues, but the fragments did not reassociate. Weber ([2009], p. 29) attempts to defuse the difficulty by arguing that this false assumption didn’t make any difference for the interpretation of the results, because isopycnic centrifugation separates molecules by density, not length. However, it is not the length of the fragments that causes the problem here. It was later shown that bacterial chromosomes are circular, not linear as assumed by Meselson and Stahl. Parent–daughter circular DNA molecules remain temporarily interlocked (catenated), thus altering the proportion of hybrid DNA. Furthermore, in many experimental set-ups, linear, circular, and supercoiled DNAs have different densities, and therefore appear as distinct bands instead of the single band associated with linear DNA (Ts'o [1974]). 3.3 The fragility of the experimental results The issue of circular topologies demonstrates that the apparent simplicity of auxiliary assumptions does not guarantee that these assumptions are true, which is what is ultimately required in order to circumvent under-determination from confirmation holism. The issue also illustrates another complication. Fragmentation—which in the Meselson–Stahl experimental design was the unplanned side effect of the injection of DNA samples through a syringe—turned out to be essential for the success of the experiment. The flip side of this happy accident is that the experiment could just as easily have been inconclusive. In the absence of thorough fragmentation, an unpredicted and less clean-cut pattern would have been observed due to the presence of many new DNA species, including catenated parent–daughter DNA molecules, circular and supercoiled species, as well as more exotic bits such as replication forks (Davis [2004]; Hanawalt [2004]). This does not mean that researchers would not have learned something about the distributions of atoms from parent to daughter DNA, or that they would not have figured out the source of the problem eventually. Nevertheless, the episode demonstrates the fragility of the experimental design. Slight and seemingly irrelevant variations of the experimental design would have resulted in a failure to replicate the experimental results of Meselson and Stahl. In fact, many similar experiments did not replicate the results, but by that time enough was known about DNA topology, synthesis, and replication to make sense of the divergent results. The implication is that the extent to which experimental results robustly support a hypothesis depends in large measure on the ability to identify and test auxiliary assumptions.
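To make the fragility point concrete, the following back-of-the-envelope sketch illustrates how an unplanned species such as a catenated parent–daughter pair would have landed between the expected bands. It is only an illustration of the argument above, not a reconstruction of the original analysis: it treats buoyant density as a linear function of the fraction of heavy nitrogen carried by a sedimenting particle, uses approximate density values for fully light and fully heavy E. coli DNA, and ignores the additional density shifts associated with circular and supercoiled conformations.

```python
# Illustrative values only: approximate buoyant densities (g/cm^3) of fully 14N
# and fully 15N E. coli DNA in a CsCl gradient; the exact numbers do not matter
# for the qualitative point.
LIGHT, HEAVY = 1.710, 1.724

def band_position(heavy_fraction):
    # Crude model: buoyant density interpolates linearly with the fraction of
    # heavy nitrogen carried by the sedimenting particle.
    return LIGHT + heavy_fraction * (HEAVY - LIGHT)

species = {
    "light duplex": 0.0,
    "hybrid duplex": 0.5,
    "heavy duplex": 1.0,
    # One hybrid circle interlocked with one light circle sediments as a single
    # particle whose overall heavy fraction is the average of its two parts.
    "catenated hybrid + light circles": 0.25,
}

for name, fraction in species.items():
    print(f"{name}: ~{band_position(fraction):.4f} g/cm^3")
```

On this toy model the catenated species bands roughly a quarter of the way between the light and heavy positions, that is, at none of the three positions predicted by the tested hypotheses, which is enough to see why incomplete fragmentation would have produced the less clean-cut pattern described above.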
3.4 The problematic interpretation of falsifying results A similar set of difficulties faces the interpretation of the falsifying results. It is certainly true that the Meselson–Stahl and satellite experiments did not support the conservative and dispersive hypotheses. Nevertheless, these experiments were geared towards the detection of large amounts of DNA. The experiments conclusively favoured the semi-conservative hypothesis in the sense that DNA structures compatible with a semi-conservative mode of replication were detected. The non-detection of other kinds of structures was less conclusive, and this could have been attributed either to the fact that such structures do not exist or to the fact that they are too rare to be picked up by these experiments. The converse complication is also relevant. While the limited sensitivity of an experimental set-up entails uncertainty about the correct interpretation of negative results, the detection of rare molecular structures is plagued by uncertainties about the biological relevance of these structures.4 The fact that large amounts of DNA structures compatible with the semi-conservative hypothesis were detected in dividing bacteria was taken as a clear indication that these structures are involved in DNA replication. This greatly simplified the interpretation of the experimental results, since there was no need to demonstrate the biological relevance of the detected DNA structures. That is, there was no need to show that the results were not experimental artefacts, and that the detected structures occur under physiological conditions and are specifically involved in DNA replication (as opposed to other biological activities, such as recombination and DNA repair). Yet such fortunate circumstances are quite exceptional in biology. Meselson and Stahl came close to facing a biological relevance problem when they considered the possibility that high recombination rates might complicate the interpretation of their experimental results by producing dispersive-like structures, which in turn would require further experiments to disentangle the relative contributions of replication and recombination. This realization played a role in their decision to work with Escherichia coli, rather than phages or eukaryotic cells, which proved to be an inspired choice (Holmes [2001]). Both difficulties are nonetheless illustrated by the fact that subsequent studies showed that recombination does in fact occur in E. coli and, as a result, dispersive-like DNA is produced in these cells. Furthermore, these DNA structures were discovered using techniques and an overall experimental design similar to that employed by Meselson and Stahl (Pettijohn and Hanawalt [1964]). To make the story even more complicated, ‘bacterial recombination’ turned out to be the result of a DNA repair mechanism involving DNA synthesis (rather than swapping of pre-existing strands).5 These problems target an important aspect of the crucial experiment account, namely, the claim that the Meselson–Stahl experiment provided conclusive evidence against the rival conservative and dispersive hypotheses. The experiment did not definitively rule out these hypotheses in the case of E. coli and it certainly did not rule them out simpliciter. Evidence suggesting that it is unlikely that DNA is ever replicated via alternate mechanisms came much later, and is ultimately dependent on comparative studies showing that most of the DNA replication machinery is highly conserved.
However, to this day no one can guarantee that alternate mechanisms are impossible or non-existent. 3.5 Not all possible hypotheses were considered and tested One argument for the crucial experiment account is that the number of possible mechanisms is restricted to a manageable few because any good explanation has to take into account topological constraints imposed by the chemical structure of the genetic material (Roush [2005], p. 15). In more general terms, it seems reasonable to assume that as the number of empirical and theoretical constraints increases, the number of explanations capable of satisfying them is bound to decrease—in some cases, down to only a handful of possibilities. However, just because it is possible to enumerate all possibilities does not mean that it is possible to design a crucial experiment capable of discriminating among all candidates simultaneously. Even if the total number of possibilities is relatively small, the required experimental design may still be exceedingly complex, to the point where practical difficulties outweigh the potential payoffs of a crucial experiment approach. In the case of DNA replication, there were more possibilities than those tested by the Meselson–Stahl experiment, both in terms of hypotheses about the distributions of atoms from parent to daughter DNA, and in terms of topological configurations. With respect to the latter, circular and supercoiled configurations were not taken into account. With respect to the former, it is important to keep in mind that all three hypotheses considered by Meselson and Stahl shared the assumption that multi-nucleotide stretches of the parental DNA strands are passed on to the progeny. They differed in that the semi-conservative hypothesis postulated that parental DNA is preserved as whole single strands, the conservative hypothesis postulated a complete preservation of the parental double-stranded DNA, while the dispersive hypothesis postulated that multi-nucleotide fragments of single-stranded parental DNA are preserved. What was neglected was the possibility that parental DNA is degraded and the resulting pieces are used to synthesize new DNA. This would have opened the door to a completely new class of replication mechanisms relying on non-DNA molecular structures as intermediary templates. Furthermore, within the general class of semi-conservative mechanisms, it was not necessarily the case that both parental strands serve as a template for the synthesis of new strands. Alternative, ‘master strand’ hypotheses proposed that only one parental strand is used as a template for the synthesis of both daughter strands, while the other is degraded (Kubitschek and Henderson [1966]). A crucial experiment capable of simultaneously differentiating between all of these possibilities would have required that each hypothesis make a distinct prediction. It seems doubtful that there are any tangible benefits to compensate for such a difficult-to-design and unlikely-to-succeed experiment. 3.6 The tested hypotheses were not mutually exclusive The number of candidate hypotheses is not the only difficulty standing in the way of a crucial experiment. A second complication arises when more than one hypothesis is true. This is far from trivial.
When the candidate hypotheses are not mutually exclusive, even a modest increase in their number results in a combinatorial increase in the number of possible outcomes, and it effectively invalidates any inference about the truth or falsity of a given hypothesis from evidence for the truth or falsity of alternative hypotheses. In order to better understand the issue, it is useful to consider the classical example of the corpuscular and wave theories of light (Duhem [1906], pp. 305–11). Since each theory entails different predictions about the speed of light in different media, it seemed possible to select the explanation that accounted for the observed results and at the same time to eliminate the one that failed to do so, as attempted by Foucault in his famous 1850 experiment. Duhem explicitly assumed that behind the two hypotheses tested by Foucault were two distinct models of light, each built on different premises (laws of motion applied to corpuscles versus the model of a vibrating object), relying on contradictory assumptions about key aspects of reality (light has mass under corpuscular treatment, but has no mass if treated as a wave), and postulating seemingly irreconcilable ontologies (light either consists of corpuscles or is a wave travelling through a medium). These incompatibilities justified the mutual exclusivity of the wave and corpuscular theories of light. Hence, it seemed reasonable to assume that there were only two possible experimental outcomes, and that evidence for one theory automatically counted as evidence against its rival, while lack of evidence for one theory positively supported the other. In more general terms, if candidate hypotheses are mutually exclusive, this means, first, that the number of possible outcomes is significantly reduced because it cannot be the case that more than one hypothesis is true. Second, the interpretation of the results can benefit from an epistemic leveraging whereby the same evidence supports one hypothesis while providing reasons to reject other candidates. Granted, the effectiveness of such leveraging depends in no small measure on whether all the possibilities have been considered (van Fraassen [1989], p. 143; Duhem [1906], p. 311). However, in so far as the possible hypotheses are mutually exclusive, scientists may still rely on a weaker form of disjunctive elimination compatible with probabilistic and abductive approaches to confirmation: failure to confirm one hypothesis is expected to increase the probability that the remaining conceived or unconceived alternatives are correct, while the corroboration of a hypothesis is expected to decrease the probability that other hypotheses are correct. Unfortunately, in the case of DNA replication, there were no prior reasons to believe that the three tested hypotheses were mutually exclusive. Figuring out possible distributions of atoms and topological configurations was just the physical chemistry side of the problem. However, there was still the biology part to be factored in: just because there is a limited number of possibilities in terms of replication mechanisms, this does not entail that any given organism cannot use various combinations of these mechanisms, nor does it guarantee that all living organisms will use the same mechanisms. The issue is obvious in Meselson and Stahl’s discussion of their inconclusive results with salmon sperm DNA.
Facing a failure to replicate similar results with eukaryotic DNA, they speculated not only about the possibility that the mechanisms of DNA replication might vary significantly between prokaryotes and eukaryotes, but also about potential structural differences in the genetic material itself.6 The upshot of these considerations is that the positive data for the semi-conservative explanation generated in the Meselson–Stahl experiment were insufficient for a conclusive refutation of the conservative and dispersive explanations. Rather, the experiment only demonstrated that the semi-conservative hypothesis accounts for the observed results within the limits of detection of a particular experimental set-up, involving a particular organism. Beyond these limits, or in different organisms, alternate hypotheses could have still turned out to be true. The above is just an illustration of a common, yet often overlooked, obstacle limiting the possibility of designing and conducting crucial experiments in biology. The fact that a mechanism produces a certain phenomenon in one biological system offers no firm guarantees that the same or similar mechanisms produce the same or similar phenomena in other biological systems (Bechtel [2009]; Baetu [2016a]). Subtle and not so subtle differences are omnipresent, from variations in the highly conserved mechanisms that are shared by all living things (for example, slight variations of the genetic code, or the more significant differences between prokaryotic and eukaryotic genome expression mechanisms, which make possible the use of antibiotics), to differences in the mechanisms underpinning similar biological functions (for example, acquired immunity is a shared characteristic of all jawed vertebrates, yet the mechanisms underpinning it vary across species), to differences between individuals of the same species (for example, AIDS pathogenesis in humans is variable due, in part, to resistance mediated by truncated receptors in human cells and the presence or absence of specific mechanisms of defence). The limited potential for generalizability of mechanistic explanations is compounded by a second problem, namely, that even within the same biological system, the same phenomenon may be simultaneously generated by more than one mechanism, with some mechanisms having a higher biological relevance in one situation but not another. Again, examples abound: Selection of the same phenotype may occur by means of different mechanisms in different populations of the same species; there is more than one DNA repair mechanism and more than one signal transduction pathway leading to the expression of a particular gene product; and more than one chemical mechanism may underlie any given biochemical reaction and any given chemical reaction may be catalysed by different enzymes with distinct sequences, structures, and modes of operation. Redundancy in and duplications of the components of mechanisms are also common. For example, there are several DNA and RNA polymerases in both prokaryotes and eukaryotes, all of which perform the same basic polymerase function, but in the context of different mechanisms (DNA repair versus replication) and substrates (ribosomal RNA, mitochondrial versus nuclear genes). The heterogeneity of biological systems hampers the ability to make inferences about the probable truth or falsity of one hypothesis given evidence supporting the truth or falsity of rival hypotheses.
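The contrast with the mutually exclusive case can be made vivid with a toy Bayesian calculation. The numbers below are purely illustrative and are not meant to model the actual episode; the point is structural: when candidate mechanisms are treated as mutually exclusive and jointly exhaustive, evidence for one automatically redistributes probability away from the others, whereas when mechanisms may operate side by side, evidence that one operates leaves the probability that another also operates untouched.

```python
def normalize(d):
    total = sum(d.values())
    return {k: v / total for k, v in d.items()}

# Scenario A: mutually exclusive, jointly exhaustive mechanisms M1, M2, M3.
prior = {"M1": 1 / 3, "M2": 1 / 3, "M3": 1 / 3}
likelihood = {"M1": 0.9, "M2": 0.05, "M3": 0.05}  # P(evidence | Mi), illustrative numbers
posterior = normalize({m: prior[m] * likelihood[m] for m in prior})
print("exclusive case:", posterior)  # support for M1 counts against M2 and M3

# Scenario B: non-exclusive mechanisms A and B, each of which may or may not
# operate; the evidence (detection of A-type products) depends only on A.
joint_prior = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}
def likelihood_ab(a):
    return 0.9 if a else 0.05
joint_posterior = normalize({s: joint_prior[s] * likelihood_ab(s[0]) for s in joint_prior})
p_b_operates = sum(p for (a, b), p in joint_posterior.items() if b)
print("non-exclusive case: P(B also operates | evidence for A) =", p_b_operates)  # stays at 0.5
```

In the first scenario the posterior probability of M2 and M3 drops as soon as the evidence favours M1; in the second, the probability that B also operates remains exactly where it started, which is the sense in which disjunctive leveraging fails for non-exclusive mechanistic hypotheses.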
Evidence supporting a particular mechanistic hypothesis does not automatically demonstrate that the mechanism in question is the only possible mechanism, the mechanism uniquely responsible for producing a phenomenon in a biological system, or the most prevalent mechanism in a physiologically relevant context. The evidence only supports the claim that the mechanism explains a phenomenon in the context of a specific experimental set-up. 4 The Historical and Scientific Significance of the Meselson–Stahl Experiment 4.1 The challenge for the crucial experiment account Given the above complications and difficulties, the challenge for the crucial experiment account is to show that the interpretation of the experimental results we now take for granted was justified in 1958, when Meselson and Stahl published their results. It might be argued that the additional experiments testing auxiliary assumptions are extensions of the Meselson–Stahl experiment aimed at enhancing the original experimental design in order to eliminate errors; variations of this argument can be found, in a more modest sense, in Mayo’s ([1996]; Mayo and Spanos [2009]) error-statistical account of scientific reasoning, and, in a more ambitious sense, in Marcum’s ([2007]) notion of an experimental series culminating in the elucidation of the mechanisms of DNA replication. The proposal is perfectly legitimate. After all, it is often the case that an experiment builds upon previous experiments, so that the results of the former are interpreted in light of previous experimental results. The Meselson–Stahl experiment is no exception to this rule. The 1958 paper includes data from the core experiment described in Section 2, plus a series of preparatory and auxiliary experiments required for the interpretation of the main experiment's results. Thus understood, the experiment extended over a decade or more. However, the crucial experiment account attributes all or most of the impact of the whole series of experiments to a single set of experimental results published in 1958. This kind of discrepancy led Lakatos ([1970]) and Hacking ([1983]) to argue that crucial experiment accounts are the result of retrospective reconstructions that may explain the importance we attribute to an experimental finding today, but fail to account for the significance of the finding at the time of publication. In response, Weber ([2009]) argues that there are sometimes reasons other than experimental confirmation to accept some assumptions as likely to be correct. As discussed in Section 3.2, his proposal hinges on the notion that among a given set of explanations that satisfy the relevant empirical and theoretical constraints, some explanations will be preferable to others because of their epistemic virtues, such as simplicity. My concern is that even though such epistemic virtues guide scientific research, they are not adequate surrogates for epistemic justification. If anything, the case study demonstrates that simplicity can favour false assumptions. 4.2 A mechanistic perspective on the experiment In the remaining sections of this article, I will attempt to provide an account of the historical importance of the Meselson–Stahl experiment despite uncertainty about many of the background assumptions required for the interpretation of the results and the overall fragility of the experimental circumstances. This account takes at face value the ‘official’ assessment of the significance of the experimental results, as stated by Meselson and Stahl ([1958], p. 681) in the concluding lines of their paper.
Instead of making claims about how the semi-conservative explanation was favoured against its rivals, they carefully describe the experiment in neutral terms, as providing evidence about the distribution of atoms in newly synthesized DNA, treating this evidence as one additional piece of information about the molecular mechanisms underpinning DNA replication in E. coli: The results presented here give a detailed answer to the question of this distribution [of parental atoms among progeny molecules] and simultaneously direct our attention to other problems whose solution must be the next step in progress toward a complete understanding of the molecular basis of DNA duplication. What are the molecular structures of the subunits of E. coli DNA which are passed on intact to each daughter molecule? What is the relationship of these subunits to each other in a DNA molecule? What is the mechanism of the synthesis and dissociation of the subunits in vivo? ([1958], p. 681) Two important points are explicitly made in the above quotation. First, the experiment was part of a broader research project aiming to provide a detailed, step-by-step description of the mechanism or mechanisms underpinning DNA replication (Figure 2, top).7 From the perspective of biochemistry and molecular biology, the ultimate goal was not to determine if DNA replication is semi-conservative, conservative, or dispersive. These terms refer to preliminary mechanism sketches, as understood by Machamer et al. ([2000])—that is, to incomplete descriptions of putative mechanisms. The immediate debate was about the chemical reactions involved in DNA replication. The ultimate goal was an elucidation of the mechanisms of DNA replication. The three main hypotheses considered in the scientific literature at the time had direct implications for the reactants involved in DNA synthesis, postulating a role for parental DNA as a direct template for the synthesis of new DNA. They disagreed about how it would act as a template, namely, whether it would act as double-stranded DNA, single-stranded DNA, or fragments of single-stranded DNA (Figure 2, middle). Knowledge about how parent DNA nitrogen atoms are redistributed in the progeny DNA (which proportion of them, and to which of the two strands of the progeny DNA) was meant to provide the first clues about the chemical reactions involved in DNA replication. In turn, information about these chemical reactions was needed to gain some insights into the chemical building blocks, metabolic requirements, and enzymatic machinery required to carry out the reactions. These details were essential to the elucidation of the mechanism or mechanisms of DNA replication (Figure 2, bottom). Figure 2. Hypotheses guiding research and the subsequent elucidation of the mechanistic details. From the perspective of a mechanistic project, positive or corroborating data are much more valuable than negative or falsifying data. Given that mechanistic explanations are seldom mutually exclusive (Section 3.6), showing that a phenomenon is not produced by a given mechanism in one experimental set-up does not automatically tell us that the phenomenon cannot be produced by that mechanism in some other set-up. Moreover, it usually doesn’t reveal anything about how the phenomenon is actually produced.
On the other hand, positive evidence for one, two, or all three hypotheses considered by Meselson and Stahl would have provided data needed to elucidate the biochemistry of DNA replication. As argued earlier, Meselson and Stahl relied on a semi-quantitative detection technique that did not demonstrate that replication is exclusively semi-conservative (Section 3.4). It is interesting to note that neither Meselson and Stahl nor other research groups, including rivals, felt the need to conduct more sensitive experiments aimed at demonstrating that DNA is never replicated via a conservative or dispersive mechanism. I take this to be an indication that the relevant result was not the definitive rejection of alternate hypotheses, but the positive evidence that the genomic material is replicated in biologically significant proportions via a semi-conservative mechanism. That was all that was needed to proceed to the next step of the investigation, namely, the elucidation of the chemistry underlying DNA synthesis and the molecular machinery responsible for replicating DNA. From this perspective, we wouldn’t think any less of the Meselson–Stahl experiment had it turned out that DNA is also replicated via a dispersive, conservative, or some other kind of mechanism. 4.3 The experimental design and its independence from theoretical speculations The second point to note in the conclusion of the 1958 paper is that, technically speaking, the experiment tracked what went into and what came out of a series of chemical reactions. One of the most important scientific achievements of Meselson and Stahl was the development of experimental techniques capable of distinguishing between the inputs and outputs of reactions involving DNA, thus making it possible to investigate not only DNA replication, but many other mechanisms involving nucleic acids, such as DNA repair, recombination, and transcription. More specifically, the experiment provided information about the ‘distribution of parental atoms among progeny molecules’ by comparing the relative densities of isotope-labelled and -unlabelled DNA species extracted from dividing bacteria. Thus, the experimental results were statements of the form ‘approximately x% of parental N atoms are passed to one or both daughter DNA strands in the nth cycle of replication’. The testing of the three main hypotheses circulating in the literature at the time was incorporated into the experimental design in a second step. Meselson and Stahl took advantage of the fact that for select values of the variables in the general form of the experimental results, the results could match predictions made by these hypotheses. However, an unambiguous one-to-one mapping of experimental results to predictions would obtain only if certain assumptions were correct. The most significant of these was the simplifying assumption that DNA replication proceeds primarily via a single mechanism. This assumption served a very important purpose, namely, it created a disjunction in the space of possible explanations, analogous to that between the corpuscular and wave theories of light in Duhem’s example. Ignoring for a moment the additional foreseen and unforeseen complications that arise from experimental procedures, DNA topologies, and the possibility of additional replication mechanisms, the Meselson–Stahl experiment comes closest to the ideal of a crucial experiment in respect to a rather interesting strategy of ‘holistic confirmation’ (Baetu [2013], [2016a]). 
The general idea goes as follows: if three possible outcomes are predicted in conjunction with the auxiliary assumption that replication proceeds via only one mechanism, then the fact that one of the three predicted outcomes was observed (as opposed to an unpredicted banding pattern) supports not only the hypothesis that predicted it, but also the auxiliary assumption necessary for its prediction.8 As discussed in Section 3, a number of complications blurred an unambiguous one-to-one mapping of results and predictions, thus mitigating the success of the strategy and undermining the crucial experiment account. Notwithstanding this, holistic confirmation did pay off in the sense that it succeeded in establishing a link between theoretically motivated hypotheses about DNA replication and the data provided by a labelling experiment. This link was eventually strengthened by additional experiments, until the gap between the results of the Meselson–Stahl experiment and the prediction of the semi-conservative hypothesis was finally bridged. The crucial experiment account focuses on the link between the results of the Meselson–Stahl experiment and previously proposed hypotheses about DNA replication. In doing so, it emphasizes a ‘top-down’ scientific methodology whereby researchers begin by theorizing, then derive predictions in order to test their hypotheses. It is not my intention to deny the importance of this link for the overall mechanistic project. Rather, my complaint is that the crucial experiment account downplays the fragility of the top-down link and its dependence on many unproven assumptions, while obscuring the fact that the experiment would have produced data about the biochemistry of DNA replication independently of the testing of any pre-existing hypotheses about DNA replication. In parallel with questions about what predictions would be entailed by theoretical speculations, Meselson and Stahl engaged in another, more general and robust ‘bottom-up’ experimental project: they asked whether it would be possible to distinguish experimentally between the inputs and outputs of the chemical reactions underlying DNA replication. As pointed out earlier, they succeeded in designing an experiment yielding results of the form ‘approximately x% of parental N atoms are passed to one or both daughter DNA strands in the nth cycle of replication’. Such statements already amounted to meaningful information about the biochemistry of DNA replication. Thus, the scope of the experiment was much wider than the testing of the three main hypotheses available at the time, and it could have succeeded in revealing something about the reactants involved in DNA replication, even if none of these hypotheses had turned out to be true. As for the scope of the general experimental design, it extended even further. Not only were higher-resolution versions of the same experimental design used to test some of the auxiliary assumptions required to conclusively support the semi-conservative hypothesis, they were also used to investigate other mechanisms involving nucleic acids (Davis [2004]; Hanawalt [2004]). 4.4 The advantages of a bottom-up mechanistic account If my reconstruction of the experimental design is correct, then it is more accurate to describe the Meselson–Stahl experiment as generating data about the biochemistry of DNA replication. The data were compatible with, and in conjunction with other experiments conclusively supported, the semi-conservative hypothesis put forward by Watson and Crick.
This account offers some important advantages over the crucial experiment account. First, it can explain the historical and scientific significance of the Meselson–Stahl experiment without having to argue that the assumptions required by the crucial experiment account were somehow justified in 1958, despite the fact that they only later received experimental support. The significance of the experiment lies primarily in the techniques it employed and their discovery potential, as is often emphasized in scientific appraisals of the experiment. Second, my account has the advantage of aligning the Meselson–Stahl experiment with the general practice of biochemistry and molecular biology, instead of treating it as a peculiar experiment more akin to those conducted in physics. A discovery process involving a progression from the description of a phenomenon to an initial proposal of a mechanistic sketch, and then to the elucidation of the missing mechanistic details, as depicted in Figure 2, is characteristic of research in the life sciences (Bechtel [2006]; Darden [2006]; Craver [2007]; Bechtel and Richardson [2010]; Craver and Darden [2013]). Third, this approach takes into account the end-point targeted by the scientists involved in the debate. The crucial experiment account places an undue emphasis on the proposal and testing of mechanistic sketches, while obscuring both the experimental achievements and the fact that the debate was about the biochemistry of DNA synthesis. Finally, a bottom-up mechanistic account accurately matches the interpretation of the experimental results Meselson and Stahl themselves felt confident and justified in presenting to the scientific community. 5 Conclusion There are undoubtedly limitations to what can be concluded about the role of crucial experiments in science at large based on the particular case of the Meselson–Stahl experiment. I certainly do not mean to suggest that there are no examples of reasonably tight crucial experiments or that such experiments cannot be conducted as a matter of principle. On the contrary, what emerged from my analysis is that under-determination issues can be kept under control. The analysis also revealed that certain factors are likely to facilitate or obfuscate attempts to design and conduct crucial experiments. One such factor is the presence (or absence) of theoretical rationales justifying the partitioning of the space of possible explanations into mutually exclusive scenarios. Such a partitioning is expected to have a rather sizeable positive (or negative) impact in terms of pruning down the number of possible experimental outcomes to a minimum. It is also expected to provide some measure of disjunctive leveraging, whereby evidence for one hypothesis counts as evidence against its rivals, while lack of evidence for rivals counts as evidence for the remaining hypotheses. Unfortunately, research in biological sciences is more likely to fall under the negative scenario, given that mechanistic explanations are not mutually exclusive. A second factor hinges on the possibility of harnessing positive results in order to holistically confirm a hypothesis along with the auxiliary assumptions that are jointly required to make a correct prediction. The Meselson–Stahl experiment demonstrates that such harnessing can be put to good use, although some caveats are easily foreseeable. 
The strategy is unlikely to work when many auxiliary assumptions are involved, and it is likely to fail if distinct sets of auxiliary assumptions yield the same prediction in conjunction with the tested hypothesis. Finally, crucial experiments are associated with a top-down methodological approach in which theory guides experiment. However, biological mechanisms are often elucidated in a bottom-up fashion, by building causal networks and then possible mechanisms from experimental results about causally relevant factors (Craver [2007]; Baetu [2012], [2016b]; Woodward [2013]). In such a scenario, there is neither much opportunity nor any genuine need to carry out crucial experiments.

Footnotes

1 'The results of the present experiment are in exact accord with the expectations of the Watson–Crick model for DNA duplication. However, it must be emphasized that it has not been shown that the molecular subunits found in the present experiment are single polynucleotide chains or even that the DNA molecules studied here correspond to single DNA molecules possessing the structure proposed by Watson and Crick' (Meselson and Stahl [1958], p. 678).

2 Delbrück ([1954]) also attempted to provide a non-empirical justification based on topological considerations. He correctly predicted the supercoiling of the unwinding DNA helix as it is replicated and argued that a dispersive mechanism of DNA replication is necessary in order to release the tension building up in the replicating DNA, postulating, among other things, the existence of 'nicks' in the DNA that allow it to untwist. In the end, Delbrück's dispersive mechanism was abandoned, yet key elements of his account were incorporated into the mechanism of DNA replication as we understand it today, most notably the coiling of the DNA during replication and the cleavage of DNA, along with other means of releasing the coiling tension that builds up during replication (see details in Figure 2, bottom).

3 More precisely, Weber ([2009], p. 38) frames simplicity in terms of an absence of mechanisms other than the ones postulated by the hypothesis in question: 'alternative mechanisms would require add-on mechanisms or "epicycles" in order to explain the Meselson–Stahl data'.

4 In such cases, researchers rely on experiments demonstrating that interventions on molecular structures have an effect on biological activities (Baetu [2012]; Craver [2007]; Woodward [2013]). Perhaps the best-known varieties of this kind of experiment are 'knock-out' and 'transgenic' experiments, whereby the elimination or addition of a particular molecular component results in a loss or gain of biological activity.

5 Experiments relying on different techniques and experimental set-ups also revealed the existence of a variety of inter- and intra-molecular DNA secondary structures, including quadruplex DNA (Keniry [2001]). The physiological relevance of quadruplex DNA is still debated, although it may play a role in regulating and facilitating DNA replication (Bochman et al. [2012]).

6 'On the one hand, if we assume that salmon DNA contains subunits analogous to those found in E. coli DNA, then we must suppose that the subunits of salmon DNA are bound together more tightly than those of the bacterial DNA. On the other hand, if we assume that the molecules of salmon DNA do not contain these subunits, then we must concede that the bacterial DNA molecule is a more complex structure than is the molecule of salmon DNA.
The latter interpretation challenges the sufficiency of the Watson–Crick DNA model to explain the observed distribution of parental nitrogen atoms among progeny molecules' (Meselson and Stahl [1958], p. 681).

7 It has been argued that some of the most successful explanations in the life sciences amount to descriptions of mechanisms (Bechtel [2006], [2008]; Craver [2007]; Darden [2006]; Wimsatt [1972]). Machamer et al. ([2000], p. 3) define mechanisms as 'entities and activities organized such that they are productive of regular changes from start or set-up to finish or termination conditions'.

8 A more ambitious version achieves the holistic confirmation of two auxiliary assumptions: if three possible outcomes are predicted in conjunction with the auxiliary assumptions that parental DNA is not degraded during replication and that replication proceeds via only one mechanism, then the fact that a semi-conservative outcome was observed supported not only the hypothesis that predicted it, but also the two auxiliary assumptions necessary for the prediction. In contrast, if a dispersive pattern had been observed, the results would have been inconclusive, since it was possible that the banding pattern was produced by dispersive replication and/or alternative mechanisms involving the degradation of the parental DNA.
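The inferential asymmetry described in footnote 8 can be made explicit with a small enumeration. The sketch below is my own toy illustration, not part of the original article: it idealizes the second-generation banding predictions and treats degradation of parental DNA as a single binary auxiliary assumption, so the pattern labels and the hypothetical predicted_pattern function are simplifications introduced here.

```python
# A toy enumeration (illustration only) of the point made in footnote 8:
# an idealized second-generation banding pattern is computed for each
# combination of replication hypothesis and the auxiliary assumption about
# degradation of parental DNA, and observations are matched against them.

from itertools import product

HYPOTHESES = ("semi-conservative", "conservative", "dispersive")
DEGRADED = (False, True)  # auxiliary: is parental DNA degraded and its label redistributed?

def predicted_pattern(hypothesis: str, degraded: bool) -> str:
    # Simplification: if parental DNA is degraded and reincorporated, the
    # label ends up spread over all molecules, mimicking a dispersive pattern
    # regardless of the replication mechanism.
    if degraded:
        return "single intermediate band"
    return {
        "semi-conservative": "half-heavy band + light band",
        "conservative": "heavy band + light band",
        "dispersive": "single intermediate band",
    }[hypothesis]

def consistent_with(observation: str) -> list[tuple[str, bool]]:
    return [(h, d) for h, d in product(HYPOTHESES, DEGRADED)
            if predicted_pattern(h, d) == observation]

print(consistent_with("half-heavy band + light band"))
# [('semi-conservative', False)] -> hypothesis and auxiliary jointly confirmed
print(consistent_with("single intermediate band"))
# [('semi-conservative', True), ('conservative', True),
#  ('dispersive', False), ('dispersive', True)] -> inconclusive
```

On this toy picture, the half-heavy plus light pattern is consistent with exactly one hypothesis and auxiliary combination, whereas a single intermediate band leaves several combinations open, which is why a dispersive-looking outcome would have been inconclusive.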
Acknowledgements

I would like to thank Richard Dawid, Laura Nuño de la Rosa, Dan Nicholson, Katinka Quintelier, as well as two anonymous reviewers for helpful discussion and comments on earlier drafts. This work was in part supported by a generous fellowship from the Konrad Lorenz Institute for Evolution and Cognition Research.

References

Baetu T. M. [2012]: 'Filling in the Mechanistic Details: Two-Variable Experiments as Tests for Constitutive Relevance', European Journal for Philosophy of Science, 2, pp. 337–53.
Baetu T. M. [2013]: 'Chance, Experimental Reproducibility, and Mechanistic Regularity', International Studies in History and Philosophy of Science, 27, pp. 255–73.
Baetu T. M. [2016a]: 'The "Big Picture": The Problem of Extrapolation in Basic Research', British Journal for the Philosophy of Science, 67, pp. 941–64.
Baetu T. M. [2016b]: 'From Interventions to Mechanistic Explanations', Synthese, 193, pp. 3311–27.
Baldwin R. L., Shooter E. M. [1963]: 'The Alkaline Transition of BU-Containing DNA and Its Bearing on the Replication of DNA', Journal of Molecular Biology, 7, pp. 511–26.
Bechtel W. [2006]: Discovering Cell Mechanisms: The Creation of Modern Cell Biology, Cambridge: Cambridge University Press.
Bechtel W. [2008]: Mental Mechanisms: Philosophical Perspectives on Cognitive Neuroscience, New York: Routledge.
Bechtel W. [2009]: 'Generalization and Discovery by Assuming Conserved Mechanisms: Cross-Species Research on Circadian Oscillators', Philosophy of Science, 76, pp. 762–73.
Bechtel W., Richardson R. [2010]: Discovering Complexity: Decomposition and Localization as Strategies in Scientific Research, Cambridge, MA: MIT Press.
Bloch D. P. [1955]: 'A Possible Mechanism for the Replication of the Helical Structure of Desoxyribonucleic Acid', Proceedings of the National Academy of Science, 41, pp. 1058–64.
Bochman M. L., Paeschke K., Zakian V. A. [2012]: 'DNA Secondary Structures: Stability and Function of G-Quadruplex Structures', Nature Reviews Genetics, 13, pp. 770–80.
Cavalieri L. F., Rosenberg B. H., Deutsch J. F. [1959]: 'The Subunit of Deoxyribonucleic Acid', Biochemical and Biophysical Research Communications, 1, pp. 124–8.
Craver C. [2007]: Explaining the Brain: Mechanisms and the Mosaic Unity of Neuroscience, Oxford: Oxford University Press.
Craver C., Darden L. [2013]: In Search of Biological Mechanisms: Discoveries across the Life Sciences, Chicago, IL: University of Chicago Press.
Darden L. [2006]: Reasoning in Biological Discoveries: Essays on Mechanisms, Interfield Relations, and Anomaly Resolution, Cambridge: Cambridge University Press.
Davis T. H. [2004]: 'Meselson and Stahl: The Art of DNA Replication', Proceedings of the National Academy of Science, 101, pp. 17895–96.
Delbrück M. [1954]: 'On the Replication of Deoxyribonucleic Acid', Proceedings of the National Academy of Science, 40, pp. 783–8.
Duhem P. [1906]: The Aim and Structure of Physical Theory, Princeton, NJ: Princeton University Press.
Franklin A. [2007]: 'The Role of Experiments in the Natural Sciences: Examples from Physics and Biology', in Kuipers T. (ed.), General Philosophy of Science: Focal Issues, Amsterdam: Elsevier, pp. 275–302.
Hacking I. [1983]: Representing and Intervening, Cambridge: Cambridge University Press.
Hanawalt P. C. [2004]: 'Density Matters: The Semiconservative Replication of DNA', Proceedings of the National Academy of Science, 101, pp. 17889–94.
Holmes F. L. [2001]: Meselson, Stahl, and the Replication of DNA: A History of the Most Beautiful Experiment in Biology, New Haven, CT: Yale University Press.
Keniry M. A. [2001]: 'Quadruplex Structures in Nucleic Acids', Biopolymers, 56, pp. 123–46.
Kubitschek H. E., Henderson T. R. [1966]: 'DNA Replication', Proceedings of the National Academy of Science, 55, pp. 512–19.
Lakatos I. [1970]: 'Falsification and the Methodology of Scientific Research Programmes', in Lakatos I., Musgrave A. (eds), Criticism and the Growth of Knowledge, Cambridge: Cambridge University Press.
Machamer P., Darden L., Craver C. [2000]: 'Thinking about Mechanisms', Philosophy of Science, 67, pp. 1–25.
Marcum J. A. [2007]: 'Experimental Series and the Justification of Temin's DNA Provirus Hypothesis', Synthese, 154, pp. 259–92.
Mayo D. G. [1996]: Error and the Growth of Experimental Knowledge, Chicago, IL: University of Chicago Press.
Mayo D. G., Spanos A. [2009]: Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, Cambridge: Cambridge University Press.
Meselson M., Stahl F. W. [1958]: 'The Replication of DNA in Escherichia coli', Proceedings of the National Academy of Science, 44, pp. 671–82.
Pettijohn D., Hanawalt P. C. [1964]: 'Evidence for Repair-Replication of Ultraviolet Damaged DNA in Bacteria', Journal of Molecular Biology, 9, pp. 395–410.
Putnam H. [1991]: 'The "Corroboration" of Theories', in Boyd R., Gasper P., Trout J. D. (eds), The Philosophy of Science, Cambridge, MA: MIT Press.
Quine W. V. [1951]: 'Two Dogmas of Empiricism', Philosophical Review, 60, pp. 20–43.
Rolfe R. [1962]: 'The Molecular Arrangement of the Conserved Subunits of DNA', Journal of Molecular Biology, 4, pp. 22–30.
Roush S. [2005]: Tracking Truth: Knowledge, Evidence, and Science, Oxford: Oxford University Press.
Taylor J. H., Woods P. S., Hughes W. L. [1957]: 'The Organization and Duplication of Chromosomes as Revealed by Autoradiographic Studies Using Tritium-Labeled Thymidine', Proceedings of the National Academy of Science, 43, pp. 122–7.
Ts'o P. O. P. [1974]: Basic Principles in Nucleic Acid Chemistry, Volume 2, London: Academic Press.
van Fraassen B. C. [1989]: Laws and Symmetry, New York: Oxford University Press.
Watson J. D., Crick F. H. [1953]: 'Genetical Implications of the Structure of Deoxyribonucleic Acid', Nature, 171, pp. 964–7.
Weber M. [2009]: 'The Crux of Crucial Experiments: Duhem's Problems and Inference to the Best Explanation', British Journal for the Philosophy of Science, 60, pp. 19–49.
Wimsatt W. C. [1972]: 'Complexity and Organization', in Schaffner K. F., Cohen R. S. (eds), Proceedings of the Philosophy of Science Association, Dordrecht: Reidel, pp. 67–86.
Woodward J. [2013]: 'Mechanistic Explanation: Its Scope and Limits', Proceedings of the Aristotelian Society, 87, pp. 39–65.
