The Reality of Jean Perrin's Atoms and Molecules

Abstract

Jean Perrin’s proof in the early twentieth century of the reality of atoms and molecules is often taken as an exemplary form of robustness reasoning, where an empirical result receives validation if it is generated using multiple experimental approaches. In this article, I describe in detail Perrin’s style of reasoning, and locate both qualitative and quantitative forms of argumentation. In particular, I argue that his quantitative style of reasoning has mistakenly been viewed as a form of robustness reasoning, whereas I believe it is something different, what I call ‘calibration’. From this perspective, I re-evaluate recent interpretations of Perrin provided by Stathis Psillos, Peter Achinstein, Alan Chalmers, and Bas van Fraassen, all of whom read Perrin as a robustness reasoner, though not necessarily in the same sort of way. I then argue that by viewing Perrin as a ‘calibration’ reasoner we gain a better understanding of why he believes himself to have established the reality of atoms and molecules. To conclude, I provide an alternative and more productive understanding of the basis of the dispute between realists and anti-realists.

1 Introduction
2 Perrin’s Reasoning: The Qualitative Argument
3 Perrin’s Reasoning: The Quantitative Argument
4 Perrin’s Realism
5 Psillos, Achinstein, Chalmers, and van Fraassen on Understanding Perrin
6 Conclusion

1 Introduction

Scientific realism is the view that we are justified in believing that the unobservables referred to in our best scientific theories are real. Scientific anti-realism is the view that we are not justified in believing that the unobservables referred to in our best scientific theories are real, but rather only in believing that the postulation of unobservables in these theories is empirically supported.
In the philosophy of science a great deal of effort is spent identifying empirical or theoretical strategies that could vindicate a realist attitude towards unobservables. Sometimes it is said that novel predictive success is a sign that a theory referring to unobservables should be interpreted realistically; sometimes the ability of postulated unobservables to provide the ‘best explanation’ for an empirical phenomenon indicates that these unobservables are real. In this article, the realist strategy we are considering is one where empirical results indicate the reality of unobservables if these results are generated by a variety of empirical methodologies. This is the strategy of ‘robustness reasoning’. Does successful robustness reasoning provide a strong argument for the reality of an unobservable? This is a large question we cannot fully address here. But there is a famous case in the history of science where robustness reasoning was apparently critical to a scientist’s judgement that he had demonstrated the reality of unobservables: the case of Jean Perrin’s reasoning on behalf of the reality of atoms and molecules. What Perrin did was find the same value for Avogadro’s number using a variety of experimental sources. These sources dealt with such diverse phenomena as the viscosity of gases, Brownian motion, critical opalescence, blackbody radiation, and radioactivity, among others. In reflecting on this convergence of results, Perrin ([1916], pp. 206–7) comments: Our wonder is aroused at the very remarkable agreement found between values derived from the consideration of such widely different phenomena. Seeing that not only is the same magnitude obtained by each method when the conditions under which it is applied are varied as much as possible, but that the numbers thus established also agree among themselves, without discrepancy, for all the methods employed, the real existence of the molecule is given a probability bordering on certainty. 
For many philosophers and historians of science, this passage is representative of an exemplary form of robustness reasoning that was pivotal early in the twentieth century in establishing beyond doubt the reality of molecules. The views expressed by Henri Poincaré in this regard are typical: The brilliant determinations of the number of atoms computed by Mr. Perrin have completed the triumph of atomism. What makes it all the more convincing are the multiple correspondences between results obtained by entirely different processes […] The atom of the chemist is now a reality. ([1913], p. 90; quoted in Psillos [2011b], p. 355) In this article we attempt to gain some clarity about the nature of Perrin’s reasoning in his Brownian Movement and Molecular Reality ([1910]) and Atoms ([1916]) in which he is led to the conclusion that molecules exist. There are indications in Peter Achinstein ([2001]) and Stathis Psillos ([2011a], [2011b], [2014]) that Perrin is engaged in a different form of reasoning, though neither philosopher is able to fully detach himself from the view that Perrin is reasoning robustly. By comparison, Alan Chalmers ([2011]) fully endorses the sort of robustness interpretation suggested by Poincaré. Drawing on insights presented by Achinstein and Psillos, I argue that Perrin’s form of argument is, to begin with, qualitative. In essence, Perrin creates a model of a gas using an emulsion containing small particles exhibiting Brownian motion. If this model is roughly accurate, then just as the density of a gas diminishes exponentially at higher altitudes, so should the density of an emulsion diminish exponentially—which is what we find. Hence, by this qualitative comparison of vertical distributions in a gas and in an emulsion, Perrin concludes that molecules in a gas are real and exhibit Brownian motion just as emulsive granules exist and exhibit Brownian motion. 
Perrin also uses a quantitative form of argument that focuses on the specific value of N, Avogadro’s number, and what an accurate determination of this number means for the claim that atoms and molecules are real. Psillos and Achinstein interpret this argument as a robustness argument on behalf of an enhanced atomic hypothesis, one that incorporates explicitly a statement of the value of Avogadro’s number. However, as I argue, Perrin’s quantitative argument should not be viewed as a robustness argument but rather as involving a different form of reasoning that I call ‘calibration’, the use of which I explain. Subsequent to this review of the form of Perrin’s reasoning, I turn to the question whether Perrin’s work, alternatively construed as I have suggested, provides a basis for realism about atoms and molecules. I argue that it does, in contrast with Bas van Fraassen’s ([2009], p. 23) contention that Perrin’s achievement can be understood as simply providing ‘empirical grounding for all [the] significant parameters’ of a kinetic theory of atoms, and not necessarily as leading to a realist attitude about molecules. Here, my argument for realism about atoms and molecules, as based on Perrin’s work, is distinguishable from the sort of realist approaches advanced by Psillos, Achinstein, and Chalmers. For each of these three, realism about molecules (à la Perrin) is justified on the grounds that Perrin’s experimental strategies are strong enough to compel a belief in the reality of molecules. As I see it, such a ‘strong enough’ argument for realism is ultimately ineffective since anti-realists such as van Fraassen can always legitimately ask for more proof when dealing with unobservables, given the inherent limitations on inductive, ampliative reasoning. By contrast, my argument for realism de-emphasizes the issue of how strong evidence for a theoretical claim needs to be, where strength is interpreted in probabilistic terms. 
It looks instead to the prospects for a theoretical development of one’s understanding of an unobservable. In short, to be a realist about an unobservable one must have a suitably detailed (and of course, empirically supported) theoretical picture of this entity, a picture that is substantive enough to give one a clear idea about the content of one’s realist convictions. In this sense, Perrin’s contribution was to provide empirical grounding for a detailed theoretical picture of the properties of molecules. As I argue, this detail made a conviction in molecules legitimate, and in the absence of a similarly detailed and confirmed picture of the structure of the world as fundamentally continuous, a realist interpretation of atoms and molecules, one where nature is essentially discontinuous, acquires ‘a probability bordering on certainty’.

2 Perrin’s Reasoning: The Qualitative Argument

In the quote above from Perrin’s Atoms, it is claimed that arriving at a univocal value for Avogadro’s number is connected to a belief in the reality of molecules, but the connection here is unclear. Avogadro’s hypothesis is the thesis that ‘equal volumes of different gases, under the same conditions of temperature and pressure, contain equal numbers of molecules’ (Perrin [1916], p. 18). The surprising aspect of Avogadro’s hypothesis is the implication that two gases containing the same number of molecules under similar conditions will exert the same pressure, regardless of the size or mass of the molecules making up the gas. From here we need to standardize the notion of the number of molecules in a gas. This leads to Avogadro’s number, abbreviated N, which for Perrin is the number of molecules in 32 g of oxygen (what Perrin calls a ‘gramme molecule’, or what we call a ‘mole’, of oxygen). By Avogadro’s hypothesis, a mole of any gas exerts the same pressure as a mole of any other gas, under similar temperatures and volumes. 
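This implication of Avogadro’s hypothesis can be made concrete with the ideal gas law, p = nRT/V, on which the kinetic account relies. A minimal numerical sketch (the temperature and volume chosen are standard illustrative values, not Perrin’s):

```python
# Avogadro's hypothesis via the ideal gas law p = n*R*T/V: a mole of any gas,
# at the same temperature and in the same volume, exerts the same pressure,
# regardless of the mass of its molecules.

R = 8.314        # gas constant, J/(mol*K)
T = 273.15       # temperature, K (0 deg C; illustrative)
V = 22.414e-3    # volume, m^3 (22.414 L, the classical molar volume)

def pressure(n_moles, temperature, volume):
    """Ideal-gas pressure in pascals."""
    return n_moles * R * temperature / volume

p_oxygen = pressure(1.0, T, V)    # one gramme molecule (32 g) of oxygen
p_hydrogen = pressure(1.0, T, V)  # one gramme molecule (2 g) of hydrogen
print(p_oxygen == p_hydrogen)     # True: identical pressures, roughly 1 atm
```

The molecular mass never enters the calculation, which is exactly the surprising feature the text notes.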
So, what’s the relevance of N to the truth or falsity of the claim that atoms and molecules exist? Perrin attempts a number of experimental determinations of N, as noted above. However, one particular method for Perrin stands out as the most reliable: microscopic observations of the vertical displacement of small particles suspended in a liquid (the resultant liquid is called an emulsion), where such particles exhibit Brownian motion. Why does Perrin emphasize this approach? Brownian motion is the continual, random motion of particles suspended in a fluid, analogous (though not identical) to the motion of particles of dust suspended in sunlight. Brownian particles move irregularly, unconnected to one another, and during Perrin’s time it was well known that these movements have no external explanation, such as environmental vibrations or thermal convection currents. It was hypothesized by Louis-Georges Gouy that Brownian motion could be caused by the irregular motions of the molecules in the fluid (see Perrin [1910], p. 5, referring to Gouy [1888]). Overall, Perrin ([1910], p. 7) finds Gouy’s explanation convincing, but claims we need a more ‘direct’ proof of molecules: Instead of taking this hypothesis ready made and seeing how it renders account of the Brownian movement, it appears preferable to me to show that, possibly, it is logically suggested by this phenomenon alone, and this is what I propose to try. Again, he says, However seductive the hypothesis may be that finds the origin of the Brownian movement in the agitation of the molecules, it is nevertheless a hypothesis only […] I have attempted […] to subject the question to a definite experimental test that will enable us to verify the molecular hypothesis as a whole. ([1916], p. 
88) What Perrin is suggesting is that an inference to the best explanation argument on behalf of the reality of molecules—however ‘seductive’—is not logically direct enough, and he proposes instead the following experimental strategy. To begin, he cites common knowledge that gas in a vertical column is more rarefied higher in the column than lower down, and notes that the rate of rarefaction proceeds exponentially according to the following equation. Following Perrin ([1916], p. 90), where p is the pressure of a gas at a lower elevation, p′ is the pressure at a higher elevation, M is the mass of a gram molecule of the gas, g is the acceleration due to gravity, h is the difference in elevation, R is the gas constant, and T the absolute temperature, p′ = p(1 − (M·g·h)/RT). (1) Rewriting Equation (1) in terms of the numbers of molecules at the two levels (n and n′), we have: n′ = n(1 − (N·m·g·h)/RT), (2) where m is the mass of a molecule of gas and N is Avogadro’s number (from Achinstein [2001], p. 245; Perrin assumes, but never explicitly formulates, Equation (2)). Perrin then sets up a model of an atmosphere using uniform emulsions (that is, where the suspended particles are uniform in size), one where the component particles are observed to exhibit Brownian motion. At this stage, Perrin makes a crucial assumption—we call it the ‘kinetic model’—to view the movements of the particulates in an emulsion as though they were large-sized molecules. He says, ‘It appeared to me at first intuitively, that the granules of such an emulsion should distribute themselves as a function of the height in the same manner as the molecules of a gas under the influence of gravity’ ([1910], p. 23), and, in a series of rhetorical questions, Is it not conceivable, therefore, that there may be no limit to the size of the atomic assemblages that obey [the gas] laws? 
Is it not conceivable that even visible particles might still obey them accurately, so that a granule agitated by the Brownian movement would count neither more nor less than an ordinary molecule with respect to the effect of its impact upon a partition that stops it? In short, is it impossible to suppose that the laws of perfect gases may be applicable even to emulsions composed of visible particles? ([1916], p. 89) On the basis of this model, Perrin ([1916], p. 89) identifies what he calls ‘crucial experiments that should provide a solid experimental basis from which to attack or defend the Kinetic Theory’, the theory that gases are composed of molecules, and that these molecules exhibit Brownian motion. In one such crucial experiment, Perrin suggests that with an emulsion we can calculate theoretically, using his kinetic model, what the vertical distribution of the emulsive particles should be, analogously to how we can calculate the vertical distribution of molecules in a gas at differing heights. The resultant equation expressing this distribution is as follows, where we are considering the number of particles n in an emulsion at a lower elevation as compared to the number n′ at a higher elevation, and where we take into account the buoyancy of the liquid constituting the emulsion by means of the factor (1 − d/D), with d standing for the density of the liquid and D the density of the emulsive particles: n′ = n(1 − (N·m·g·h(1 − d/D))/RT), (3) where m is the mass of each particle (the particles are assumed to be uniform in size), and N is (again) Avogadro’s number (Perrin [1916], p. 93). In general terms, what Equation (3) is telling us is that in emulsions—if the kinetic model is correct—we should observe at increased elevations an exponential decline in (osmotic) pressure, or in the numbers of granules, just as we find such an exponential decrease in pressure, or in the numbers of molecules, with gases. As Perrin ([1910], p. 
41) says, ‘if our kinetic theory is exact, this [vertical] distribution [in the emulsion] will change from the time the preparation is left at rest, will attain a limiting state, and in this state the concentration will decrease in an exponential manner as a function of the height’. On the other hand, if we don’t find in emulsions an exponential decline in pressure or in the numbers of particles, then either the kinetic model is mistaken (that is, one should not treat granules in an emulsion as large-sized molecules), or, if the kinetic model is retained, then since the Brownian motion of emulsive particles does not support an exponential decline in the osmotic pressure of an emulsion, neither does the Brownian motion of molecules support or explain the exponential decline of gas pressure at higher altitudes. It is worth noting here that Perrin ([1910], p. 24) provides a different equation representing the state of an emulsion (see Psillos [2011a], p. 180, [2011b], p. 353, [2014], p. 152, as well as Chalmers [2011]): where W is the mean granular energy, ϕ the volume of each granule, Δ the density of each granule, δ the density of the intergranular liquid, g the gravitational acceleration, and n and n0, respectively, the number of granules per unit volume at two vertical levels separated by height h, we have (2/3)·W·log(n0/n) = ϕ(Δ − δ)·g·h. (4) This equation Perrin calls the ‘equation of distribution of the emulsion’, and of particular note is its utilization of the variable W, the mean kinetic energy of the emulsive particles. As with Equation (3), Equation (4) is telling us—again, if the kinetic model is correct—that at higher altitudes there should be a corresponding exponential decrease in the concentration (or number) of emulsive granules, an exponential decrease that, Perrin ([1910], p. 24, footnote) says, ‘is a necessary consequence of the equipartition of energy’. 
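To see how Equation (4) puts a number on N, note that substituting the equipartition value W = 3RT/2N and solving gives N = RT·log(n0/n) / (ϕ(Δ − δ)gh). A minimal sketch follows; the granule counts and grain parameters are illustrative stand-ins of the same order as Perrin’s gamboge experiments, not his recorded data:

```python
import math

# Sketch of how Perrin's Equation (4) yields Avogadro's number N from a
# vertical-distribution measurement. With W = 3RT/2N (equipartition),
# Equation (4) rearranges to N = R*T*ln(n0/n) / (phi*(Delta - delta)*g*h),
# where math.log is the natural logarithm, as the exponential law requires.

R = 8.314            # gas constant, J/(mol*K)
T = 293.0            # absolute temperature, K (illustrative)
g = 9.81             # gravitational acceleration, m/s^2

a = 0.212e-6         # granule radius, m (one of Perrin's grain sizes)
phi = (4.0 / 3.0) * math.pi * a**3   # granule volume, m^3
Delta = 1207.0       # granule (gamboge) density, kg/m^3 (approximate)
delta = 1000.0       # intergranular liquid (water) density, kg/m^3

h = 10e-6            # height between the two observation levels, m
n0, n = 100, 80      # granule counts at the lower and upper level (illustrative)

N_est = R * T * math.log(n0 / n) / (phi * (Delta - delta) * g * h)
print(f"N ~ {N_est:.2e} per mole")   # comes out on the order of 10^23
```

With counts of this order the estimate lands in the range Perrin reports, which is the point of the exercise: every quantity on the right-hand side is measurable under the microscope.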
He thus recommends not only that we check for an exponential decline in granular density, but also that we calculate the corresponding value of W at the different levels to see whether this value ‘is the same as that which has been approximately assigned to the molecular energy’ ([1910], p. 24), that is, in accordance with equipartition. Perrin’s argument in this context crucially relies on the principle of equipartition. If the principle of equipartition is true, then, as Perrin ([1910], p. 19) points out, the mean translational kinetic energy of the molecules of a gas or a liquid is entirely dependent on the temperature of the gas or liquid. But if the Brownian motion of granules is entirely due to the impact of the molecules of the surrounding emulsion, then by the principle of equipartition the mean translational kinetic energy of the granules should be the same as the mean translational kinetic energy of the molecules of the emulsion, indeed, of ‘any molecule at the same temperature’ ([1910], p. 44). So if one can measure the mean translational kinetic energy of the granules, and show that it is the same as the mean translational kinetic energy of the molecules of the emulsion, we have generated good evidence that Brownian motion is caused by the impact of the molecules of the emulsion; for if Brownian motion had some other cause, one would not expect a priori that the mean translational kinetic energy of the granules would match that of the molecules of the emulsion. (Here I am indebted to an anonymous referee who correctly notes the differences in the analytical approaches to the vertical distribution experiments taken by Perrin in his ([1910], [1916]).) This ‘equipartition’ approach to analysing the vertical distribution experiments leads to a different understanding of what crucial experiment Perrin sees himself providing. 
Rather than focusing on whether the vertical distribution of particles in an emulsion shows the same exponential shape as the vertical distribution of molecules in a gas, we instead determine whether the measured mean translational kinetic energy of granules in an emulsion is the same as the mean translational kinetic energy of ‘any molecule at the same temperature’ (Perrin [1910], p. 44). If they are the same, Perrin ([1910], p. 24) asserts that ‘the origins of the Brownian movement will be established’, and we can use our measurements with emulsive granules to provide a means to determine the various molecular magnitudes; conversely, if they are different, ‘the credit of the kinetic [theory] will be weakened and the origin of the Brownian movement remains undiscovered’ ([1910], p. 21). As it turns out, after completing his experiments with emulsions, Perrin finds that the vertical distributions with emulsions do exhibit an exponential decline, and that the translational kinetic energies of the granules and molecules are the same, thus upholding the validity of the kinetic model, which views emulsive granules as large-sized molecules. From here, the nature of Perrin’s reasoning is initially qualitative: he finds the vertical distribution of particles in an emulsion to exhibit an exponential distribution that matches in shape the exponential distribution law found with gases. ‘This [shape] is just what experiment verifies’, he comments ([1910], p. 41)—an ‘exponential diminution is manifest’ ([1910], p. 44; see also [1916], p. 103). But he also has a quantitative result. Having confirmed the kinetic model, Perrin ([1910], p. 44) suggests that Equation (4) provides us with a ‘well-defined value of the granular energy W’, one that does ‘not depend on the emulsion chosen, and will be equal to the mean energy w of any molecule at the same temperature’, as required by the equipartition of energy. 
‘Or, what comes to the same’, he suggests, where R is the gas constant and T is the absolute temperature, ‘the value N′ of the expression 3RT/2W will be independent of the radius and density of the granules studied and will be equal to the expression 3RT/2w, that is, […] to Avogadro’s constant N’ ([1910], p. 44). Perrin arrives at a similar conclusion in his ([1916]), using instead Equation (3): since with Equation (3) (but not with Equation (2)) we can experimentally measure all the variables, except for Avogadro’s number N, this allows us to calculate N. For these calculations, Perrin varies somewhat the conditions of the experiment, employing emulsions with (i) different sizes of emulsive grains (from 0.14 to 6 microns), (ii) different intergranular liquids (water, sugary water, and glycerol), (iii) different temperatures for the intergranular liquid (−9°C to 60°C), and (iv) different kinds of emulsive grains (gamboge and mastic) (see, for example, Perrin [1910], p. 45, [1916], pp. 103–4). With these varying methods, Perrin arrives at the following value for N ([1910], p. 46, [1916], p. 105, respectively): N = 70 · 10²², or 65 · 10²² < N < 72 · 10²². What does Perrin infer from these results? Here, his analysis in ([1910]) diverges from the analysis he gives in ([1916]). In the former he emphasizes the agreement he finds between W (the granular energy) and the corresponding value for w (the molecular energy) at the same temperature. He comments: ‘I do not think this agreement can leave any doubt as to the origin of the Brownian movement’ ([1910], p. 46). By comparison, in his ([1916]), he emphasizes the agreement between the value he calculates for N and the corresponding estimate for N he calculates based on the use of van der Waals’ equation in representing the viscosity of gases, generating a value for N = 62 · 10²² (see Perrin [1916], p. 105). ‘Such decisive agreement’, he says, ‘can leave no doubt as to the origin of the Brownian movement’ ([1916], p. 105). 
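The relation N′ = 3RT/2W that Perrin quotes can also be inverted to show the energy scale his measured N implies for a single granule or molecule. A quick check, using his preferred experimental value for N (the room-temperature value of T here is an illustrative choice, not Perrin’s stated condition):

```python
# Invert Perrin's relation N = 3RT/2W to get the mean translational kinetic
# energy W that his measured N implies for a granule (and, by equipartition,
# for any molecule at the same temperature).

R = 8.314       # gas constant, J/(mol*K)
T = 290.0       # absolute temperature, K (illustrative room-temperature value)
N = 68.2e22     # Perrin's preferred experimental value for Avogadro's number

W = 3 * R * T / (2 * N)
print(f"W ~ {W:.2e} J")   # a few times 10^-21 joules
```

The result, a few zeptojoules, is the (3/2)kT energy scale of thermal motion, and it is the same for a visible granule as for an invisible molecule; this is the quantitative content of the equipartition step in Perrin’s argument.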
From here, Perrin argues in similar fashion in both ([1910]) and ([1916]) in a key passage effectively highlighted by Psillos ([2011a], pp. 181–2, [2011b], p. 355, [2014], pp. 154–5). Here is the passage, taken from (Perrin [1910], p. 46); a practically identical passage occurs in (Perrin [1916], p. 105): To understand how striking this [agreement] is, it is necessary to reflect that, before this experiment, no one would have dared to assert that the fall of concentration would not be negligible in the minute height of some microns, or that, on the contrary, no one would have dared to assert that all the granules would not finally arrive at the immediate vicinity of the bottom of the vessel. The first hypothesis would lead to the value zero for [N] while the second would lead to the value infinity. That, in the immense interval which a priori seems possible for [N], the number found should fall precisely on a value so near to the value predicted, certainly cannot be considered as the result of chance. This passage contains two arguments, the first drawing upon the qualitative fact noted above, and the second focusing on the quantitative measurement of N. In more detail, the qualitative argument is as follows: With an emulsion, Perrin thinks we would expect, a priori, either that the granules would be evenly distributed throughout the fluid (corresponding to N = 0) or that they would all sink to the bottom (corresponding to N = ∞). Both of these options correspond to the hypothesis of continuity since if N = 0, there are no atoms, and if N = ∞, a collection of atoms is indistinguishable from an infinitely divisible fluid. But what we find empirically is that 0 < N < ∞, in other words, that the hypothesis of discontinuity is true—there are discrete atoms whose size is ascertained once we have calculated the value of N. Psillos ([2014], p. 
155) recognizes the importance of this qualitative argument: ‘the exponential distribution of the Brownian particles is enough to discredit the hypothesis that matter is continuous’. But note further, as Psillos indicates, the particles in the emulsion exhibit Brownian motion, and if we are granting the kinetic model, this means that the molecules in a gas exhibit Brownian motion as well, producing an exponential distribution. Thus, we are able to explain the Brownian motion of microscopic particles as being due to the (Brownian) movements of molecules, if the analogy between emulsions and gases holds up. We have, as Perrin promised, a more direct proof of the Brownian motion of molecules than the one offered by Gouy, where the hypothesis that molecules exhibit Brownian motion is not much more than an explanatory speculation. At this stage, we might reflect on how far Perrin thinks he has gotten in providing a molecular explanation of Brownian movement. At the very least, the molecular theory of Brownian movement identifies the motion of molecules as the cause of the Brownian movement of microscopic particles. But arguably Perrin shows more than this—that molecules themselves exhibit Brownian movement. If the kinetic model is true—if, as Perrin believes, we should view the granules in emulsions as though they are themselves large-sized molecules—then it follows that molecules themselves exhibit Brownian movement, as the granules are observed to do. Moreover, we have a better, causal explanation of the observed Brownian movement of micro-particles, if molecules themselves exhibit Brownian movement. There is textual evidence that Perrin does go the extra step of ascribing Brownian movement to molecules. For instance, as he comments in his ([1910], p. 46), ‘Brownian movement offers us, on a different scale, [a] faithful picture of the movements possessed, for example, by the molecules of oxygen dissolved in the water of a lake’. 
Also, after noting with his experiments that ‘molecular movement has not [strictly speaking] been made visible’—that is, granules in emulsions are viewed only as though they are themselves large-sized molecules—Perrin ([1916], p. 105) nevertheless maintains that ‘the Brownian movement is a faithful reflection of [molecular movement], or, better, it is a molecular movement in itself, in the same sense that the infra-red is still light’. In other words, infrared light, though invisible, shares the same properties as visible light, just as molecules, though invisible, share the same properties as visible granules (that is, they exhibit Brownian movement). Overall, by demonstrating that molecules exhibit Brownian movement, as modelled by granules in a uniform emulsion, Perrin believes himself to have provided a more ‘direct’ proof of molecules than what had been suggested by Gouy’s explanatory approach.

3 Perrin’s Reasoning: The Quantitative Argument

The quantitative argument, on the other hand, involves taking into account the values for N we saw Perrin calculating above by reference to vertical distributions of granules in emulsions—again, N = 70 · 10²² ([1910], p. 46), or 65 · 10²² < N < 72 · 10²² ([1916], p. 105)—and comparing these values to 62 · 10²², the value of N generated using van der Waals’ equation (see Perrin [1910], p. 18, [1916], p. 105). Recall that Perrin remarks on the surprising convergence of these values. But the weight he puts on the value of N as derived with van der Waals’ equation is somewhat surprising since applying this equation has for him an element of uncertainty. Whereas with the vertical distribution experiments ‘the mean departure [(that is, error)] does not exceed 15%’, he notes that ‘the number given by the equation of Van der Waals does not allow of this degree of accuracy’ (Perrin [1910], p. 46). 
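The ‘decisive agreement’ Perrin cites can be put in percentage terms. A short check, using only the two values quoted above, that the van der Waals estimate sits inside the roughly 15% error Perrin assigns to the vertical-distribution measurements:

```python
# Relative discrepancy between Perrin's emulsion-based value for N and the
# van der Waals estimate (both in units of 10^22, as quoted in the text).
n_emulsion = 70.0   # from the 1910 vertical-distribution experiments
n_vdw = 62.0        # from van der Waals' equation

discrepancy = abs(n_emulsion - n_vdw) / n_emulsion
print(f"{discrepancy:.1%}")   # 11.4%, within the ~15% error Perrin allows
```

So the two determinations agree to within the stated experimental error, even though, as the text notes, the van der Waals route carries an additional, unquantified uncertainty of its own.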
Indeed, Perrin works extensively on generating values of N using his vertical distribution experiments that he regards as the most accurate: these values are, from (Perrin [1910], p. 48), using gamboge grains with a radius of 0.212 microns, N = 70.5 · 10²², and from (Perrin [1916], p. 107), using (presumably) gamboge grains with a radius of 0.367 microns, N = 68.2 · 10²². This latter number he regards as the most accurate, better than his best ([1910]) number. Thus, the vertical distribution experiments not only serve to establish the reality of molecules by means of a qualitative argument. They also provide a means to generate the most accurate value for N. How do we know that, for Perrin, the vertical distribution experiments provide the most accurate values for N? First of all, as we shall see, these values for N are subsequently cited by Perrin to ‘confirm’ or ‘verify’ the validity of other approaches that generate a value for N utilizing different areas of physical theory, such as the displacement of Brownian particles, the tendency of molecules to diffract sunlight in the upper atmosphere, the capacity of a molecule to carry a charge (qua ion) during the ionization of a gas, and so on. But perhaps more directly, Perrin is clear in his ([1910]) and ([1916]) that these values are preferred. In his ([1910], p. 47), the value for N of 70.5 · 10²² is cited under the section heading, ‘Precise determination of Avogadro’s constant’. In his ([1916], p. 107), the value of N = 68.2 · 10²² is cited under the section heading, ‘Exact determination of the Molecular Magnitudes’, where Perrin comments, ‘by studying emulsions [and generating the value of 68.2 · 10²² for N] we are really able to weigh the atoms and not merely to estimate their weights approximately’. But does using the vertical distribution experiments to derive the preferred value for N create the basis for an improved argument for the reality of molecules? 
This is, once more, the question we posed at the beginning of the article concerning the relevance of the value of N to the truth or falsity of the claim that atoms and molecules exist. Psillos and Achinstein both think that the most accurate value of N Perrin arrives at—68.2 · 10²²—serves to support the atomic hypothesis. How does this work? Psillos and Achinstein define the ‘atomic hypothesis’ in a similar, idiosyncratic way. According to Achinstein ([2001], p. 257), the ‘atomic hypothesis’ states that ‘chemical substances are composed of molecules, the number N of which in a gram molecular weight of any substance is (approximately) 6 × 10²³’. Psillos ([2011b], p. 187) suggests parenthetically that his definition of the atomic hypothesis is ‘roughly equivalent to Achinstein’s’, and indeed his Bayesian calculations are incomprehensible unless atomism is defined along these lines. Clearly, then, if we define the atomic hypothesis as not only requiring the fundamental discontinuity of matter, but as also including a specific value for N, it follows that generating any other value for N by means of experiment will disconfirm the atomic hypothesis. Conversely, generating a value for N cohering with this value—such as Perrin’s best experimental value, N = 68.2 · 10²²—confirms this atomic hypothesis, and does so uniquely. For Psillos, this is the sense in which Perrin believes himself to have subjected the atomic hypothesis to a crucial test and has made it ‘difficult to deny the objective reality of molecules’ ([1910], p. 89; similar comments are made in Perrin [1916], p. 105). Where Psillos differs from Achinstein is in emphasizing that Perrin has done more than just provide evidence for the atomic hypothesis. Rather, Perrin (for Psillos) has generated a definitive, successful test of the atomic hypothesis (as including a value for N), one that for Perrin’s contemporaries, such as Poincaré, was pivotal in establishing beyond doubt the reality of molecules. 
So with the quantitative argument we have an understanding of the relevance of N to the claim that molecules are real, as the atomic hypothesis (for Psillos and Achinstein) itself embeds an assumption about the value of N, and this hypothesis is confirmed if the available empirical data attest to this value of N. Moreover, we now see the relevance of robustness reasoning to the atomic hypothesis since, purportedly, it is Perrin’s strategy to validate his experimentally derived value for N by noting how this value is generated by means of a variety of experimental methods, methods dealing with the viscosity of gases, Brownian motion, critical opalescence, blackbody radiation, various radioactive phenomena, and so on. As Perrin ([1916], p. 207) says, because ‘the numbers thus established […] agree among themselves, without discrepancy, for all the methods employed, the real existence of the molecule is given a probability bordering on certainty’. So what is the status of this robustness argument? Does it validate Perrin’s experimentally generated value for N? Psillos, in a footnote in his ([2011b]), makes an important observation that, in fact, casts doubt on the value of Perrin’s purported robustness argument for the accuracy of N, without apparently realizing that this observation undermines his own endorsement of the value of robustness reasoning. Wesley Salmon, Psillos notes, had interpreted Perrin as engaging in a form of common cause reasoning in which, to explain the convergence of values for N, one cites the reality of molecules as the common cause (see Salmon [1984]). But there is a problem, Psillos thinks, with interpreting Perrin in this way, since most of the diverse ways of calculating N were known to Perrin prior to his own experiments with emulsions. Thus, such a common cause—that is, robustness reasoning—approach does not, Psillos ([2011b], p. 
358) thinks, ‘adequately explain Perrin’s own contribution to the confirmation of the atomic hypothesis’, a contribution that apparently put our confidence in the reality of molecules ‘beyond doubt’. If all Perrin did was add a few more experimental methods that, yet again, converged on the same value for N, it’s not clear why the already numerous convergent methods were not enough, nor why further methods, beyond what Perrin provided, would not also be needed. At this stage, Psillos adds a second criticism of Salmon’s common cause argument for the reality of molecules, which also counts as a critique of robustness reasoning. When one notes the convergence of the values of N and explains this convergence by citing the reality of molecules, or more precisely, the reality of molecules in an amount prescribed by Avogadro’s number, one does not thereby necessarily say anything about what molecules are or what properties they have. As Psillos ([2011b], p. 358) says, ‘it is not clear in what sense exactly the molecules are the common cause of the agreement perceived in the various calculations of Avogadro’s number’. A variety of causes could be postulated as leading to this convergent set of values for N. N could simply stand as the number of very small, coherent, though divisible components of a substance, perhaps something like emulsive granules though at a sub-microscopic level. Here it would help if we could rely on some pre-experimental, truthful presupposition about what exactly N stands for. But, as van Fraassen ([2009], p. 23) points out, Perrin was working in the context of classical kinetic theory with, from our perspective, a somewhat impoverished view of atomic reality. Referencing Dalton, Perrin ([1916], p. 
10) describes atoms as ‘a fixed species of particles, all absolutely identical; these particles pass, without ever becoming subdivided, through the various chemical and physical transformations that we are able to bring about, and [are] indivisible by means of such changes’. Perrin makes similar comments in his 1926 Nobel Lecture. Speaking about ‘the simple substance hydrogen [which] can be regained and which passes, disguised but indestructible, through our various reactions’, he says: Dalton supposed, and this is the essential point of the atomic hypothesis, that this substance is formed by a definite variety of particles which are all identical and which cannot be cut into pieces in the reactions which we can produce, and which for this reason are called atoms. (Perrin [1926], pp. 140–1) To be sure, Perrin has a more nuanced understanding of atoms in which atoms are identified as having internal parts: in his 1926 Nobel Lecture, he remarks that an atom has a discontinuous structure, modelled on a planetary system (Perrin [1926], p. 163). Thus his citation of a Daltonian perspective should not be over-emphasized. The key point to keep in mind, in any event, is that common cause, robustness reasoning, if not simply uninformative about the reality of the entity being argued for, could be positively misleading if it serves to vindicate the wayward, theoretical presuppositions of an experimenter. There is nothing in the sort of common cause, robustness argument that Salmon ascribes to Perrin that guards us against such a potential bias. The problem we are citing with robustness reasoning, that of not providing sufficient information about the nature or properties of inferred entities, is critical where we are dealing with unobservables, such as atoms and molecules. Does robustness reasoning work, alternatively, with observables? Take a simple case where someone is holding a pencil and asks different observers what they see. 
Suppose Observer 1 sees the pencil in clear view and asserts that it is a pencil. Should Observer 1’s confidence that he is really seeing a pencil be increased by Observer 2’s assent that it is a pencil as well? Not at all. Should Observer 2 deny that it is a pencil, and assert instead that it is a rabbit, Observer 1 would just dismiss Observer 2’s observation as delusional. That is, Observer 2’s concurrence on the matter, that what is observed is a pencil, is irrelevant. But what if Observer 2’s denial is more subtle, that what is seen is not a pencil but a pen that looks like a pencil? Then Observer 1 could take a closer look and either continue to affirm that it is a pencil, in which case Observer 2’s dissent is again dismissed as delusional, or change his view to its being a pen, as revealed by his actual inspection. In such a case, Observer 2’s concurrence again becomes irrelevant, given Observer 1’s better and more direct evidence. Or perhaps, even more subtly, it may be difficult to tell whether it is a pen or a (mechanical) pencil. Here, if Observer 1 is still confident in distinguishing between pens and (mechanical) pencils, Observer 2’s concurrence or lack thereof is once more irrelevant. But what if Observer 1 is less than confident about what he sees, perhaps because he is unable to have a second, closer look at the item? In this case, he may defer to Observer 2’s expertise on the matter, in which case Observer 2’s testimony becomes authoritative and Observer 1’s opinion becomes irrelevant. Finally, suppose both Observers 1 and 2 are uncertain about what they see. Would a convergence in opinion matter here? Not necessarily. The Observers may suspect that they are suffering from the same lack of information, or the same delusion. The point to be made here is that the bare convergence of opinion isn’t, ultimately, what matters in validating one’s beliefs about what one observes. 
What matters is which observational process is more reliable, or who is the more reliable observer. In other words, robustness reasoning, where we are considering observables, seems to be irrelevant as regards the accuracy of observation reports, to go along with its lack of informativeness when it comes to unobservables. We have said that with observables, there are ways to gain knowledge about them independently of robustness reasoning, by relying on reliable processes or identifying reliable observers. What then about unobservables? How can we learn about them without the use of reliable observation, and without the option of robustness reasoning? This is the situation with Avogadro’s number. Though one cannot observe the value of N directly, one can, with Perrin’s vertical distribution experiment using emulsions, indirectly determine N from measurable parameters, such as the numbers of emulsive particles at various heights, the density of the liquid containing the emulsive particles, the density of the emulsive particles themselves, and so on. The success of this approach to measuring N will depend on the reliability of Perrin’s experimental setup and the accuracy of the assumptions he makes in interpreting this setup. From here we are led to what now has the appearance of a robustness argument, one that compares the value of N as derived from the most reliable version of the vertical distribution experiment with the value of N as derived by diverse, other experimental strategies. It is an argument that is not entirely uninformative, since each of these diverse strategies teaches us different things about molecules, more than that they obey Avogadro’s hypothesis and achieve vertical densities as prescribed by Avogadro’s number. Let us look at two examples of how this new information is produced. 
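Before turning to these examples, it may help to make explicit how N depends on the measurable parameters of the vertical distribution experiment. The following is a standard textbook reconstruction of the sedimentation-equilibrium relation (not Perrin’s own notation), for spherical grains of radius $a$ and density $\rho$ suspended in a liquid of density $\rho_0$:

```latex
\ln\frac{n_0}{n_h} \;=\; \frac{N}{RT}\cdot\frac{4}{3}\pi a^{3}\,(\rho - \rho_0)\, g\, h
```

where $n_0$ and $n_h$ are the grain concentrations at two heights separated by $h$, $g$ is the gravitational acceleration, $R$ the gas constant, and $T$ the absolute temperature. Every quantity here other than N is measurable, so counting grains at two heights yields N directly.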
First, Perrin is aware that Einstein, using Maxwell’s distribution law for molecular speeds and Stokes’ law, had previously formulated an equation relating the mean square displacement of emulsive, Brownian particles to the time elapsed, the absolute temperature, the radius of an emulsive particle, the viscosity of the fluid, the gas constant, and N, Avogadro’s number. Call this ‘Einstein’s displacement equation’. By means of a suitable emulsion Perrin could calculate values for all the quantities except for N, and so could calculate N. Doing this revealed for Perrin a value for N consistent with the most accurate value of N generated using his vertical distribution experiments. With this convergence of values of N, first by means of examining vertical distributions of emulsive particles and secondly by measuring the displacements of these emulsive particles, we then appear to learn something informative about molecules: not only do they satisfy Avogadro’s hypothesis and Avogadro’s number, but they also move in accordance with Einstein’s displacement equation. A second example makes reference to Lord Rayleigh’s explanation of the blueness of the sky. Rayleigh theorized that the blueness of the sky is a result of the diffraction of sunlight by the molecules in the upper atmosphere. To this end Rayleigh formulated an equation that accounts for this blueness, and for which all the variables are measurable except for N, Avogadro’s number. As a result, it was possible to collect spectro-photometric data to arrive at a value for N, using different sorts of data than those used by Perrin in his vertical distribution experiments. As such, using the spectro-photometric data generated by M. Sella on Monte Rosa, Perrin derived a value for N ‘comprised within 30 · 10²² and 150 · 10²²’ (Perrin [1910], p. 79), and on the basis of a ‘long series of measurements [made] on Monte Rosa by M. Leon Brillouin’, calculated N to be 60 · 10²² (Perrin [1916], p. 142). 
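Einstein’s displacement equation, referred to in the first example above, can be written in a standard modern form (again a reconstruction, not Perrin’s or Einstein’s original notation):

```latex
\overline{x^{2}} \;=\; \frac{RT}{N}\cdot\frac{t}{3\pi\eta a}
```

where $\overline{x^{2}}$ is the mean square displacement of a Brownian grain along one axis in time $t$, $\eta$ is the viscosity of the fluid, $a$ the grain radius, $R$ the gas constant, and $T$ the absolute temperature. Since every quantity except N is measurable for a suitable emulsion, observed displacements determine N.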
These values for N converge on the preferred values generated by the vertical distribution experiments, so here again we have learned something informative about molecules: that they diffract sunlight in a way that causes the apparent blueness of the sky, in accordance with Rayleigh’s theoretical model. Now, despite these examples having the appearance of components in a robustness argument, they are actually doing something quite different. Perrin does not view either method of calculating N, whether using Einstein’s displacement equation or Rayleigh’s diffraction model, as epistemically on a par with his vertical distribution experiments. Rather, Perrin believes he has confirmed Einstein’s displacement equation since, by applying this equation to emulsions, he generates the value N = 70 · 10²², which coheres with his preferred value for N produced in the vertical distribution experiments (consider, for example, the title of Perrin ([1910], Section 29): ‘Experimental confirmation of Einstein's theory’). Again, in his ([1916], p. 123), Perrin states that the ‘remarkable agreement [between the two values of N, one produced by the vertical distribution experiments and the other generated by an application of Einstein’s displacement equation] proves the rigorous accuracy of Einstein's formula’. Similarly with Lord Rayleigh’s explanation of the blueness of the sky: Perrin believes he has confirmed Rayleigh’s theory since the value for N found using Sella’s data is between 30 · 10²² and 150 · 10²², and so Perrin maintains, ‘in so far as the order of magnitude is concerned, [the] very interesting theory of Lord Rayleigh is verified’ ([1910], p. 79; Perrin arrives at a similar conclusion using Brillouin’s results—see [1916], p. 142). 
In fact, barring a few exceptions, Perrin arrives at similar judgements with all the diverse sorts of methods he describes for generating values for N: since they generate values for N that cohere with Perrin’s preferred value of N = 68.2 · 10²², Perrin concludes that the theoretical underpinnings to these diverse methods are thereby confirmed, and with that confirmation we have developed to some extent the theory of what a molecule is like. These other, theoretical approaches to calculating N are, as I shall say, ‘calibrated’ using Perrin’s preferred value for N. Of course, there is a more familiar use of the term ‘calibration’, referring to cases where a measuring instrument is ‘calibrated’ by reference to a standardized measuring instrument. The analogy is intentional: in the case we are considering, what are calibrated are theoretical assumptions, not measuring instruments, and here the calibration occurs by reference to an established theoretical assumption, in this case the vindicated kinetic model that leads to the preferred value for Avogadro’s number, N. It follows that all other methods of producing N should match this value for N, and when they do, their own theoretical assumptions are thereby confirmed. What this means is that the ‘robustness reasoning’ interpretation of Perrin’s method, where N is validated simply by having been generated using a variety of methods, is mistaken. But this leaves us with a difficult interpretive matter, for again as Perrin ([1916], pp. 206–7) explicitly states, Our wonder is aroused at the very remarkable agreement found between values derived from the consideration of such widely different phenomena. 
Seeing that not only is the same magnitude obtained by each method when the conditions under which it is applied are varied as much as possible, but that the numbers thus established also agree among themselves, without discrepancy, for all the methods employed, the real existence of the molecule is given a probability bordering on certainty. If the ‘reality’ of molecules is not being ‘given a probability bordering on certainty’ because of a robustness argument, what then is the source of this realist attitude? We examine this question in the next section. 4 Perrin’s Realism A key problem with the common cause argument, cited above, is that our understanding of the common cause may be quite minimal, perhaps nothing more than that it is ‘something’, that is, the common cause of a number of diverse phenomena. So if we insist on being realist about this common cause, the question arises, ‘What are we being realist about?’, and here we need some empirically confirmed elaborations on the nature of this cause. What Perrin notices is that armed with an experimental determination of N that is relatively authoritative—that is, the value of N arrived at through his vertical distribution experiments—and given that various, divergent theoretical perspectives lead to similar values for N, it follows that these differing theoretical perspectives are verified (or ‘calibrated’, as I put it). This ‘remarkable agreement’ then allows a researcher like Perrin to fill out the theory of what a molecule is like. As described in the confirmed, theoretical derivations of N that Perrin recounts, molecules are shown to have a variety of properties, and asserting the real existence of molecules—now more fully understood theoretical entities—becomes a meaningful option. To draw a simplistic comparison, suppose I tell you that I have something in my hand. Should you believe that it is a real thing, and that I am not lying? 
In one case—the robustness case—I introduce the testimony of a number of people from diverse vantage-points who separately and independently confirm that there is something in my hand. Should you now be a realist about this thing? The problem is that you have very little knowledge about what this thing is, and so have not much idea what to be realist about. Now suppose the independent testimonies of these individuals provide more. For example, one person says the object is circular and flat, another says it’s made of metal, another says it has the engraving ‘25 cents’ on it, and so on. The case for realism thus becomes stronger because now we have details about what this thing is. That is, we know what we are asked to be realist about, in all likelihood, a quarter. Whereas before a realist view of the object was practically vacuous, it is now substantive and informative. But of course whether this view of the object truly is substantive and informative depends on whether the independent reporters are reliable producers of information. Left to themselves to produce reports without any epistemic standards to meet, the reporters could leave us reconstructing what this object is on the basis of misinformation, and this would be true despite the surprising convergence of their reports, that is, despite their unanimous agreement that there is an object in my hand. So, how could one go about verifying the reliability of their reports? There are a variety of strategies here, but the main one is to identify one of these independent reporters as having an authoritative method of gaining information about the object in my hand. Suppose, for example, one of the reporters uses a metal detector and thus her report that the object is metal is strongly validated. With that validation you can test, or ‘calibrate’, the other reports. Do they all generate the information that the object is metal? 
Say, for the individual who says that the object is circular and flat, does she also generate the result that it is metal? If so, we have reason to trust her further judgement about the object, that it is circular and flat, and in this way the nature of the object is theoretically developed. Similarly for the person who says that the object is engraved with ‘25 cents’—her judgement is confirmed if, from her perspective, she’s also led to conclude that the object is metallic. With these accessorized forms of confirmation, where reporters have their claims about an object confirmed if they are able to correctly report on a standardized result, we arrive at a picture of this object replete with a set of confirmed properties. And, of course, this is what we know real objects to be—they’re not just abstract entities that ‘exist’, but come with a full set of properties. As such, a realist attitude about these objects becomes more legitimate the more ideas we have about their properties, and the more these ideas are confirmed. Let us now carry the analogy back to Perrin and molecules. Here the reliable reporter is the vertical distribution experiment, the most reliable version of this experiment that generates an estimate of N = 68.2 · 10²². The other reporters are the other methods of generating N. These other methods fill out the theory underlying the nature of molecules, and as their estimates of N are confirmed by agreeing with the authoritative vertical distribution experiment, it follows that the ways in which they have developed the theory of molecules are confirmed as well. In this way, Perrin fills out our theory of molecules so that it can stand as a substantive object for realist interpretation. This approach to understanding Perrin’s realist perspective on molecules is ironically anticipated by the anti-realist Bas van Fraassen. He comments: […] theory needs to be informative enough to make testing possible at all. 
Thus the extent to which we can have evidence that bears out a theory is a function of […] how logically strong and informative a theory is, sufficiently informative to design experiments that can test the different parts of the theory relative to assumptions that the theory applies to the experimental set-up. ([2009], p. 23) In being sufficiently informative in this way, we have, van Fraassen thinks, the resources to make ‘the kinetic theory into […] a truly empirical theory’ (p. 23). Of course, as is well known, van Fraassen cautions against adopting a realist attitude towards unobservables. One reason is that a realist attitude is epistemically risky given the inherent limitations on ampliative inferences (see van Fraassen [2007], p. 343). Additionally, van Fraassen maintains that a (scientific) realist attitude is largely unnecessary since, for him, empirical adequacy can do all the relevant epistemic work for scientists. Along these lines he cautions against reading Perrin as a realist about molecules, lest we attribute to Perrin an excessively optimistic assessment of his capabilities as an experimenter. Still van Fraassen sees merit in developing theories and in contemporaneously setting up empirical tests that assess these developments. A good theory extends the scope of the claims it considers to be empirically testable, and as the theory develops, so does the extent to which the theory is empirically testable. But these features of theoretical development can be problematic for a constructive empiricist. Consider, first of all, that scientists often develop their theories in a way that renders these developments untestable, such as when they make use of unobservables in their explanations. Sometimes these unobservables become empirically testable later on with advances in observational strategies. Pathogenic microorganisms, for example, were once postulated as disease carriers, and later became observed with advances in microscopy. 
But there was no guarantee that this would happen, and no such guarantee was needed for scientists to feel authorized in postulating these unobservables. Or it could happen that, with improved observational technology, the postulated unobservables are shown not to exist, or to exist otherwise than how they are conceived. Indeed, this may be expected if the scientific field is relatively new and there’s lots of room for innovative speculation. In either scenario, the prospective empirical adequacy of a developed theory is questionable, and scientists often suspect this to be the case—but that doesn’t stop them from making developments along these lines. To be sure, they aspire for empirical adequacy in the long run, and there is indication in van Fraassen that empirical adequacy is to be interpreted in just this way (see, for example, van Fraassen [2009], p. 7, Footnote 5). But the long run, in this sense, might be a very long run, and legitimate scientific theories might spend an extended period of time lacking substantive empirical support, and indeed may never gain much empirical support at all. If these meta-theoretical observations are true, there is a significant challenge facing the anti-realist in explaining why scientists engage in theoretical developments, since with these developments the goal of empirical adequacy is uncertain, and may even be unlikely where we have a novel science. It follows that the claim made by the constructive empiricist, that empirical adequacy is all scientists ultimately seek, does not capture the actual practice of science. On the other hand, the realist has a compelling picture for why scientists engage in empirical, scientific investigations that have a high probability of not issuing in empirically adequate theories. In short, they’re looking for the truth, and to achieve this they need to be speculative and imaginative with their theorizing, with the result that they often formulate theories that are empirically disconfirmed. 
If empirical adequacy were their goal, scientists would do well to be highly conservative with their theorizing. That they are liberal with their theorizing is a sign that empirical adequacy, at least in the short term, is not necessarily a priority for them. At this stage the van Fraassen-style anti-realist could respond that the transitory deviation of a theory from the standard of empirical adequacy is not incompatible, in the long run, with empirical adequacy: scientists might accept theories that are not successful empirically, temporarily, in the hopes that significant empirical grounding will come about later on. The anti-realist might then suggest that the sort of ‘filling out’ I have described as essential to a realist interpretation of an unobservable could be embraced by an anti-realist. The more ways of empirically grounding a theory the better, the anti-realist might point out, so in what way does the requirement of ‘filling out’ a theory improve the case for realism? The realist needs to go beyond van Fraassen at this point, and show that this ‘filling out’ involves something more than empirical grounding. But how? We have said that to be a realist about an unobservable one needs to conceptualize this unobservable as complemented with a set of confirmed properties. Again, we need to do this because that is what real objects are like: they come with a full set of properties, and a realist attitude about these objects becomes more legitimate the more confirmed ideas we have about their multiple properties. However, from an anti-realist perspective, it’s not clear that one would need to provide an elaborate, confirmed description of an unobservable since there’s no commitment on the anti-realist’s behalf to the real existence of an unobservable. 
For the anti-realist, a theory receives various forms of empirical confirmation, just as it does for a realist, but the reason for this elaboration of the theory needs to find its motivation elsewhere than in the ‘filling out’ of the properties of something—an unobservable—the scientist is committed to believing exists. The main candidate for this motivation is ostensibly to provide ‘better confirmation’ for the theory, and typically, for the notion of better confirmation to make sense from an epistemological perspective, it must amount to increasing the probability of the theory. Accordingly, the anti-realist scientist strives to show that a theory is well confirmed by providing for the theory multiple forms of evidential support, thereby increasing the likelihood that the theory is true. Still he stops short of saying, ‘the theory is now shown to be true’, since with van Fraassen we have an enduring scepticism about the power of inferences that go beyond observables. By contrast, the ‘traditional realist’—such as Psillos, Achinstein, or Chalmers—is more optimistic about the power of induction: for realists such as Psillos, Achinstein, and Chalmers, there comes a time when the evidence is strong enough, and a confirmed theory likely enough to be true, that a realist attitude about unobservables is legitimate. This is the usual dynamic of the debate between traditional realists and anti-realists, and as I hope to show in the next section, it is a debate that has no natural resolution. We need, I argue, a different approach to understanding realism along the lines I have suggested above, where realism is warranted given that an unobservable has a confirmed, filled-out set of properties. This, I submit, is an approach we find in Perrin’s reasoning on behalf of the reality of atoms and molecules. 
5 Psillos, Achinstein, Chalmers, and van Fraassen on Understanding Perrin We have suggested as an alternative interpretation of Perrin’s work that Perrin’s aim in citing multiple empirical determinations of the value of N was to simply extend the theory of the atom, not to provide additional confirmation for the standardized value of N generated by the preferred version of the vertical distribution experiment. In other words, if an alternate, theoretically informed empirical methodology generates the standardized value for N, it follows that the theory underlying this methodology is confirmed and the theory of the atom is thereby filled out. Along these lines we have Perrin’s ([1916], p. 10) ‘surprising agreement’ that he considers so valuable in establishing a realist interpretation of molecules, where molecules are understood as combinations of atoms indivisible by ‘the various chemical and physical transformations that we are able to bring about’. It is the agreement we find between a diverse set of theoretical approaches to determining N that vary in degrees of reliability, and a theoretical approach that standardizes the value of N and ‘calibrates’ these other approaches. In this way Perrin gives us the best of both worlds. He furthers the theory of the atom by connecting this theory to a more filled-out and interconnected picture of the microphysical world, thus providing atoms and molecules with an elaborated theoretical description. Moreover, these cognate descriptions are confirmed since they generate the ‘correct’ value for N. By this means the theory of atoms and molecules, in Perrin’s hands, becomes an improved candidate for a realist interpretation in which atoms and molecules are shown to have diverse and extended properties with the attribution of each property receiving empirical confirmation. As he says, somewhat optimistically, ‘the real existence of the molecule is given a probability bordering on certainty’. 
Compare now this story to the one provided by Psillos, Achinstein, and Chalmers. Both Psillos and Achinstein, as we have noted, redefine the atomic hypothesis to include a claim that N is approximately 60 × 10²², and as such the bulk of Perrin’s work in defending the atomic hypothesis is to show that 60 × 10²² is the empirically correct value for N. Here Psillos makes clear that the correctness of this value of N is ‘established by a series of experiments involving different methods and different domains’ ([2011b], p. 357; see also [2011a], p. 183 and [2014], p. 157)—he lists ‘the measurement of the coefficient of diffusion; the mobility of ions; the blue color of the sky (the diffraction of the sunlight by the atmospheric molecules); the charge of ions; radioactive bodies; and the infrared part of the spectrum of the black-body radiation’ ([2011b], p. 355; see also [2011a], p. 182 and [2014], p. 155). Analogously, Achinstein finds there to be an epistemically crucial convergence of values for N. ‘C’ for Achinstein ([2001], p. 262) refers to the empirical result that the vertical distribution experiments generate a value for N = 60 × 10²², a result buttressed by the fact that this value for N is stable ‘when values for n′ and n [in Equation (3)] are varied’—that is, N is the same when calculated at different, vertical heights. According to Achinstein ([2001], p. 262), C is stronger, ‘potential’ evidence for the assertion that Avogadro’s number is 60 × 10²², the greater the degree to which C ‘contains a larger, more varied, or more precisely described [empirical] sample’. For his part, Chalmers ([2011], p. 723) regards Perrin as having shown that the ‘density distribution [of particles in an emulsion] and their mean displacement and rotation were in accordance with the assumption that the particles were in random motion and obeyed Newton’s laws’. What this means for Chalmers ([2011], p. 
724), given the assumption that ‘a system of Brownian particles differs from a gas only in degree, but not in kind’—that is, the particles ‘differ from molecules only insofar as they have a much greater mass’—is that by examining emulsions one can ascertain various properties of molecules. For example, in measuring the kinetic energy of Brownian particles, ‘Perrin had in effect measured the mean kinetic energy of the molecules of a gas [which] enabled him to derive a value for Avogadro’s number’ ([2011], p. 724). It follows that the vertical distribution experiments along with experiments inspired by Einstein’s theoretical equations regarding the displacement and rotation of Brownian particles give Perrin ‘three independent ways of measuring N’ that surprisingly converge ([2011], p. 725), a convergence Chalmers regards as an essential part of Perrin’s argument for the reality of molecules. Here Chalmers notes his concurrence with the views of van Fraassen, for whom ‘concordance’ forms a key part of Perrin’s proof of atomic reality (‘concordance’ is the term for robustness van Fraassen adopts from Hermann Weyl; see van Fraassen [2009], p. 11). The difference between Chalmers and van Fraassen on this point is simply how strong this proof is. Always the sceptic, van Fraassen doesn’t think it is strong enough to go further than empirical adequacy. He calls Weyl’s concordance requirement an ‘empirical constraint’, nothing more (van Fraassen [2009], p. 21). More on the realist side, Chalmers ([2011], p. 728) thinks Perrin has ‘provided powerful arguments from coincidence for the claim that matter is made up of molecules in ceaseless motion’ and that Perrin showed ‘that the kinetic theory gets things right to the extent that there can be no serious doubt that there are molecules whose motions are the cause of Brownian motion and the pressure of gases’. 
So with Psillos, Achinstein, Chalmers, and van Fraassen we have differing interpretations of the sense in which Perrin is arguing robustly for the accuracy of an experimentally derived value for N, in so far as they differ from one another regarding what the relevant differing and converging experimental strategies are (van Fraassen’s view here is somewhat unclear, but his interpretation is arguably closer to Psillos’). Additionally, Psillos and Achinstein have rightly emphasized the epistemic priority of Perrin’s vertical distribution experiments. Unfortunately, they confuse the issue by amplifying the atomic hypothesis to include a hypothesis about the value of Avogadro’s number. Chalmers, on the other hand, misinterprets the significance of the vertical distribution experiments by omitting the crucial analogy to the rarefaction of air at higher altitudes, and so he fails to recognize Perrin’s ([1916]) qualitative argument for the reality of molecules qua Brownian particles. Finally, Psillos, Achinstein, and Chalmers all differ from van Fraassen on whether Perrin’s work supports a realism about molecules. On the realism issue, I have already noted that the debate appears to centre on the strength of the evidence. For example, we saw that this is what separates Chalmers from van Fraassen. It also separates Psillos and Achinstein: Psillos argues that Achinstein’s approach, despite being probabilistic like his own, gives too weak a reading of Perrin’s conclusions. ‘On Achinstein's reconstruction’, Psillos ([2011a], p. 186) comments: […] all Perrin achieved was to show that the atomic hypothesis is more likely than not. This is not a mean feat, of course. But it is hardly sufficient to sway the balance in favor of the atomic hypothesis in the way it actually did. 
By contrast, Psillos thinks Perrin has, in practical terms, provided a crucial experiment in support of the atomic hypothesis, leading to the conclusion that the atomic hypothesis is ‘very probable’ ([2011a], p. 186). Is the probability high enough to ground the claim that we know atoms exist? This is an epistemological question on which there is no clear consensus. Claims can be extremely probable but still not be said to be known, as with the lottery paradox, where a lottery ticket can have a one in a million chance of winning but still one won’t know that it will lose (otherwise, why buy the ticket?). Moreover, it would be too strict, in response, to insist on a conception of knowledge that requires certainty, for then a knowledge of atoms will obviously be lacking. Alternatively, although not known, is the atomic hypothesis nevertheless probable enough for us to simply take it for granted as an assumption in our physical reasoning? Actually, it need not be very probable at all for us to so choose it. It might simply be the case that any alternative hypothesis is much less probable than the atomic hypothesis, or that we are in a situation where we need to formulate some hypothesis or other just to get a research program off the ground, in the hopes of subsequently generating evidence for or against this hypothesis. Finally, sub-microscopic reality might just be that much different from what Perrin, or even we, ever imagined—there might not be atoms at all. In other words, Psillos’ optimism about the strength of Perrin’s evidence appears to overreach. It follows—if the differences between Psillos, Achinstein, Chalmers, and van Fraassen are any guide—that the debate between realists and anti-realists, or between realists of differing kinds, on whether evidence is strong enough to support a claim about an unobservable, such as an atom or molecule, is a debate that can have no clear resolution. 
Intuitions irremediably vary on the issue of how probable a hypothesis has to be for one to properly be a realist about the unobservable entities referred to in this hypothesis. What this means is that we need to reformulate the debate between realists and anti-realists as turning elsewhere than on the matter of whether evidence for a scientific hypothesis is ‘strong enough’, probabilistically speaking—if, that is, we ever want to resolve the debate. 6 Conclusion The reformulation of the realist–anti-realist debate I am suggesting, as motivated by Perrin’s work, focuses on the extent to which a scientist can theoretically develop the picture of an unobservable in a way that ensures the empirical confirmation of these theoretical developments. Clearly, the more theoretical developments there are and the more these developments are empirically confirmed, the better. But how many developments and how much confirmation are needed to sanction a realist attitude towards the entities so described? There is no natural limit here on how much theoretical development is appropriate in adequately comprehending the nature of an unobservable, or an observable for that matter. Objects and their properties may be infinitely rich for exploration. Moreover, there is no natural limit on how much confirmation is needed as regards these theoretical developments. Sometimes theories are not nearly confirmed enough, even though scientists feel comfortable in being realists about the entities referred to by these theories. So when is a realist attitude appropriate? For some subject matter, suppose an unobservable is postulated that explains a set of observable phenomena, and that this unobservable achieves a level of confirmed, theoretical development that outstrips any competing, postulated unobservable. It follows that a fallibilist, realist interpretation is warranted about this entity. This is the sort of situation Perrin finds himself in. 
The atomic hypothesis explains the rarefaction of a gas at increasing vertical heights, similar to how one explains the decreasing density of a uniform emulsion at higher elevations. A key part of this explanation is the demonstration by analogy that molecules exhibit Brownian motion, just as emulsive granules do. From here Perrin derives a standardized value for Avogadro’s number that assists in extending the theory of the atom, extensions that are confirmed by calibration with this standardized value. Comparatively, one finds the alternative to the atomic hypothesis—the thesis of the fundamental continuity of nature—to have practically no confirmed theoretical extensions. Indeed, given the continuity hypothesis, the rarefaction of gases at higher altitudes and the generated value for Avogadro’s number are inexplicable. It follows, for Perrin and his contemporaries, that a realism about atoms and molecules is an immediate and unavoidable conclusion, as there is no competing theory that can match the atomic hypothesis’ wealth of empirically confirmed, descriptive detail. I submit that a similar strategy awaits any scientist who aspires to provide a realist interpretation for a favoured theoretical picture containing unobservables. Acknowledgements An early version of this article was presented at the 2016 annual meeting of the Canadian Society for the History and Philosophy of Science held at the University of Calgary, at which time I received valuable feedback. I also thank two anonymous reviewers for their extensive and detailed comments. References Achinstein, P. [2001]: The Book of Evidence, New York: Oxford University Press. Chalmers, A. [2011]: ‘Drawing Philosophical Lessons from Perrin’s Experiments on Brownian Motion: A Response to van Fraassen’, British Journal for the Philosophy of Science, 62, pp. 711–32. Gouy, L.-G. [1888]: ‘Note sur le mouvement brownien’, Journal de Physique Théorique et Appliquée, 7, pp. 561–64. Perrin, J. 
[1910]: Brownian Movement and Molecular Reality, London: Taylor and Francis. Perrin, J. [1916]: Atoms, New York: D. Van Nostrand. Perrin, J. [1926]: ‘Discontinuous Structure of Matter’, in Nobel Lectures: Physics 1922–1941, New York: Elsevier, pp. 138–64. Poincaré, H. [1913]: Mathematics and Science: Last Essays, New York: Dover. Psillos, S. [2011a]: ‘Making Contact with Molecules: On Perrin and Achinstein’, in Morgan, G. J. (ed.), Philosophy of Science Matters: The Philosophy of Peter Achinstein, Oxford: Oxford University Press, pp. 177–90. Psillos, S. [2011b]: ‘Moving Molecules above the Scientific Horizon: On Perrin’s Case for Realism’, Journal for General Philosophy of Science, 42, pp. 339–63. Psillos, S. [2014]: ‘The View from Within and the View from Above: Looking at van Fraassen’s Perrin’, in Gonzalez, W. J. (ed.), Bas van Fraassen’s Approach to Representation and Models in Science, Dordrecht: Springer, pp. 143–66. Salmon, W. [1984]: Scientific Explanation and the Causal Structure of the World, Princeton, NJ: Princeton University Press. van Fraassen, B. [2007]: ‘From a View of Science to a New Empiricism’, in Monton, B. (ed.), Images of Empiricism: Essays on Science and Stances, Oxford: Oxford University Press, pp. 337–83. van Fraassen, B. [2009]: ‘The Perils of Perrin, in the Hands of Philosophers’, Philosophical Studies, 143, pp. 5–24. © The Author(s) 2017. Published by Oxford University Press on behalf of British Society for the Philosophy of Science. All rights reserved.

For many philosophers and historians of science, Perrin’s reflection on this ‘very remarkable agreement’ is representative of an exemplary form of robustness reasoning that was pivotal early in the twentieth century in establishing beyond doubt the reality of molecules. The views expressed by Henri Poincaré in this regard are typical: The brilliant determinations of the number of atoms computed by Mr. Perrin have completed the triumph of atomism. What makes it all the more convincing are the multiple correspondences between results obtained by entirely different processes […] The atom of the chemist is now a reality. ([1913], p. 90; quoted in Psillos [2011b], p. 355) In this article we attempt to gain some clarity about the nature of Perrin’s reasoning in his Brownian Movement and Molecular Reality ([1910]) and Atoms ([1916]), in which he is led to the conclusion that molecules exist. There are indications in Peter Achinstein ([2001]) and Stathis Psillos ([2011a], [2011b], [2014]) that Perrin is engaged in a different form of reasoning, though neither philosopher is able to fully detach himself from the view that Perrin is reasoning robustly. By comparison, Alan Chalmers ([2011]) fully endorses the sort of robustness interpretation suggested by Poincaré. Drawing on insights presented by Achinstein and Psillos, I argue that Perrin’s form of argument is, to begin with, qualitative. In essence, Perrin creates a model of a gas using an emulsion containing small particles exhibiting Brownian motion. If this model is roughly accurate, then just as the density of a gas diminishes exponentially at higher altitudes, so should the density of an emulsion diminish exponentially—which is what we find. Hence, by this qualitative comparison of vertical distributions in a gas and in an emulsion, Perrin concludes that molecules in a gas are real and exhibit Brownian motion, just as emulsive granules exist and exhibit Brownian motion. 
Perrin also uses a quantitative form of argument that focuses on the specific value of N, Avogadro’s number, and what an accurate determination of this number means for the claim that atoms and molecules are real. Psillos and Achinstein interpret this argument as a robustness argument on behalf of an enhanced atomic hypothesis, one that incorporates explicitly a statement of the value of Avogadro’s number. However, as I argue, Perrin’s quantitative argument should not be viewed as a robustness argument but rather as involving a different form of reasoning that I call ‘calibration’, the use of which I explain. Subsequent to this review of the form of Perrin’s reasoning, I turn to the question whether Perrin’s work, construed in the alternative way I have suggested, provides a basis for realism about atoms and molecules. I argue that it does, in contrast with Bas van Fraassen’s ([2009], p. 23) contention that Perrin’s achievement can be understood as simply providing ‘empirical grounding for all [the] significant parameters’ of a kinetic theory of atoms, and not necessarily as leading to a realist attitude about molecules. Here, my argument for realism about atoms and molecules, as based on Perrin’s work, is distinguishable from the sort of realist approaches advanced by Psillos, Achinstein, and Chalmers. For each of these three, realism about molecules (à la Perrin) is justified on the grounds that Perrin’s experimental strategies provide evidence strong enough to compel belief in the reality of molecules. As I see it, such a ‘strong enough’ argument for realism is ultimately ineffective since anti-realists such as van Fraassen can always legitimately ask for more proof when dealing with unobservables, given the inherent limitations on inductive, ampliative reasoning. By contrast, my argument for realism de-emphasizes the issue of how strong evidence for a theoretical claim needs to be, where strength is interpreted in probabilistic terms. 
It looks instead to the prospects for a theoretical development of one’s understanding of an unobservable. In short, to be a realist about an unobservable one must have a suitably detailed (and of course, empirically supported) theoretical picture of this entity, a picture that is substantive enough to give one a clear idea about the content of one’s realist convictions. In this sense, Perrin’s contribution was to provide empirical grounding for a detailed theoretical picture of the properties of molecules. As I argue, this detail made a conviction in molecules legitimate, and in the absence of a similarly detailed and confirmed picture of the structure of the world as fundamentally continuous, a realist interpretation of atoms and molecules, one where nature is essentially discontinuous, acquires ‘a probability bordering on certainty’. 2 Perrin’s Reasoning: The Qualitative Argument In the quote above from Perrin’s Atoms, it is claimed that arriving at a univocal value for Avogadro’s number is connected to a belief in the reality of molecules, but the connection here is unclear. Avogadro’s hypothesis is the thesis that ‘equal volumes of different gases, under the same conditions of temperature and pressure, contain equal numbers of molecules’ (Perrin [1916], p. 18). The surprising aspect of Avogadro’s hypothesis is the implication that two gases containing the same number of molecules under similar conditions will exert the same pressure, regardless of the size or mass of the molecules making up the gas. From here we need to standardize the notion of the number of molecules in a gas. This leads to Avogadro’s number, abbreviated N, which for Perrin is the number of molecules in 32 g of oxygen (what Perrin calls a ‘gramme molecule’, or what we call a ‘mole’, of oxygen). By Avogadro’s hypothesis, a mole of any gas exerts the same pressure as a mole of any other gas, under similar temperatures and volumes. 
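Avogadro’s hypothesis, as stated here, has a simple quantitative face: by the ideal gas law, the pressure exerted by one gram molecule of any (ideal) gas depends only on temperature and volume, never on the identity or mass of its molecules. A minimal sketch of this point (my illustration, not anything in Perrin):

```python
# Minimal illustration (not from Perrin): by the ideal gas law PV = RT for
# one gram molecule (mole), pressure depends only on temperature and volume.
R = 8.314  # gas constant, J/(mol·K)

def pressure_of_one_mole(temperature_k: float, volume_m3: float) -> float:
    """Pressure (Pa) of one mole of an ideal gas; the same for every gas."""
    return R * temperature_k / volume_m3

# One mole of oxygen (32 g) and one mole of hydrogen (2 g), each in 22.4 L
# at 0 °C, exert the same pressure: roughly one atmosphere.
print(pressure_of_one_mole(273.15, 0.0224))  # roughly 101325 Pa
```

The function takes no argument identifying the gas: that is the surprising content of Avogadro’s hypothesis.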
So, what’s the relevance of N to the truth or falsity of the claim that atoms and molecules exist? Perrin attempts a number of experimental determinations of N, as noted above. However, one particular method stands out for Perrin as the most reliable: microscopic observations of the vertical displacement of small particles suspended in a liquid (the resultant liquid is called an emulsion), where such particles exhibit Brownian motion. Why does Perrin emphasize this approach? Brownian motion is the continual, random motion of particles suspended in a fluid, analogous (though not identical) to the motion of dust particles seen suspended in sunlight. Brownian particles move irregularly, unconnected to one another, and in Perrin’s time it was well known that these movements have no external explanation, such as environmental vibrations or thermal convection currents. It was hypothesized by Louis-Georges Gouy that Brownian motion could be caused by the irregular motions of the molecules in the fluid (see Perrin [1910], p. 5, referring to Gouy [1888]). Overall, Perrin ([1910], p. 7) finds Gouy’s explanation convincing, but claims we need a more ‘direct’ proof of molecules: Instead of taking this hypothesis ready made and seeing how it renders account of the Brownian movement, it appears preferable to me to show that, possibly, it is logically suggested by this phenomenon alone, and this is what I propose to try. Again, he says, However seductive the hypothesis may be that finds the origin of the Brownian movement in the agitation of the molecules, it is nevertheless a hypothesis only […] I have attempted […] to subject the question to a definite experimental test that will enable us to verify the molecular hypothesis as a whole. ([1916], p. 
88) What Perrin is suggesting is that an inference to the best explanation argument on behalf of the reality of molecules—however ‘seductive’—is not logically direct enough, and he proposes instead the following experimental strategy. To begin, he cites common knowledge that gas in a vertical column is more rarefied higher in the column than lower down, and notes that the rate of rarefaction proceeds exponentially according to the following equation. Following Perrin ([1916], p. 90), where p is the pressure of a gas at a lower elevation, p′ is the pressure at a higher elevation, M is the mass of a gram molecule of the gas, g is the acceleration due to gravity, h is the difference in elevation, R is the gas constant, and T the absolute temperature, p′ = p(1 − (M·g·h)/RT). (1) Rewriting Equation (1) in terms of the numbers of molecules at the two levels (n and n′), we have: n′ = n(1 − (N·m·g·h)/RT), (2) where m is the mass of a molecule of gas and N is Avogadro’s number (from Achinstein [2001], p. 245; Perrin assumes, but never explicitly formulates, Equation (2)). Perrin then sets up a model of an atmosphere using uniform emulsions (that is, where the suspended particles are uniform in size), one where the component particles are observed to exhibit Brownian motion. At this stage, Perrin makes a crucial assumption—we call it the ‘kinetic model’—to view the movements of the particles in an emulsion as though they were large-sized molecules. He says, ‘It appeared to me at first intuitively, that the granules of such an emulsion should distribute themselves as a function of the height in the same manner as the molecules of a gas under the influence of gravity’ ([1910], p. 23), and, in a series of rhetorical questions, Is it not conceivable, therefore, that there may be no limit to the size of the atomic assemblages that obey [the gas] laws? 
Is it not conceivable that even visible particles might still obey them accurately, so that a granule agitated by the Brownian movement would count neither more nor less than an ordinary molecule with respect to the effect of its impact upon a partition that stops it? In short, is it impossible to suppose that the laws of perfect gases may be applicable even to emulsions composed of visible particles? ([1916], p. 89) On the basis of this model, Perrin ([1916], p. 89) identifies what he calls ‘crucial experiments that should provide a solid experimental basis from which to attack or defend the Kinetic Theory’, the theory that gases are composed of molecules, and that these molecules exhibit Brownian motion. In one such crucial experiment, Perrin suggests that with an emulsion we can calculate theoretically, using his kinetic model, what the vertical distribution of the emulsive particles should be, analogously to how we can calculate the vertical distribution of molecules in a gas at differing heights. The resultant equation expressing this distribution is as follows, where we are considering the number of particles n in an emulsion at a lower elevation as compared to the number n′ at a higher elevation, and where we take into account the buoyancy of the liquid constituting the emulsion by means of the factor (1 − d/D), with d standing for the density of the liquid and D the density of the emulsive particles: n′ = n(1 − (N·m·g·h(1 − d/D))/RT), (3) where m is the mass of each particle, assumed to be uniform in size, and N is (again) Avogadro’s number (Perrin [1916], p. 93). In general terms, what Equation (3) is telling us is that in emulsions—if the kinetic model is correct—we should observe at increased elevations an exponential decline in (osmotic) pressure, or in the numbers of granules, just as we find such an exponential decrease in pressure, or in the numbers of molecules, with gases. As Perrin ([1910], p. 
41) says, ‘if our kinetic theory is exact, this [vertical] distribution [in the emulsion] will change from the time the preparation is left at rest, will attain a limiting state, and in this state the concentration will decrease in an exponential manner as a function of the height’. On the other hand, if we don’t find in emulsions an exponential decline in pressure or in the numbers of particles, then either the kinetic model is mistaken—that is, one should not treat granules in an emulsion as large-sized molecules—or, if the kinetic model is retained, then since the Brownian motion of emulsive particles does not support an exponential decline in the osmotic pressure of an emulsion, neither does the Brownian motion of molecules support or explain the exponential decline of gas pressure at higher altitudes. It is worth noting here that Perrin ([1910], p. 24) provides a different equation representing the state of an emulsion (see Psillos [2011a], p. 180, [2011b], p. 353, [2014], p. 152, as well as Chalmers [2011]): where W is the mean granular energy, ϕ the volume of each granule, Δ the density of each granule, δ the density of the intergranular liquid, g the gravitational acceleration, and n and n₀, respectively, the number of granules per unit volume at two vertical levels separated by height h, we have (2/3)·W·log(n₀/n) = ϕ(Δ − δ)·g·h. (4) This equation Perrin calls the ‘equation of distribution of the emulsion’, and of particular note is its utilization of the variable W, the mean kinetic energy of the emulsive particles. As with Equation (3), Equation (4) is telling us—again, if the kinetic model is correct—that at higher altitudes there should be a corresponding exponential decrease in the concentration (or number) of emulsive granules, an exponential decrease that, Perrin ([1910], p. 24, footnote) says, ‘is a necessary consequence of the equipartition of energy’. 
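The claim that the exponential decrease is ‘a necessary consequence of the equipartition of energy’ can be made explicit. The following derivation is my own reconstruction (it does not appear in the text as quoted), connecting Equation (4) to the exponential law and, to first order in h, to Equation (3):

```latex
% Reconstruction (mine, not Perrin's): from Equation (4) to the
% exponential law, and to Equation (3) to first order in h.
\begin{align*}
  \tfrac{2}{3}\,W \ln\!\frac{n_0}{n} &= \phi(\Delta-\delta)\,g\,h
    && \text{Equation (4), with the log read as } \ln\\
  n &= n_0 \exp\!\left(-\frac{3\,\phi(\Delta-\delta)\,g\,h}{2W}\right)
    && \text{exponential decline with height}
\end{align*}
Setting $W = \tfrac{3}{2}\,RT/N$ (equipartition: the mean granular energy
equals the mean molecular energy) and noting that $\phi\Delta = m$ and
$\delta/\Delta = d/D$ in the notation of Equation (3):
\begin{equation*}
  n = n_0 \exp\!\left(-\frac{N m g h (1 - d/D)}{RT}\right)
    \approx n_0\left(1 - \frac{N m g h (1 - d/D)}{RT}\right)
\end{equation*}
```

which recovers Equation (3) to first order in h, with n₀ the count at the lower level.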
He thus recommends not only that we check for an exponential decline in granular density, but also that we calculate the corresponding value of W at the different levels to see whether this value ‘is the same as that which has been approximately assigned to the molecular energy’ ([1910], p. 24), that is, in accordance with equipartition. Perrin’s argument in this context crucially relies on the principle of equipartition. If the principle of equipartition is true, then, as Perrin ([1910], p. 19) points out, the mean translational kinetic energy of the molecules of a gas or a liquid is entirely dependent on the temperature of the gas or liquid. But if the Brownian motion of granules is entirely due to the impact of the molecules of the surrounding emulsion, then by the principle of equipartition the mean translational kinetic energy of the granules should be the same as the mean translational kinetic energy of the molecules of the emulsion, indeed, of ‘any molecule at the same temperature’ ([1910], p. 44). So if one can measure the mean translational kinetic energy of the granules, and show that it is the same as the mean translational kinetic energy of the molecules of the emulsion, we have generated good evidence that Brownian motion is caused by the impact of the molecules of the emulsion since, if Brownian motion had some other cause, one would not expect a priori that the mean translational kinetic energy of the granules would match that of the molecules of the emulsion. (Here I am indebted to an anonymous referee who correctly notes the differences in the analytical approaches to the vertical distribution experiments taken by Perrin in his ([1910], [1916]).) This ‘equipartition’ approach to analysing the vertical distribution experiments leads to a different understanding of what crucial experiment Perrin sees himself providing. 
Rather than focusing on whether the vertical distribution of particles in an emulsion shows the same exponential shape as the vertical distribution of molecules in a gas, we instead determine whether the measured, mean translational kinetic energy of granules in an emulsion is the same as the mean translational kinetic energy of ‘any molecule at the same temperature’ (Perrin [1910], p. 44). If they are the same, Perrin ([1910], p. 24) asserts that ‘the origins of the Brownian movement will be established’, and we can use our measurements with emulsive granules to provide a means to determine the various molecular magnitudes; conversely, if they are different, ‘the credit of the kinetic [theory] will be weakened and the origin of the Brownian movement remains undiscovered’ ([1910], p. 21). As it turns out, after completing his experiments with emulsions, Perrin finds that the vertical distributions with emulsions do exhibit an exponential decline, and that the translational kinetic energies of the granules and molecules are the same, thus upholding the validity of the kinetic model, on which emulsive granules are viewed as large-sized molecules. From here, the nature of Perrin’s reasoning is initially qualitative: he finds the vertical distribution of particles in an emulsion to exhibit an exponential distribution that matches in shape the exponential distribution law found with gases. ‘This [shape] is just what experiment verifies’, he comments ([1910], p. 41)—an ‘exponential diminution is manifest’ ([1910], p. 44; see also [1916], p. 103). But he also has a quantitative result. Having confirmed the kinetic model, Perrin ([1910], p. 44) suggests that Equation (4) provides us with a ‘well-defined value of the granular energy W’, one that does ‘not depend on the emulsion chosen, and will be equal to the mean energy w of any molecule at the same temperature’, as required by the equipartition of energy. 
‘Or, what comes to the same’, he suggests, where R is the gas constant and T is the absolute temperature, ‘the value N′ of the expression 3RT/2W will be independent of the radius and density of the granules studied and will be equal to the expression 3RT/2w, that is, […] to Avogadro's constant N’ ([1910], p. 44). Perrin arrives at a similar conclusion in his ([1916]), using instead Equation (3): since with Equation (3) (but not with Equation (2)) we can experimentally measure all the variables except for Avogadro’s number N, this allows us to calculate N. For these calculations, Perrin varies somewhat the conditions of the experiment, employing emulsions with (i) different sizes of emulsive grains (from 0.14 to 6 microns), (ii) different intergranular liquids (water, sugary water, and glycerol), (iii) different temperatures for the intergranular liquid (−9°C to 60°C), and (iv) different kinds of emulsive grains (gamboge and mastic) (see, for example, Perrin [1910], p. 45, [1916], pp. 103–4). With these varying methods, Perrin arrives at the following value for N ([1910], p. 46, [1916], p. 105, respectively): N = 70 · 10^22, or 65 · 10^22 < N < 72 · 10^22. What does Perrin infer from these results? Here, his analysis in ([1910]) diverges from the analysis he gives in ([1916]). In the former he emphasizes the agreement he finds between W (the granular energy) and the corresponding value for w (the molecular energy) at the same temperature. He comments: ‘I do not think this agreement can leave any doubt as to the origin of the Brownian movement’ ([1910], p. 46). By comparison, in his ([1916]), he emphasizes the agreement between the value he calculates for N and the corresponding estimate for N he calculates based on the use of van der Waals’ equation in representing the viscosity of gases, generating a value of N = 62 · 10^22 (see Perrin [1916], p. 105). ‘Such decisive agreement’, he says, ‘can leave no doubt as to the origin of the Brownian movement’ ([1916], p. 105). 
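The sedimentation-equilibrium law behind these determinations of N is standardly written as follows (a textbook form; whether it corresponds exactly to the article's Equation (3) cannot be confirmed from this excerpt):

```latex
\frac{RT}{N}\,\ln\frac{n_{0}}{n}
\;=\; \frac{4}{3}\pi a^{3}\,(\rho - \rho')\,g\,h
\qquad\Longrightarrow\qquad
N \;=\; \frac{3\,RT\,\ln(n_{0}/n)}{4\pi a^{3}(\rho - \rho')\,g\,h}
```

where n₀ and n are the granule concentrations at two levels separated by a height h, a is the granule radius, ρ and ρ′ are the densities of the granules and of the intergranular liquid, and g is the gravitational acceleration. Every quantity on the right-hand side is measurable with a suitable emulsion, which is why the experiment yields a value for N.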
From here, Perrin argues in similar fashion in both ([1910]) and ([1916]) in a key passage effectively highlighted by Psillos ([2011a], pp. 181–2, [2011b], p. 355, [2014], pp. 154–5). Here is the passage, taken from (Perrin [1910], p. 46); a practically identical passage occurs in (Perrin [1916], p. 105): To understand how striking this [agreement] is, it is necessary to reflect that, before this experiment, no one would have dared to assert that the fall of concentration would not be negligible in the minute height of some microns, or that, on the contrary, no one would have dared to assert that all the granules would not finally arrive at the immediate vicinity of the bottom of the vessel. The first hypothesis would lead to the value zero for [N] while the second would lead to the value infinity. That, in the immense interval which a priori seems possible for [N], the number found should fall precisely on a value so near to the value predicted, certainly cannot be considered as the result of chance. This passage contains two arguments, the first drawing upon the qualitative fact noted above, and the second focusing on the quantitative measurement of N. In more detail, the qualitative argument is as follows: With an emulsion, Perrin thinks we would expect, a priori, either that the granules would be evenly distributed throughout the fluid (corresponding to N = 0) or that they would all sink to the bottom (corresponding to N = ∞). Both of these options correspond to the hypothesis of continuity since if N = 0, there are no atoms, and if N = ∞, a collection of atoms is indistinguishable from an infinitely divisible fluid. But what we find empirically is that 0 < N < ∞, in other words, that the hypothesis of discontinuity is true—there are discrete atoms whose size is ascertained once we have calculated the value of N. Psillos ([2014], p. 
155) recognizes the importance of this qualitative argument: ‘the exponential distribution of the Brownian particles is enough to discredit the hypothesis that matter is continuous’. But note further, as Psillos indicates, the particles in the emulsion exhibit Brownian motion, and if we are granting the kinetic model, this means that the molecules in a gas exhibit Brownian motion as well, producing an exponential distribution. Thus, we are able to explain the Brownian motion of microscopic particles as being due to the (Brownian) movements of molecules, if the analogy between emulsions and gases holds up. We have, as Perrin promised, a more direct proof of the Brownian motion of molecules than the one offered by Léon Gouy, where the hypothesis that molecules exhibit Brownian motion is not much more than an explanatory speculation. At this stage, we might reflect on how far Perrin thinks he has gotten in providing a molecular explanation of Brownian movement. At the very least, the molecular theory of Brownian movement identifies the motion of molecules as the cause of the Brownian movement of microscopic particles. But arguably Perrin shows more than this—that molecules themselves exhibit Brownian movement. If the kinetic model is true—if, as Perrin believes, we should view the granules in emulsions as though they are themselves large-sized molecules—then it follows that molecules themselves exhibit Brownian movement, as the granules are observed to do. Moreover, we have a better, causal explanation of the observed Brownian movement of micro-particles, if molecules themselves exhibit Brownian movement. There is textual evidence that Perrin does go the extra step of ascribing Brownian movement to molecules. For instance, as he comments in his ([1910], p. 46), ‘Brownian movement offers us, on a different scale, [a] faithful picture of the movements possessed, for example, by the molecules of oxygen dissolved in the water of a lake’. 
Also, after noting with his experiments that ‘molecular movement has not [strictly speaking] been made visible’—that is, granules in emulsions are viewed only as though they are themselves large-sized molecules—Perrin ([1916], p. 105) nevertheless maintains that ‘the Brownian movement is a faithful reflection of [molecular movement], or, better, it is a molecular movement in itself, in the same sense that the infra-red is still light’. In other words, infrared light, though invisible, shares the same properties as visible light, just as molecules, though invisible, share the same properties as visible granules (that is, they exhibit Brownian movement). Overall, by demonstrating that molecules exhibit Brownian movement, as modelled by granules in a uniform emulsion, Perrin believes himself to have provided a more ‘direct’ proof of molecules than what had been suggested by Gouy’s explanatory approach. 3 Perrin’s Reasoning: The Quantitative Argument The quantitative argument, on the other hand, involves taking into account the values for N we saw Perrin calculating above by reference to vertical distributions of granules in emulsions—again, N = 70 · 10^22 ([1910], p. 46), or 65 · 10^22 < N < 72 · 10^22 ([1916], p. 105)—and comparing these values to 62 · 10^22, the value of N generated using van der Waals’ equation (see Perrin [1910], p. 18, [1916], p. 105). Recall that Perrin remarks on the surprising convergence of these values. But the weight he puts on the value of N as derived with van der Waals’ equation is somewhat surprising since applying this equation has for him an element of uncertainty. Whereas with the vertical distribution experiments ‘the mean departure [(that is, error)] does not exceed 15%’, he notes that ‘the number given by the equation of Van der Waals does not allow of this degree of accuracy’ (Perrin [1910], p. 46). 
Indeed, Perrin works extensively on generating values of N using his vertical distribution experiments that he regards as the most accurate: these values are, from (Perrin [1910], p. 48), using gamboge grains with a radius of 0.212 microns, N = 70.5 · 10^22, and from (Perrin [1916], p. 107), using (presumably) gamboge grains with a radius of 0.367 microns, N = 68.2 · 10^22. This latter number he regards as the most accurate, better than his best ([1910]) number. Thus, the vertical distribution experiments not only serve to establish the reality of molecules by means of a qualitative argument. They also provide a means to generate the most accurate value for N. How do we know that, for Perrin, the vertical distribution experiments provide the most accurate values for N? First of all, as we shall see, these values for N are subsequently cited by Perrin to ‘confirm’ or ‘verify’ the validity of other approaches that generate a value for N utilizing different areas of physical theory, such as the displacement of Brownian particles, the tendency of molecules to diffract sunlight in the upper atmosphere, the capacity of a molecule to carry a charge (qua ion) during the ionization of a gas, and so on. But perhaps more directly, Perrin is clear in his ([1910]) and ([1916]) that these values are preferred. In his ([1910], p. 47), the value for N of 70.5 · 10^22 is cited under the section heading, ‘Precise determination of Avogadro’s constant’. In his ([1916], p. 107), the value of N = 68.2 · 10^22 is cited under the section heading, ‘Exact determination of the Molecular Magnitudes’, where Perrin comments, ‘by studying emulsions [and generating the value of 68.2 · 10^22 for N] we are really able to weigh the atoms and not merely to estimate their weights approximately’. But does using the vertical distribution experiments to derive the preferred value for N create the basis for an improved argument for the reality of molecules? 
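As a point of comparison not drawn from the article: one can check how Perrin's reported values sit against the modern (CODATA) value of Avogadro's number, 6.02214076 × 10^23 per mole. A minimal sketch:

```python
# Illustrative comparison (mine, not Perrin's or the article's): relative
# deviation of Perrin's reported values for Avogadro's number from the
# modern CODATA value.
MODERN_N = 6.02214076e23  # Avogadro's number, per mole (CODATA 2018)

perrin_values = {
    "vertical distribution, best 1910 value": 70.5e22,
    "vertical distribution, best 1916 value": 68.2e22,
    "van der Waals' equation (1916 citation)": 62e22,
}

for label, n in perrin_values.items():
    deviation = 100 * (n - MODERN_N) / MODERN_N
    print(f"{label}: N = {n:.3g}, deviation = {deviation:+.1f}%")
```

By this modern yardstick Perrin's preferred emulsion values run roughly 13–17% high, of the same order as the 15% ‘mean departure’ he himself allows for the method.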
This is, once more, the question we posed at the beginning of the article concerning the relevance of the value of N to the truth or falsity of the claim that atoms and molecules exist. Psillos and Achinstein both think that the most accurate value of N Perrin arrives at—68.2 · 10^22—serves to support the atomic hypothesis. How does this work? Psillos and Achinstein define the ‘atomic hypothesis’ in a similar, idiosyncratic way. According to Achinstein ([2001], p. 257), the ‘atomic hypothesis’ states that ‘chemical substances are composed of molecules, the number N of which in a gram molecular weight of any substance is (approximately) 6 × 10^23’. Psillos ([2011b], p. 187) suggests parenthetically that his definition of the atomic hypothesis is ‘roughly equivalent to Achinstein’s’, and indeed his Bayesian calculations are incomprehensible unless atomism is defined along these lines. Clearly, then, if we define the atomic hypothesis as not only requiring the fundamental discontinuity of matter, but as also including a specific value for N, it follows that generating any other value for N by means of experiment will disconfirm the atomic hypothesis. Conversely, generating a value for N cohering with this value—such as Perrin’s best, experimental value, N = 68.2 · 10^22—confirms this atomic hypothesis, and does so uniquely. For Psillos, this is the sense in which Perrin believes himself to have subjected the atomic hypothesis to a crucial test and has made it ‘difficult to deny the objective reality of molecules’ ([1910], p. 89; similar comments are made in Perrin [1916], p. 105). Where Psillos differs from Achinstein is in emphasizing that Perrin has done more than just provide evidence for the atomic hypothesis. Rather, Perrin (for Psillos) has generated a definitive, successful test of the atomic hypothesis (as including a value for N), one that for Perrin’s contemporaries, such as Poincaré, was pivotal in establishing beyond doubt the reality of molecules. 
So with the quantitative argument we have an understanding of the relevance of N to the claim that molecules are real, as the atomic hypothesis (for Psillos and Achinstein) itself embeds an assumption about the value of N, and this hypothesis is confirmed if the available empirical data attest to this value of N. Moreover, we now see the relevance of robustness reasoning to the atomic hypothesis since, purportedly, it is Perrin’s strategy to validate his experimentally derived value for N by noting how this value is generated by means of a variety of experimental methods, methods dealing with the viscosity of gases, Brownian motion, critical opalescence, blackbody radiation, various radioactive phenomena, and so on. As Perrin ([1916], p. 207) says, because ‘the numbers thus established […] agree among themselves, without discrepancy, for all the methods employed, the real existence of the molecule is given a probability bordering on certainty’. So what is the status of this robustness argument? Does it validate Perrin’s experimentally generated value for N? In a footnote in his ([2011b]), Psillos makes an important observation that, in fact, casts doubt on the value of Perrin’s purported robustness argument for the accuracy of N, apparently without realizing that this observation undermines his own endorsement of the value of robustness reasoning. Wesley Salmon, Psillos notes, had interpreted Perrin as engaging in a form of common cause reasoning in which, to explain the convergence of values for N, one cites the reality of molecules as the common cause (see Salmon [1984]). But there is a problem, Psillos thinks, with interpreting Perrin in this way, since most of the diverse ways of calculating N were known to Perrin prior to his own experiments with emulsions. Thus, such a common cause—that is, robustness reasoning—approach does not, Psillos ([2011b], p. 
358) thinks, ‘adequately explain Perrin’s own contribution to the confirmation of the atomic hypothesis’, a contribution that apparently put our confidence in the reality of molecules ‘beyond doubt’. If all Perrin did was add a few more experimental methods that, yet again, converged on the same value for N, it’s not clear why the already numerous convergent methods were not enough, nor why further methods, beyond what Perrin provided, would not also be needed. At this stage, Psillos adds a second criticism of Salmon’s common cause argument for the reality of molecules, which also counts as a critique of robustness reasoning. When one notes the convergence of the values of N and explains this convergence by citing the reality of molecules, or more precisely, the reality of molecules in an amount prescribed by Avogadro’s number, one does not thereby necessarily say anything about what molecules are or what properties they have. As Psillos ([2011b], p. 358) says, ‘it is not clear in what sense exactly the molecules are the common cause of the agreement perceived in the various calculations of Avogadro’s number’. A variety of causes could be postulated as leading to this convergent set of values for N. N could simply stand as the number of very small, coherent, though divisible components of a substance, perhaps something like emulsive granules though at a sub-microscopic level. Here it would help if we could rely on some pre-experimental, truthful presupposition about what exactly N stands for. But, as van Fraassen ([2009], p. 23) points out, Perrin was working in the context of classical kinetic theory with, from our perspective, a somewhat impoverished view of atomic reality. Referencing Dalton, Perrin ([1916], p. 
10) describes atoms as ‘a fixed species of particles, all absolutely identical; these particles pass, without ever becoming subdivided, through the various chemical and physical transformations that we are able to bring about, and [are] indivisible by means of such changes’. Perrin makes similar comments in his 1926 Nobel Lecture. Speaking about ‘the simple substance hydrogen [which] can be regained and which passes, disguised but indestructible, through our various reactions’, he says: Dalton supposed, and this is the essential point of the atomic hypothesis, that this substance is formed by a definite variety of particles which are all identical and which cannot be cut into pieces in the reactions which we can produce, and which for this reason are called atoms. (Perrin [1926], pp. 140–1) To be sure, Perrin has a more nuanced understanding of atoms in which atoms are identified as having internal parts: in his 1926 Nobel Lecture, he remarks that an atom has a discontinuous structure, modelled on a planetary system (Perrin [1926], p. 163). Thus his citation of a Daltonian perspective should not be over-emphasized. The key point to keep in mind, in any event, is that common cause, robustness reasoning, if not simply uninformative about the reality of the entity being argued for, could be positively misleading if it serves to vindicate the wayward, theoretical presuppositions of an experimenter. There is nothing in the sort of common cause, robustness argument that Salmon ascribes to Perrin that guards us against such a potential bias. The problem we are citing with robustness reasoning, that of not providing sufficient information about the nature or properties of inferred entities, is critical where we are dealing with unobservables, such as atoms and molecules. Does robustness reasoning work, alternatively, with observables? Take a simple case where someone is holding a pencil and asks different observers what they see. 
Suppose Observer 1 sees the pencil in clear view and asserts that it is a pencil. Should Observer 1’s confidence that he is really seeing a pencil be increased by Observer 2’s assent that it is a pencil as well? Not at all. Should Observer 2 deny that it is a pencil, and assert instead that it is a rabbit, Observer 1 would just dismiss Observer 2’s observation as delusional. That is, Observer 2’s concurrence on the matter, that what is observed is a pencil, is irrelevant. But what if Observer 2’s denial is more subtle, that what is seen is not a pencil but a pen that looks like a pencil? Then Observer 1 could take a closer look and either continue to affirm that it is a pencil, in which case Observer 2’s dissent is again dismissed as delusional, or change his view to it being a pen, as revealed by his actual inspection. In such a case, Observer 2’s concurrence again becomes irrelevant, given Observer 1’s better and more direct evidence. Or perhaps, even more subtly, it may be difficult to tell whether it is a pen or a (mechanical) pencil. Here, if Observer 1 is still confident in distinguishing between pens and (mechanical) pencils, Observer 2’s concurrence or lack thereof is once more irrelevant. But what if Observer 1 is less than confident about what he sees, perhaps because he is unable to have a second, closer look at the item? In this case, he may defer to Observer 2’s expertise on the matter, in which case Observer 2’s testimony becomes authoritative and Observer 1’s opinion becomes irrelevant. Finally, suppose both Observers 1 and 2 are uncertain about what they see. Would a convergence in opinion matter here? Not necessarily. The Observers may suspect that they are suffering from the same lack of information, or the same delusion. The point to be made here is that the bare convergence of opinion isn’t, ultimately, what matters in validating one’s beliefs about what one observes. 
What matters is which observational process is more reliable, or who is the more reliable observer. In other words, robustness reasoning, where we are considering observables, seems to be irrelevant as regards the accuracy of observation reports, to go along with its lack of informativeness when it comes to unobservables. We have said that with observables, there are ways to gain knowledge about them independently of robustness reasoning by relying on reliable processes or identifying reliable observers. What then about unobservables? How can we learn about them without the use of reliable observation, and without the option of robustness reasoning? This is the situation with Avogadro’s number. Though one cannot observe the value of N directly, one can with Perrin’s vertical distribution experiment using emulsions indirectly determine N from measurable parameters, such as the numbers of emulsive particles at various heights, the density of the liquid containing emulsive particles, the density of the emulsive particles themselves, and so on. The success of this approach to measuring N will depend on the reliability of Perrin’s experimental setup and the accuracy of the assumptions he makes in interpreting this setup. From here we are led to what now has the appearance of a robustness argument, one that compares the value of N as derived from the most reliable version of the vertical distribution experiment with the value of N as derived by diverse, other experimental strategies. It is an argument that is not entirely uninformative since each of these diverse strategies teaches us different things about molecules, more than that they obey Avogadro’s hypothesis and achieve vertical densities as prescribed by Avogadro’s number. Let us look at two examples of how this new information is produced. 
First, Perrin is aware that Einstein, using Maxwell’s distribution law for molecular speeds and Stokes’ law, had previously formulated an equation relating the mean square displacement of emulsive, Brownian particles to the time elapsed, the absolute temperature, the radius of an emulsive particle, the viscosity of the fluid, the gas constant and N, Avogadro’s number. Call this ‘Einstein’s displacement equation’. By means of a suitable emulsion Perrin could measure all the quantities except N, and so could calculate N. Doing this revealed for Perrin a value for N consistent with the most accurate value of N generated using his vertical distribution experiments. With this convergence of values of N, first by examining vertical distributions of emulsive particles and second by measuring the displacements of these emulsive particles, we then appear to learn something informative about molecules, that not only do they satisfy Avogadro’s hypothesis and Avogadro’s number, but they also move in accordance with Einstein’s displacement equation. A second example makes reference to Lord Rayleigh’s explanation of the blueness of the sky. Rayleigh theorized that the blueness of the sky is a result of the diffraction of sunlight by the molecules in the upper atmosphere. To this end Rayleigh formulated an equation that accounts for this blueness, and for which all the variables are measurable except for N, Avogadro’s number. As a result, it was possible to collect spectro-photometric data to arrive at a value for N, using different sorts of data than those used by Perrin in his vertical distribution experiments. As such, using the spectro-photometric data generated by M. Sella on Monte Rosa, Perrin derived a value for N ‘comprised within 30 · 10^22 and 150 · 10^22’ (Perrin [1910], p. 79), and on the basis of a ‘long series of measurements [made] on Monte Rosa by M. Leon Brillouin’, calculates N to be 60 · 10^22 (Perrin [1916], p. 142). 
These values for N converge on the preferred values generated by the vertical distribution experiments, so here again we have learned something informative about molecules, that they diffract sunlight in a way to cause the apparent blueness of the sky in accordance with Rayleigh’s theoretical model. Now, despite these examples having the appearance of components in a robustness argument, they are actually doing something quite different. Perrin does not view either method of calculating N, either using Einstein’s displacement equation or Rayleigh’s diffraction model, as epistemically on a par with his vertical distribution experiments. Rather, Perrin believes he has confirmed Einstein’s displacement equation since by applying this equation to emulsions, he generates the value N = 70 · 10^22, which coheres with his preferred value for N produced in the vertical distribution experiments (consider, for example, the title of (Perrin [1910], Section 29): ‘Experimental confirmation of Einstein's theory’). Again, in his ([1916], p. 123), Perrin states that the ‘remarkable agreement [between the two values of N, one produced by the vertical distribution experiments and the other generated by an application of Einstein’s displacement equation] proves the rigorous accuracy of Einstein's formula’. Similarly with Lord Rayleigh’s explanation of the blueness of the sky, Perrin believes he has confirmed Rayleigh’s theory since the value for N found using Sella’s data is between 30 · 10^22 and 150 · 10^22, and so Perrin maintains, ‘in so far as the order of magnitude is concerned, [the] very interesting theory of Lord Rayleigh is verified’ ([1910], p. 79; Perrin arrives at a similar conclusion using Brillouin’s results—see [1916], p. 142). 
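Einstein's displacement equation, discussed above but not written out in the article, takes the following standard form (my transcription of the well-known 1905 result):

```latex
\overline{x^{2}} \;=\; \frac{RT}{N}\cdot\frac{t}{3\pi\eta a}
```

where \(\overline{x^{2}}\) is the mean square displacement of a Brownian particle along one axis during a time t, η is the viscosity of the fluid, and a is the particle radius. Every quantity except N is measurable, so the observed displacements yield an independent determination of N, which Perrin then checks against his preferred vertical distribution value.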
In fact, barring a few exceptions, Perrin arrives at similar judgements with all the diverse sorts of methods he describes for generating values for N: since they generate values for N that cohere with Perrin’s preferred value of N = 68.2 · 10^22, Perrin concludes that the theoretical underpinnings to these diverse methods are thereby confirmed, and with that confirmation we have developed to some extent the theory of what a molecule is like. These other, theoretical approaches to calculating N are, as I shall say, ‘calibrated’ using Perrin’s preferred value for N. Of course, there is a more familiar use of the term ‘calibration’, referring to cases where a measuring instrument is ‘calibrated’ by reference to a standardized measuring instrument. The analogy is intentional: in the case we are considering, what are calibrated are theoretical assumptions, not measuring instruments, and here the calibration occurs by reference to an established theoretical assumption, in this case the vindicated kinetic model that leads to the preferred value for Avogadro’s number, N. It follows that all other methods of producing N should match this value for N, and when they do, their own theoretical assumptions are thereby confirmed. What this means is that the ‘robustness reasoning’ interpretation of Perrin’s method, where N is validated simply by having been generated using a variety of methods, is mistaken. But this leaves us with a difficult interpretive matter, for again as Perrin ([1916], pp. 206–7) explicitly states, Our wonder is aroused at the very remarkable agreement found between values derived from the consideration of such widely different phenomena. 
Seeing that not only is the same magnitude obtained by each method when the conditions under which it is applied are varied as much as possible, but that the numbers thus established also agree among themselves, without discrepancy, for all the methods employed, the real existence of the molecule is given a probability bordering on certainty. If the ‘reality’ of molecules is not being ‘given a probability bordering on certainty’ because of a robustness argument, what then is the source of this realist attitude? We examine this question in the next section. 4 Perrin’s Realism A key problem with the common cause argument, cited above, is that our understanding of the common cause may be quite minimal, perhaps nothing more than that it is ‘something’, that is, the common cause of a number of diverse phenomena. So if we insist on being realist about this common cause, the question arises, ‘What are we being realist about?’, and here we need some empirically confirmed elaborations on the nature of this cause. What Perrin notices is that armed with an experimental determination of N that is relatively authoritative—that is, the value of N arrived at through his vertical distribution experiments—and given that various, divergent theoretical perspectives lead to similar values for N, it follows that these differing theoretical perspectives are verified (or ‘calibrated’, as I put it). This ‘remarkable agreement’ then allows a researcher like Perrin to fill out the theory of what a molecule is like. As described in the confirmed, theoretical derivations of N that Perrin recounts, molecules are shown to have a variety of properties, and asserting the real existence of molecules—now more fully understood theoretical entities—becomes a meaningful option. To draw a simplistic comparison, suppose I tell you that I have something in my hand. Should you believe that it is a real thing, and that I am not lying? 
In one case—the robustness case—I introduce the testimony of a number of people from diverse vantage-points who separately and independently confirm that there is something in my hand. Should you now be a realist about this thing? The problem is that you have very little knowledge about what this thing is, and so have not much idea what to be realist about. Now suppose the independent testimonies of these individuals provide more. For example, one person says the object is circular and flat, another says it’s made of metal, another says it has the engraving ‘25 cents’ on it, and so on. The case for realism thus becomes stronger because now we have details about what this thing is. That is, we know what we are asked to be realist about, in all likelihood, a quarter. Whereas before a realist view of the object was practically vacuous, it is now substantive and informative. But of course whether this view of the object truly is substantive and informative depends on whether the independent reporters are reliable producers of information. Left to themselves to produce reports without any epistemic standards to meet, the reporters could leave us reconstructing what this object is on the basis of misinformation, and this would be true despite the surprising convergence of their reports, that is, despite their unanimous agreement that there is an object in my hand. So, how could one go about verifying the reliability of their reports? There are a variety of strategies here, but the main one is to identify one of these independent reporters as having an authoritative method of gaining information about the object in my hand. Suppose, for example, one of the reporters uses a metal detector and thus her report that the object is metal is strongly validated. With that validation you can test, or ‘calibrate’, the other reports. Do they all generate the information that the object is metal? 
Say, for the individual who says that the object is circular and flat, does she also generate the result that it is metal? If so, we have reason to trust her further judgement about the object, that it is circular and flat, and in this way the nature of the object is theoretically developed. Similarly for the person who says that the object is engraved with ‘25 cents’—her judgement is confirmed if, from her perspective, she’s also led to conclude that the object is metallic. With these accessorized forms of confirmation, where reporters have their claims about an object confirmed if they are able to correctly report on a standardized result, we arrive at a picture of this object replete with a set of confirmed properties. And, of course, this is what we know real objects to be—they’re not just abstract entities that ‘exist’, but come with a full set of properties. As such, a realist attitude about these objects becomes more legitimate the more ideas we have about their properties, and the more these ideas are confirmed. Let us now carry the analogy back to Perrin and molecules. Here the reliable reporter is the vertical distribution experiment, the most reliable version of this experiment that generates an estimate of N = 68.2 · 10^22. The other reporters are the other methods of generating N. These other methods fill out the theory underlying the nature of molecules, and as their estimates of N are confirmed by agreeing with the authoritative vertical distribution experiment, it follows that the ways in which they have developed the theory of molecules are confirmed as well. In this way, Perrin fills out our theory of molecules so that it can stand as a substantive object for realist interpretation. This approach to understanding Perrin’s realist perspective on molecules is ironically anticipated by the anti-realist Bas van Fraassen. He comments: […] theory needs to be informative enough to make testing possible at all. 
Thus the extent to which we can have evidence that bears out a theory is a function of […] how logically strong and informative a theory is, sufficiently informative to design experiments that can test the different parts of the theory relative to assumptions that the theory applies to the experimental set-up. ([2009], p. 23) In being sufficiently informative in this way, we have, van Fraassen thinks, the resources to make ‘the kinetic theory into […] a truly empirical theory’ (p. 23). Of course, as is well known, van Fraassen cautions against adopting a realist attitude towards unobservables. One reason is that a realist attitude is epistemically risky given the inherent limitations on ampliative inferences (see van Fraassen [2007], p. 343). Additionally, van Fraassen maintains that a (scientific) realist attitude is largely unnecessary since, for him, empirical adequacy can do all the relevant epistemic work for scientists. Along these lines he cautions against reading Perrin as a realist about molecules, lest we attribute to Perrin an excessively optimistic assessment of his capabilities as an experimenter. Still van Fraassen sees merit in developing theories and in contemporaneously setting up empirical tests that assess these developments. A good theory extends the scope of the claims it considers to be empirically testable, and as the theory develops, so does the extent to which the theory is empirically testable. But these features of theoretical development can be problematic for a constructive empiricist. Consider, first of all, that scientists often develop their theories in a way that renders these developments untestable, such as when they make use of unobservables in their explanations. Sometimes these unobservables become empirically testable later on with advances in observational strategies. Pathogenic microorganisms, for example, were once postulated as disease carriers, and later became observed with advances in microscopy. 
But there was no guarantee that this would happen, and no such guarantee was needed for scientists to feel authorized in postulating these unobservables. Or it could happen that with improved observational technology, the postulated unobservables are shown not to exist or to exist otherwise than how they are conceived. Indeed, this may be expected if the scientific field is relatively new and there’s lots of room for innovative speculation. In either scenario, the prospective empirical adequacy of a developed theory is questionable, and scientists often suspect this to be the case—but that doesn’t stop them from making developments along these lines. To be sure, they aspire to empirical adequacy in the long run, and there is indication in van Fraassen that empirical adequacy is to be interpreted in just this way (see, for example, van Fraassen [2009], p. 7, Footnote 5). But the long run, in this sense, might be a very long run, and legitimate scientific theories might spend an extended period of time lacking substantive empirical support, and indeed may never gain much empirical support at all. If these meta-theoretical observations are true, there is a significant challenge facing the anti-realist in explaining why scientists engage in theoretical developments, since with these developments the goal of empirical adequacy is uncertain, and may even be unlikely where we have a novel science. It follows that the claim made by the constructive empiricist, that empirical adequacy is all scientists ultimately seek, does not capture the actual practice of science. On the other hand, the realist has a compelling picture for why scientists engage in empirical, scientific investigations that have a high probability of not issuing in empirically adequate theories. In short, they’re looking for the truth, and to achieve this they need to be speculative and imaginative with their theorizing, with the result that they often formulate theories that are empirically disconfirmed.
If empirical adequacy were their goal, scientists would do well to be highly conservative with their theorizing. That they are liberal with their theorizing is a sign that empirical adequacy, at least in the short term, is not necessarily a priority for them. At this stage the van Fraassen-style anti-realist could respond that the transitory deviation of a theory from the standard of empirical adequacy is not incompatible, in the long run, with empirical adequacy: scientists might accept theories that are not successful empirically, temporarily, in the hopes that significant empirical grounding will come about later on. The anti-realist might then suggest that the sort of ‘filling out’ I have described as essential to a realist interpretation of an unobservable could be embraced by an anti-realist. The more ways of empirically grounding a theory the better, the anti-realist might point out, so in what way does the requirement of ‘filling out’ a theory improve the case for realism? The realist needs to go beyond van Fraassen at this point, and show that this ‘filling out’ involves something more than empirical grounding. But how? We have said that to be a realist about an unobservable one needs to conceptualize this unobservable as complemented with a set of confirmed properties. Again, we need to do this because that is what real objects are like: they come with a full set of properties, and a realist attitude about these objects becomes more legitimate the more confirmed ideas we have about their multiple properties. However, from an anti-realist perspective, it’s not clear that one would need to provide an elaborate, confirmed description of an unobservable since there’s no commitment on the anti-realist’s behalf to the real existence of an unobservable. 
For the anti-realist, a theory receives various forms of empirical confirmation, just as it does for a realist, but the reason for this elaboration of the theory needs to find its motivation elsewhere than in the ‘filling out’ of the properties of something—an unobservable—the scientist is committed to believing exists. The main candidate for this motivation is ostensibly to provide ‘better confirmation’ for the theory, and typically for the notion of better confirmation to make sense from an epistemological perspective it must amount to increasing the probability of the theory. Accordingly, the anti-realist scientist strives to show that a theory is well confirmed by providing for the theory multiple forms of evidential support, thereby increasing the likelihood that the theory is true. Still, he stops short of saying, ‘the theory is now shown to be true’, since with van Fraassen we have an enduring scepticism about the power of inferences that go beyond observables. By contrast, the ‘traditional realist’—such as Psillos, Achinstein, or Chalmers—is more optimistic about the power of induction: for such realists, there comes a time when the evidence is strong enough, and a confirmed theory likely enough to be true, that a realist attitude about unobservables is legitimate. This is the usual dynamic of the debate between traditional realists and anti-realists, and as I hope to show in the next section, it is a debate that has no natural resolution. We need, I argue, a different approach to understanding realism along the lines I have suggested above, where realism is warranted given that an unobservable has a confirmed, filled-out set of properties. This, I submit, is an approach we find in Perrin’s reasoning on behalf of the reality of atoms and molecules.
5 Psillos, Achinstein, Chalmers, and van Fraassen on Understanding Perrin We have suggested as an alternative interpretation of Perrin’s work that Perrin’s aim in citing multiple empirical determinations of the value of N was to simply extend the theory of the atom, not to provide additional confirmation for the standardized value of N generated by the preferred version of the vertical distribution experiment. In other words, if an alternate, theoretically informed empirical methodology generates the standardized value for N, it follows that the theory underlying this methodology is confirmed and the theory of the atom is thereby filled out. Along these lines we have Perrin’s ([1916], p. 10) ‘surprising agreement’ that he considers so valuable in establishing a realist interpretation of molecules, where molecules are understood as combinations of atoms indivisible by ‘the various chemical and physical transformations that we are able to bring about’. It is the agreement we find between a diverse set of theoretical approaches to determining N that vary in degrees of reliability, and a theoretical approach that standardizes the value of N and ‘calibrates’ these other approaches. In this way Perrin gives us the best of both worlds. He furthers the theory of the atom by connecting this theory to a more filled-out and interconnected picture of the microphysical world, thus providing atoms and molecules with an elaborated theoretical description. Moreover, these cognate descriptions are confirmed since they generate the ‘correct’ value for N. By this means the theory of atoms and molecules, in Perrin’s hands, becomes an improved candidate for a realist interpretation in which atoms and molecules are shown to have diverse and extended properties with the attribution of each property receiving empirical confirmation. As he says, somewhat optimistically, ‘the real existence of the molecule is given a probability bordering on certainty’. 
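The calibration pattern just described can be rendered as a short computational sketch. This is purely illustrative: only the standardized value of N = 68.2 × 10²² comes from the discussion above; the other methods’ estimates, their names, and the agreement tolerance are hypothetical choices introduced here for the sake of the example.

```python
# A minimal sketch of 'calibration' reasoning: one authoritative method
# (the vertical distribution experiment) fixes a standardized value of
# Avogadro's number N, and other methods earn further trust only if their
# estimates of N agree with it within a tolerance. All figures except the
# standardized value are hypothetical, chosen purely for illustration.

STANDARD_N = 68.2e22  # standardized value from the vertical distribution experiment

# Hypothetical estimates of N from other (less authoritative) methods
other_estimates = {
    "brownian_displacement": 64.0e22,
    "brownian_rotation": 65.0e22,
    "black_body_radiation": 61.0e22,
}

def calibrated(estimate, standard=STANDARD_N, tolerance=0.15):
    """A method passes calibration when its estimate of N lies within a
    relative tolerance of the standardized value."""
    return abs(estimate - standard) / standard <= tolerance

# A method that passes calibration has its wider theoretical claims
# confirmed, on the reasoning reconstructed above.
for method, n in sorted(other_estimates.items()):
    status = "confirmed" if calibrated(n) else "not confirmed"
    print(f"{method}: N = {n:.1e} -> {status}")
```

The point of the sketch is that agreement is judged against one privileged standard rather than by mutual convergence alone, which is what distinguishes calibration from robustness reasoning as characterized in this article.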
Compare now this story to the one provided by Psillos, Achinstein, and Chalmers. Both Psillos and Achinstein, as we have noted, redefine the atomic hypothesis to include a claim that N is approximately 60 × 10²², and as such the bulk of Perrin’s work in defending the atomic hypothesis is to show that 60 × 10²² is the empirically correct value for N. Here Psillos makes clear that the correctness of this value of N is ‘established by a series of experiments involving different methods and different domains’ ([2011b], p. 357; see also [2011a], p. 183 and [2014], p. 157)—he lists ‘the measurement of the coefficient of diffusion; the mobility of ions; the blue color of the sky (the diffraction of the sunlight by the atmospheric molecules); the charge of ions; radioactive bodies; and the infrared part of the spectrum of the black-body radiation’ ([2011b], p. 355; see also [2011a], p. 182 and [2014], p. 155). Analogously, Achinstein finds there to be an epistemically crucial convergence of values for N. ‘C’ for Achinstein ([2001], p. 262) refers to the empirical result that the vertical distribution experiments generate a value for N = 60 × 10²², a result buttressed by the fact that this value for N is stable ‘when values for n′ and n [in Equation (3)] are varied’—that is, N is the same when calculated at different vertical heights. According to Achinstein ([2001], p. 262), C is stronger, ‘potential’ evidence for the assertion that Avogadro’s number is 60 × 10²² to the greater degree to which C ‘contains a larger, more varied, or more precisely described [empirical] sample’. For his part, Chalmers ([2011], p. 723) regards Perrin as having shown that the ‘density distribution [of particles in an emulsion] and their mean displacement and rotation were in accordance with the assumption that the particles were in random motion and obeyed Newton’s laws’. What this means for Chalmers ([2011], p.
724), given the assumption that ‘a system of Brownian particles differs from a gas only in degree, but not in kind’—that is, the particles ‘differ from molecules only insofar as they have a much greater mass’—is that by examining emulsions one can ascertain various properties of molecules. For example, in measuring the kinetic energy of Brownian particles, ‘Perrin had in effect measured the mean kinetic energy of the molecules of a gas [which] enabled him to derive a value for Avogadro’s number’ ([2011], p. 724). It follows that the vertical distribution experiments along with experiments inspired by Einstein’s theoretical equations regarding the displacement and rotation of Brownian particles give Perrin ‘three independent ways of measuring N’ that surprisingly converge ([2011], p. 725), a convergence Chalmers regards as an essential part of Perrin’s argument for the reality of molecules. Here Chalmers notes his concurrence with the views of van Fraassen, for whom ‘concordance’ forms a key part of Perrin’s proof of atomic reality (‘concordance’ is the term for robustness van Fraassen adopts from Hermann Weyl; see van Fraassen [2009], p. 11). The difference between Chalmers and van Fraassen on this point is simply how strong this proof is. Always the sceptic, van Fraassen doesn’t think it is strong enough to go further than empirical adequacy. He calls Weyl’s concordance requirement an ‘empirical constraint’, nothing more (van Fraassen [2009], p. 21). More on the realist side, Chalmers ([2011], p. 728) thinks Perrin has ‘provided powerful arguments from coincidence for the claim that matter is made up of molecules in ceaseless motion’ and that Perrin showed ‘that the kinetic theory gets things right to the extent that there can be no serious doubt that there are molecules whose motions are the cause of Brownian motion and the pressure of gases’. 
So with Psillos, Achinstein, Chalmers, and van Fraassen we have differing interpretations of the sense in which Perrin is arguing robustly for the accuracy of an experimentally derived value for N, in so far as they differ from one another regarding what the relevant differing and converging experimental strategies are (van Fraassen’s view here is somewhat unclear, but his interpretation is arguably closer to Psillos’). Additionally, Psillos and Achinstein have rightfully emphasized the epistemic priority of Perrin’s vertical distribution experiments. Unfortunately they confuse the issue by amplifying the atomic hypothesis to include a hypothesis about the value of Avogadro’s number. Chalmers, on the other hand, misinterprets the significance of the vertical distribution experiments by omitting the crucial analogy to the rarefaction of air at higher altitudes, and so he fails to recognize Perrin’s ([1916]) qualitative argument for the reality of molecules qua Brownian particles. Finally, Psillos, Achinstein, and Chalmers all differ from van Fraassen on whether Perrin’s work supports a realism about molecules. On the realism issue, I have already noted that the debate appears to centre on the strength of the evidence. For example, we saw that this is what separates Chalmers from van Fraassen. It also separates Psillos and Achinstein: Psillos argues that Achinstein’s approach, despite being probabilistic like his own, gives too weak a reading of Perrin’s conclusions. ‘On Achinstein's reconstruction’, Psillos ([2011a], p. 186) comments: […] all Perrin achieved was to show that the atomic hypothesis is more likely than not. This is not a mean feat, of course. But it is hardly sufficient to sway the balance in favor of the atomic hypothesis in the way it actually did. 
By contrast, Psillos thinks Perrin has, in practical terms, provided a crucial experiment in support of the atomic hypothesis, leading to the conclusion that the atomic hypothesis is ‘very probable’ ([2011a], p. 186). Is the probability high enough to ground the claim that we know atoms exist? This is an epistemological question for which there is no clear consensus. Claims can be extremely probable but still not be said to be known, as with the lottery paradox where a lottery ticket can have a one in a million chance of winning but still one won’t know that it will lose (otherwise, why buy the ticket?). Moreover, it is too strict, in response, to insist on a conception of knowledge that requires certainty, for then obviously a knowledge of atoms will be lacking. Alternatively, although not known, is the atomic hypothesis nevertheless probable enough for us to simply take it for granted as an assumption in our physical reasoning? Actually, it need not be very probable at all for us to so choose it. It might simply be the case that any alternative hypothesis is much less probable than the atomic hypothesis, or that we are in a situation where we need to formulate some hypothesis or other just to get a research program off the ground in the hopes of subsequently generating evidence for or against this hypothesis. Finally, sub-microscopic reality might just be that much different from what Perrin, or even we, ever imagined—there might not be atoms at all. In other words, Psillos’ optimism about the strength of Perrin’s evidence seems overreaching. It follows—if the differences between Psillos, Achinstein, Chalmers, and van Fraassen are any guide—that the debate between realists and anti-realists, or between realists of differing kinds, on whether evidence is strong enough to support a claim about an unobservable, such as an atom or molecule, is a debate that can have no clear resolution.
Intuitions irremediably vary on the issue of how probable a hypothesis has to be for one to properly be a realist about the unobservable entities referred to in this hypothesis. What this means is that we need to reformulate the debate between realists and anti-realists as turning elsewhere than on the matter of whether evidence for a scientific hypothesis is ‘strong enough’, probabilistically speaking—if, that is, we ever want to resolve the debate. 6 Conclusion The reformulation of the realist–anti-realist debate I am suggesting, as motivated by Perrin’s work, focuses on the extent to which a scientist can theoretically develop the picture of an unobservable in a way that ensures the empirical confirmation of these theoretical developments. Clearly, the more theoretical developments there are and the more these developments are empirically confirmed, the better. But how many developments and how much confirmation is needed to sanction a realist attitude towards the entities so described? There is no natural limit here on how much theoretical development is appropriate in adequately comprehending the nature of an unobservable, or an observable for that matter. Objects and their properties may be infinitely rich for exploration. Moreover, there is no natural limit on how much confirmation is needed as regards these theoretical developments. Sometimes theories are not nearly confirmed enough, even though scientists feel comfortable in being realists about the entities referred to by these theories. So when is a realist attitude appropriate? For some subject matter, suppose an unobservable is postulated that explains a set of observable phenomena, and that this unobservable achieves a level of confirmed theoretical development that outstrips any competing, postulated unobservable. It follows that a fallibilist, realist interpretation is warranted about this entity. This is the sort of situation Perrin finds himself in.
The atomic hypothesis explains the rarefaction of a gas at increasing vertical heights, similar to how one explains the decreasing density of a uniform emulsion at higher elevations. A key part of this explanation is the demonstration by analogy that molecules exhibit Brownian motion, just as emulsive granules do. From here Perrin derives a standardized value for Avogadro’s number that assists in extending the theory of the atom, extensions that are confirmed by calibration with this standardized value. Comparatively, one finds the alternative to the atomic hypothesis—the thesis of the fundamental continuity of nature—to have practically no confirmed theoretical extensions. Indeed, given the continuity hypothesis, the rarefaction of gases at higher altitudes and the generated value for Avogadro’s number are inexplicable. It follows, for Perrin and his contemporaries, that a realism about atoms and molecules is an immediate and unavoidable conclusion, as there is no competing theory that can match the atomic hypothesis’ wealth of empirically confirmed, descriptive detail. I submit that a similar strategy awaits any scientist who aspires to provide a realist interpretation for a favoured theoretical picture containing unobservables.

Acknowledgements

An early version of this article was presented at the 2016 annual meeting of the Canadian Society for the History and Philosophy of Science held at the University of Calgary, at which time I received valuable feedback. I also thank two anonymous reviewers for their extensive and detailed comments.

References

Achinstein, P. [2001]: The Book of Evidence, New York: Oxford University Press.
Chalmers, A. [2011]: ‘Drawing Philosophical Lessons from Perrin’s Experiments on Brownian Motion: A Response to van Fraassen’, British Journal for the Philosophy of Science, 62, pp. 711–32.
Gouy, L.-G. [1888]: ‘Note sur le mouvement brownien’, Journal de Physique Théorique et Appliquée, 7, pp. 561–64.
Perrin, J. [1910]: Brownian Movement and Molecular Reality, London: Taylor and Francis.
Perrin, J. [1916]: Atoms, New York: D. Van Nostrand.
Perrin, J. [1926]: ‘Discontinuous Structure of Matter’, in Nobel Lectures: Physics 1922–1941, New York: Elsevier, pp. 138–64.
Poincaré, H. [1913]: Mathematics and Science: Last Essays, New York: Dover.
Psillos, S. [2011a]: ‘Making Contact with Molecules: On Perrin and Achinstein’, in G. J. Morgan (ed.), Philosophy of Science Matters: The Philosophy of Peter Achinstein, Oxford: Oxford University Press, pp. 177–90.
Psillos, S. [2011b]: ‘Moving Molecules above the Scientific Horizon: On Perrin’s Case for Realism’, Journal for General Philosophy of Science, 42, pp. 339–63.
Psillos, S. [2014]: ‘The View from Within and the View from Above: Looking at van Fraassen’s Perrin’, in W. J. Gonzalez (ed.), Bas van Fraassen’s Approach to Representation and Models in Science, Dordrecht: Springer, pp. 143–66.
Salmon, W. [1984]: Scientific Explanation and the Causal Structure of the World, Princeton, NJ: Princeton University Press.
van Fraassen, B. [2007]: ‘From a View of Science to a New Empiricism’, in B. Monton (ed.), Images of Empiricism: Essays on Science and Stances, Oxford: Oxford University Press, pp. 337–83.
van Fraassen, B. [2009]: ‘The Perils of Perrin, in the Hands of Philosophers’, Philosophical Studies, 143, pp. 5–24.

© The Author(s) 2017. Published by Oxford University Press on behalf of the British Society for the Philosophy of Science.

The British Journal for the Philosophy of Science, Oxford University Press. Published: Jul 4, 2018.
