The British Journal for the Philosophy of Science, Volume Advance Article – Jul 10, 2017

31 pages


- Publisher
- Oxford University Press
- Copyright
- © The Author 2017. Published by Oxford University Press on behalf of British Society for the Philosophy of Science. All rights reserved. For Permissions, please email: journals.permissions@oup.com
- ISSN
- 0007-0882
- eISSN
- 1464-3537
- DOI
- 10.1093/bjps/axx005

Abstract

Toy models are highly idealized and extremely simple models. Although they are omnipresent across scientific disciplines, toy models are a surprisingly under-appreciated subject in the philosophy of science. The main philosophical puzzle regarding toy models concerns what the epistemic goal of toy modelling is. One promising proposal for answering this question is the claim that the epistemic goal of toy models is to provide individual scientists with understanding. The aim of this article is to precisely articulate and to defend this claim. In particular, we will distinguish between autonomous and embedded toy models, and then argue that important examples of autonomous toy models are sometimes best interpreted to provide how-possibly understanding, while embedded toy models yield how-actually understanding, if certain conditions are satisfied.

1 Introduction
2 Embedded and Autonomous Toy Models
2.1 Embedded toy models
2.2 Autonomous toy models
2.3 Qualification
3 A Theory of Understanding for Toy Models
3.1 Preliminaries and requirements
3.2 The refined simple view
4 Two Kinds of Understanding with Toy Models
4.1 Embedded toy models and how-actually understanding
4.2 Against a how-actually interpretation of all autonomous toy models
4.3 The how-possibly interpretation of some autonomous toy models
5 Conclusion

1 Introduction

Across the natural and social sciences, researchers construct very simple and highly idealized models, which the experts in a particular field of inquiry can cognitively grasp with ease. Following common terminology from the sciences, we call such models ‘toy models’—a term that is not meant to have belittling or derogatory connotations. Paradigmatic examples of toy models include the Ising model in physics, the Lotka–Volterra model in population ecology, and the Schelling model in the social sciences (Hartmann [1999]; Sugden [2000]; Weisberg [2013]).
A useful characterization of toy models appeals to three essential features: First, models of this type are strongly idealized in that they often include both Aristotelian and Galilean idealizations (see Section 2.1 for details regarding this familiar distinction). Second, such models are extremely simple in that they represent a small number of causal factors (or, more generally, of explanatory factors) responsible for the target phenomenon. Third, these models refer to a target phenomenon (as opposed to, for instance, models of data; see Frigg and Hartmann [2012]).

To be clear from the start, we do not claim that there is a sharp boundary between toy models and other models. Instead of any sharp distinction, there seems to be a continuum of models with respect to their degree of simplicity and, independently, their degree of idealization. If one compares toy models with more complex models representing a large number of causal factors responsible for the target phenomenon (such as complex models in climate science), then toy models are located at the ‘simple end’ of the spectrum. If one contrasts toy models with less idealized models (that is, models involving fewer idealized and more approximately true assumptions), then toy models are located at the ‘strongly idealized end’ of a continuous spectrum. Our characterization of toy models as simple and highly idealized does, of course, permit the existence of simple and less idealized, complex and highly idealized, and complex and less idealized models. In sum, the concept of a toy model is a vague concept. However, vagueness does not preclude that there are clear core examples of what is in the extension of a concept. We will discuss some core examples of toy models in this article.

Idealizations and models have been major topics in the recent philosophy of science (Morgan and Morrison [1999]; Bailer-Jones [2009]; Frigg and Hartmann [2012]).
It is, however, a curious fact that philosophers of science have not devoted sufficient attention to toy models, despite their apparently central role in many sciences (notable exceptions include Hartmann [1999]; Sugden [2000]; Strevens [2008]; Bailer-Jones [2009]; Grüne-Yanoff [2009], [2013]; Weisberg [2013]). Toy models are deeply puzzling because their strongly idealized and simple nature raises hard questions: To what end do scientists construct toy models? Why should one have any confidence in the claim that strongly idealized and simple models can be used for modelling any real social and natural phenomena? Or, even more provocatively, why should one believe that toy models are anything more than ‘mathematized science fiction’, giving us no more clues about real world phenomena than non-mathematized fairy tales? As a pessimistic response, one could be tempted to think that toy models are not very useful, because they cannot represent real (or actual) phenomena. To motivate this sort of scepticism, suppose that explanation and prediction are two central goals of modelling in science. We know as a matter of fact that toy models—as idealized models—are literally false of their intended target systems; or, put in semantic terms, toy models clearly do not accurately map onto their targets (for instance, a toy model is not isomorphic to its target, supposing that isomorphism is required for representation). Being false is a feature that, at least prima facie and according to standard accounts of explanation, undermines the explanatory character of a model, as the explanans of an explanation is required to be (approximately) true.1 Similar worries emerge with respect to the predictive use of toy models: the majority of toy models are not suited for precise quantitative predictions. Why should anyone trust the predictions generated by toy models if one knows that these predictions rest on false assumptions? 
Opposing a pessimistic attitude towards toy models, some philosophers have recently claimed that the epistemic goal of toy modelling is to obtain understanding of natural and social phenomena (Hartmann [1998]; De Regt and Dieks [2005]). By virtue of their simplicity, toy models enable scientists to retain a sort of epistemic access to scientific models (and the mathematical procedures for solving these models). The simplicity of toy models distinguishes them from other kinds of models—for instance, from complex models that can only be solved via computer simulation, such as climate models.2 Is the claim that toy models—being idealized models—yield understanding really warranted? Are toy models appropriate for achieving understanding? And if so, what kind of understanding do scientists get from toy models? The main goal of this article is to answer these questions by (i) distinguishing two kinds of toy models, (ii) developing an account of understanding that is adequate for toy models, and (iii) determining whether different kinds of toy models are better suited to different kinds of understanding. According to an alternative approach in the recent literature, simple and idealized models are portrayed as ‘minimal models’ (Strevens [2008]; Grüne-Yanoff [2009], [2013]; Weisberg [2013]; Batterman and Rice [2014]). However, although we are convinced that some toy models may be interpreted as minimal models, we will argue that other toy models are not best understood in terms of minimal models. We shall return to this minimalist interpretation (Section 4.2). The plan of the article is as follows: In Section 2, we present the distinction between embedded and autonomous toy models. In Section 3, we elaborate an account of understanding (the refined simple view). Our account of understanding is inspired by Strevens’s ([2013]) ‘simple view’ of understanding, and Bailer-Jones’s ([1997]) naturalistic account of the ‘subjective component’ of understanding (epistemic access). 
The refined simple view explicates two kinds of understanding: how-actually understanding and how-possibly understanding. In Section 4, we argue, first, that embedded toy models yield how-actually understanding if certain conditions hold (Section 4.1); second, that standard accounts of idealizations—such as McMullin’s strategy, minimalism, and dispositionalism—do not support the claim that all autonomous toy models provide how-actually understanding (Section 4.2); and, third, that there are some autonomous toy models that are best interpreted as yielding how-possibly understanding (Section 4.3).3

2 Embedded and Autonomous Toy Models

To analyse whether and how toy models yield scientific understanding, it is useful to introduce a distinction between two kinds of toy models: embedded and autonomous toy models. We will first illustrate the concept of an embedded toy model (in Section 2.1), and then turn to an illustration of autonomous toy models (in Section 2.2). We conclude this section with a brief qualification regarding the heterogeneity of autonomous toy models (in Section 2.3).

2.1 Embedded toy models

Above we introduced toy models as models of phenomena. However, some toy models are also models in a different sense of the term ‘model’: they are also models of a theory. This observation will permit us to introduce a central distinction between embedded and autonomous toy models. Some toy models are embedded into an empirically well-confirmed theory. More precisely, embedded toy models are models of an empirically well-confirmed framework theory, while autonomous toy models—to which we will turn below—are not. This characterization of an embedded toy model relies on a familiar distinction from the philosophical literature on models and model theory in mathematics, namely, the distinction between a (framework) theory and models of the (framework) theory (Hartmann [1998]; Bailer-Jones [2009]; Frigg and Hartmann [2012], Section 1.3).
In model theory, a theory is a set of uninterpreted sentences. When model theory is used to express a framework theory, this set of sentences includes, most prominently, the framework theory’s abstract calculus and its general laws. Models of a framework theory are taken to be structures in which the sentences of the framework theory (such as the theory’s abstract calculus and its general laws) are true (Frigg and Hartmann [2012], Section 1.3). Or, to state the same point in more precise model-theoretic terms, models of a framework theory consist of a domain of objects and an interpretation of the theory’s abstract calculus and the laws over the domain (Chang and Keisler [1990], Section 1.3; Bell and Slomson [1974], Section 3.2).4 Examples of empirically confirmed framework theories include Newtonian mechanics and quantum mechanics. Well-known examples of models of framework theories are the model of a pendulum and models of planetary motion (being examples of models of classical mechanics), and the standard model of particle physics (being a model of quantum mechanics). Models of a theory are constructed within a framework theory. Constructing such a model in order to represent a target phenomenon often requires moving beyond the resources of the framework theory, in making a number of specific assumptions about the target (Morgan and Morrison [1999]; Frigg and Hartmann [2012]). With these conceptual tools in mind, we are now in a position to characterize embedded toy models more precisely. Embedded toy models are models of a well-confirmed framework theory, and they are simple and idealized models of phenomena. Before we turn to a more detailed example of an embedded toy model, let us add a note on terminology. We use ‘idealization’ as an umbrella term for (at least) two general kinds of idealizations that are usually distinguished in the literature, namely, Aristotelian and Galilean idealizations (Frigg and Hartmann [2012], Section 1.1). 
A model involving an Aristotelian idealization strips away some feature(s) that the target system of the model in fact possesses (for instance, a model of a pendulum strips away the colour of the pendulum), or the model rests on the assumption that some causal factor actually influencing the target system is absent or ‘neutralized’ (Mäki [2011], p. 51). Aristotelian idealizations are also discussed in terms of ‘abstraction’ (Cartwright [1989]) and ‘isolation’ (Mäki [1992], [2011]; Hüttemann [2004], [2014]). By contrast, Galilean idealizations deliberately distort the target system, for instance, by making the assumption that agents are perfectly rational, that the number of animals in a population or the number of molecules in a gas goes to infinity, and so on (McMullin [1985]; Cartwright [1989]; Weisberg [2013]). Aristotelian and Galilean idealizations can co-occur in one and the same model. The examples of toy models we discuss tend to involve both kinds of idealization.5 In this article, our goal is not to highlight the differences between Aristotelian and Galilean idealizations. For this reason, we will merely speak of ‘idealizations’ as an umbrella term throughout the remainder of the article. Analogously, we will use the term ‘de-idealization’ to denote cases of both (Aristotelian) ‘de-isolation’ and (Galilean) ‘de-idealization’, to use Mäki’s ([2011], p. 48) terminology. What matters for our purposes is that modelling assumptions involving Aristotelian and Galilean idealizations assert something that is literally false of the target system (Cartwright [1983], p. 45). Or, put differently, models involving Aristotelian or Galilean idealizations prima facie do not accurately represent their targets (for instance, because they are not isomorphic to their targets, if that is what the theory of representation requires). Let us now turn to a concrete example of an embedded toy model. Consider Newtonian mechanics as a framework theory. 
This theory lays out a small number of general laws (Newton’s laws of motion) and it provides the scientist with guidelines for the construction of concrete models for specific systems or phenomena (Giere [1988]; Bailer-Jones [2009]). To study, for example, the motion of a single planet around the Sun in our solar system, a number of model assumptions have to be made. In the simplest case, one might want to study a system consisting of only the Sun and the planet under consideration. Let us call this simple model the ‘Sun-plus-one-planet model’. This model is a model of Newtonian mechanics—the Sun-plus-one-planet model is a structure in which the sentences of Newtonian mechanics (such as the theory’s abstract calculus and its general laws) are true. Moreover, if one analyses the Sun-plus-one-planet model as the model of a phenomenon (for instance, of the Earth orbiting around the Sun), this model involves idealizations, because the modeller disregards the other planets, the moon(s), and other stellar objects that are known to exist. Moreover, the model refers only to gravitational interactions between the Sun and the planet. From Newton’s laws of motion and the model assumptions, one can then derive the orbit of the planet. In a simple calculation, which can be found in any textbook, one finds that the orbit of the planet is (approximately) an ellipse with the Sun in one of the two foci.
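The textbook derivation can also be checked numerically. The sketch below is a hypothetical illustration (the units, step size, and initial conditions are our choices, with GM = 4π² in units of astronomical units and years): it integrates Newton's law of gravitation for a single planet and confirms that the orbit stays bounded between two fixed turning points (perihelion and aphelion), as it must for an ellipse.

```python
# Minimal numerical sketch of the Sun-plus-one-planet model (illustrative
# parameters, not taken from the article). Units: AU, years, so that the
# Sun's gravitational parameter is GM = 4*pi^2 (Kepler's third law).
import math

GM = 4 * math.pi ** 2  # gravitational parameter of the Sun in AU^3 / yr^2

def integrate_orbit(x, y, vx, vy, dt=0.0005, steps=2000):
    """Velocity-Verlet integration of the planet's motion (Sun held fixed)."""
    def accel(x, y):
        r3 = (x * x + y * y) ** 1.5
        return -GM * x / r3, -GM * y / r3

    ax, ay = accel(x, y)
    trajectory = []
    for _ in range(steps):
        x += vx * dt + 0.5 * ax * dt * dt
        y += vy * dt + 0.5 * ay * dt * dt
        ax_new, ay_new = accel(x, y)
        vx += 0.5 * (ax + ax_new) * dt
        vy += 0.5 * (ay + ay_new) * dt
        ax, ay = ax_new, ay_new
        trajectory.append((x, y))
    return trajectory

# Start at 1 AU with slightly less than circular velocity -> a bound ellipse,
# with the starting point at aphelion.
traj = integrate_orbit(1.0, 0.0, 0.0, 0.9 * math.sqrt(GM))
radii = [math.hypot(px, py) for px, py in traj]
# The distance to the Sun oscillates between perihelion and aphelion.
print(min(radii), max(radii))
```

With these initial conditions the planet repeatedly swings between a perihelion near 0.68 AU and an aphelion near 1 AU, the numerical counterpart of the closed elliptical orbit derived analytically.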
The Sun-plus-one-planet model is an embedded toy model, first, because it is a model of a framework theory, Newtonian mechanics, that is well confirmed, at least in a particular domain of application; second, because it is simple, as it describes few causal or explanatory factors (that is, a physical system of only two interacting bodies); third, because it is idealized (as it deliberately disregards the gravitational influence of other planets and refers only to gravitational interactions); and, fourth, because it is a model of a phenomenon (that is, a target phenomenon such as the Earth orbiting around the Sun). There are numerous examples of embedded toy models in physics (Morgan and Morrison [1999]), including the Ising model of non-relativistic quantum mechanics, and the φ4-theory of quantum field theory (with quantum field theory as the embedding framework theory). Hüttemann ([2014]) provides another illustrative example: he treats an oscillator and a rotator as embedded toy models with quantum mechanics as the embedding theory. The ideal gas law can be understood as the deductive consequence of a toy model that is embedded in statistical mechanics (Strevens [2008], Chapter 8; Dizadji-Bahmani et al. [2010]).6 We will restrict our discussion of embedded toy models to examples from physics. However, one may also find embedded toy models in other disciplines. In the life sciences, Fisher’s famous sex ratio model seems to be a toy model embedded into Darwinian evolutionary theory (Sober [1984], pp. 51–8).

2.2 Autonomous toy models

Several well-known toy models are not embedded, that is, they are not models of a well-confirmed framework theory. We call toy models of this sort ‘autonomous’ toy models. Autonomous toy models share their simple and idealized character with their embedded cousins.
The Schelling model of segregation and the Lotka–Volterra model of predator–prey population growth are paradigmatic examples of autonomous toy models (Schelling [1971]; Sugden [2000]; Weisberg [2013]).

2.2.1 A familiar paradigm: Schelling’s model of segregation

A paradigmatic autonomous toy model is Thomas Schelling’s model of segregation. Schelling ([1971]) developed a famous toy model of the phenomenon of racial (and other kinds of) segregation (Sugden [2000]; Weisberg [2013]). Racial segregation is a general kind of phenomenon that is contingently instantiated in actual, or real-world, cities such as Chicago and Detroit. Schelling’s model works with three simple assumptions: first, that two sorts of agents (for instance, black and white agents) live in a very sparse environment (a two-dimensional grid); second, that the agents are randomly distributed on the grid to begin with; and, third, that the agents interact in accord with a simple behavioural rule (for instance, each agent moves to an empty spot on the grid if fewer than about 30% of his or her neighbours have his or her colour). If one starts with randomly distributed agents (Figure 1, on the left), then running this simple model by reiterating the behavioural rules leads to the emergence of segregation after a small number of steps (Figure 1, on the right). Schelling took the model to explain that racial segregation can occur even if the agents do not have strongly and explicitly racist attitudes (but merely conform to the 30% rule) and would actually prefer to live in non-segregated cities. The model also allows us to consider the consequences of varying the initial conditions and the rules.

Figure 1. The Schelling model of segregation. The left-hand side shows a random distribution of agents. The right-hand side depicts a pattern of segregation.
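The dynamics just described can be reproduced in a few lines of code. The following sketch is a hypothetical illustration with parameter choices of our own (a 20 × 20 toroidal grid, a 30% threshold, and dissatisfied agents relocating to a random empty cell anywhere on the grid, a common simplification of Schelling's rules): it checks that the average share of same-colour neighbours rises well above the random-mixing baseline.

```python
# A minimal Schelling-style simulation (hypothetical parameters; the article
# gives only the qualitative rules). An agent is unhappy, and moves, if fewer
# than ~30% of its neighbours share its colour.
import random

def neighbours(grid, size, r, c):
    """Colours of the occupied cells among the 8 neighbours (toroidal grid)."""
    cells = []
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                rr, cc = (r + dr) % size, (c + dc) % size
                if grid[rr][cc] is not None:
                    cells.append(grid[rr][cc])
    return cells

def unhappy(grid, size, r, c, threshold=0.3):
    ns = neighbours(grid, size, r, c)
    if not ns:
        return False
    return sum(1 for n in ns if n == grid[r][c]) / len(ns) < threshold

def step(grid, size):
    """One sweep: every unhappy agent moves to a random empty cell."""
    empties = [(r, c) for r in range(size) for c in range(size)
               if grid[r][c] is None]
    for r in range(size):
        for c in range(size):
            if grid[r][c] is not None and unhappy(grid, size, r, c):
                er, ec = random.choice(empties)
                grid[er][ec], grid[r][c] = grid[r][c], None
                empties.remove((er, ec))
                empties.append((r, c))

def mean_same_colour_share(grid, size):
    shares = []
    for r in range(size):
        for c in range(size):
            if grid[r][c] is not None:
                ns = neighbours(grid, size, r, c)
                if ns:
                    shares.append(
                        sum(1 for n in ns if n == grid[r][c]) / len(ns))
    return sum(shares) / len(shares)

random.seed(0)
size = 20
cells = ['B'] * 170 + ['W'] * 170 + [None] * 60  # randomly mixed start
random.shuffle(cells)
grid = [cells[i * size:(i + 1) * size] for i in range(size)]

before = mean_same_colour_share(grid, size)
for _ in range(30):
    step(grid, size)
after = mean_same_colour_share(grid, size)
print(round(before, 2), round(after, 2))  # segregation emerges: after > before
```

Even with the mild 30% threshold, the mean same-colour share climbs well above its initial value near 0.5, which is the qualitative result Schelling emphasized.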
The Schelling model has racial segregation as its target phenomenon. As stated above, racial segregation is a general kind of phenomenon that is contingently instantiated, for instance, in Chicago. If one takes the Schelling model to apply to particular instantiations of racial segregation (for instance, the racial segregation in Chicago in the 1960s), then the rules and other modelling assumptions are simplified and idealized to such an extent that they do not accurately represent, say, the preferences of the actual inhabitants of Chicago’s highly segregated neighbourhoods in the 1960s or in 2016. The model is simple in assuming a very sparse environment (a grid) and agents that are characterized by very few properties (most importantly, by their colour and a behavioural rule). The model is idealized as follows: First, each agent is assumed to know how many agents of each colour live in his or her environment. Second, every agent is assumed to be able to move whenever he or she is dissatisfied with the colour of his or her neighbours. Third, social and economic factors (such as education and income) are taken not to make a difference at all. Fourth, the agents are assumed to be randomly distributed at the outset, although the inhabitants of, say, Chicago and Detroit have never been randomly distributed, and so on (Schelling [1971], p. 149). Moreover, the Schelling model is not embedded into an empirically confirmed framework theory. In sum, we conclude that the Schelling model of segregation is an autonomous toy model.

2.2.2 A novel case from econophysics: the DY-model

Econophysics is a fairly young discipline that exploits mathematical models from statistical physics in order to understand economic phenomena. One important class of econophysical models, ‘collision models’, depicts economic exchanges among agents in analogy with collisions of molecules in a gas, as described by statistical mechanics.
One influential and successful collision model in econophysics is the Drǎgulescu and Yakovenko ([2000]) model (DY-model, for short). The model is taken to successfully capture important qualitative features of the distributions of individual monetary incomes found in many real economies—in particular, the ‘stylized fact’ that these income distributions are exponential distributions with a power law tail. These features of income distributions are the target phenomenon for the DY-model and, indeed, the DY-model successfully captures this phenomenon (for an in-depth discussion of the technical details and the influence of the DY-model in econophysics, see Thébault et al. [forthcoming]). The starting point for the DY-model is a population of ‘zero-intelligence’ agents. These agents have a single property: their money. The agents lack preferences, expectations, rationality, and other properties of ‘real’ agents, at least as portrayed by mainstream economics. At any given time t, agent i is associated with a single property, their monetary income, m_i(t) (which is always non-negative, so debt is not allowed). In the DY-model, one first assumes a large population (that is, N agents, with N ≫ 1), and then randomly selects two individuals at some time t. For a selected pair of agents, the initial pre-interaction state can be characterized completely in terms of two numbers: m_i(t), the income of agent i at time t; and m_j(t), the income of agent j at time t. The DY-model treats all interactions in the population in terms of binary exchanges of money, in the same way as the kinetic theory of gases allows one to treat the interaction between molecules in a gas in terms of binary exchanges of kinetic energy (Thébault et al. [forthcoming]). Another crucial assumption in the DY-model is that both the total number of agents, N, and the total amount of money, M = Σ_i m_i(0), are held fixed. That is, Σ_i m_i(t) = M for all t.
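The resulting exchange process is easy to simulate. The sketch below is a hypothetical illustration (the number of agents, initial incomes, and run length are arbitrary choices of ours); it implements the pooling-and-random-split exchange rule stated in Equations (1)–(3) below, and checks both that total money is conserved and that the equilibrium distribution has an exponential signature (roughly a fraction 1 − e⁻¹ ≈ 0.63 of agents end up below the mean income).

```python
# Minimal simulation sketch of the DY-model's binary money exchanges
# (illustrative parameters, not from the article).
import random

def simulate_dy(n_agents=500, m0=100.0, n_exchanges=200_000, seed=1):
    random.seed(seed)
    money = [m0] * n_agents  # every agent starts with the same income m_i(0)
    for _ in range(n_exchanges):
        i, j = random.sample(range(n_agents), 2)
        pool = money[i] + money[j]      # all the money of the pair is pooled
        eps = random.random()           # epsilon_ij, uniform on [0, 1)
        money[i] = eps * pool           # a random fraction goes to agent i
        money[j] = (1.0 - eps) * pool   # the rest goes to agent j
    return money

money = simulate_dy()
total = sum(money)
print(round(total, 6))  # conserved: 500 agents x 100.0 = 50000.0

# Exponential signature: roughly 63% of agents lie below the mean income.
below_mean = sum(1 for m in money if m < total / len(money)) / len(money)
print(round(below_mean, 2))
```

Note that the update `money[i] = eps * pool` is just Equation (1) rewritten: with Δm = ε_ij (m_i + m_j) − m_i, one has m_i(t+1) = m_i(t) + Δm = ε_ij (m_i + m_j).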
All of the assumptions regarding zero-intelligence agents, a restriction to binary interactions, and the conservation of the total number of agents and the total amount of money are clearly idealizations. Moreover, the DY-model is not a model of a well-confirmed framework theory, because statistical mechanics is not a well-confirmed theory for the domain of economic processes. In particular, the DY-model does not include general dynamical laws of a well-confirmed framework theory (for the relevant domain of application, that is, economic processes) that describe how the initial conditions of the agents determine the nature of the collisions between agents. The DY-model instead rests on a formal analogy with certain aspects of statistical mechanics.7 Rather than drawing on some well-confirmed framework theory, the DY-model pictures the agent–agent ‘collision’ with a simple exchange mechanism, such that all the money of the two agents is pooled, and then a random fraction is given to one agent, and the rest to the other (Figure 2). This simple exchange mechanism thus leads to a post-interaction state characterized by:

m_i(t+1) = m_i(t) + Δm,  (1)

m_j(t+1) = m_j(t) − Δm,  (2)

where

Δm = ε_ij [m_i(t) + m_j(t)] − m_i(t),  (3)

with ε_ij a random variable uniformly distributed between 0 and 1, varying with each discrete time-step, and labelled by the indices of the two agents in the interaction (that is, agents i and j).

Figure 2. The exchange dynamics of the DY-model, from (Thébault et al. [forthcoming]).

In sum, we classify the DY-model as an interesting and novel exemplar of an autonomous toy model because, first, the DY-model relies on a strikingly simple exchange mechanism.
Second, it is idealized in assuming ‘their money’ as the agents’ only characterizing property, that many-agent interactions do not occur, that the number of agents and the amount of money are conserved, and so on. Third, it has a target phenomenon (specific qualitative features of income distributions). Finally, the DY-model is not embedded into an empirically confirmed framework theory.

2.3 Qualification

Although the distinction between autonomous and embedded toy models is sharp, the class of autonomous toy models is quite heterogeneous. This heterogeneity exists because some autonomous models seem to bear no relevant relation to a well-confirmed framework theory (such as the Schelling model and the DY-model, or so we assume). However, other autonomous toy models are non-trivially associated with a highly confirmed framework theory, although the toy models in question are not models of that framework theory. Let us briefly present an example illustrating the latter case, the MIT bag model. For the MIT bag model, the relevantly associated (but not embedding) framework theory is quantum chromodynamics (QCD), which is extremely hard to solve in the low-energy domain (Hartmann [1999]). Here, QCD can only be solved using high-powered computer simulations. These computational models and computer simulations function like a black box and are, hence, not easy to grasp and understand (we will return to the notion of grasping below). The MIT bag model, on the other hand, identifies one crucial feature of QCD: it identifies quark confinement as QCD’s key feature and models a hadron as a hard sphere in which quarks move freely (Figure 3 depicts the ‘bag’, the freely moving quarks in it, and arrows representing the confining force). It is important for our purposes that the MIT bag model is not a model of QCD. The model is instead inspired by QCD and it is ultimately justified by a story that connects the model to QCD, as Hartmann ([1999], [2001]) argues.

Figure 3. The MIT bag model, from (Hartmann [1999], p. 336).

However, in this article, we will focus our discussion on embedded toy models and on autonomous toy models that are not connected to a theory via a story. We will leave it for future research to address the question of what kind of understanding toy models such as the MIT bag model provide.

3 A Theory of Understanding for Toy Models

Back to our main question: do toy models yield understanding? And, moreover, is our taxonomy of autonomous and embedded toy models helpful for answering this question? One promising and straightforward way to approach these questions is to ask whether there is any convincing philosophical account of understanding that applies to autonomous and embedded toy models. We take Henk De Regt and Dennis Dieks’s influential account of scientific understanding, as developed in (De Regt and Dieks [2005]; De Regt [2009]), as our starting point. Presenting their account (in Section 3.1) will primarily serve as a convenient way to bring out a number of common assumptions in several current accounts of understanding, and to motivate the account of understanding we will adopt, namely, the refined simple view (Section 3.2).

3.1 Preliminaries and requirements

According to De Regt and Dieks, ‘a phenomenon P can be understood if a theory T of P exists that is intelligible (and meets the usual logical, methodological, and empirical requirements)’ (De Regt and Dieks [2005], p. 150; De Regt [2009], p. 32). Although De Regt and Dieks restrict their definition to theories, their approach is intended to be more permissive, since they also refer to models as vehicles of understanding. One of their central examples is a toy model, the MIT bag model (De Regt and Dieks [2005], pp. 155–6).
Let us examine De Regt and Dieks’s ([2005]) and De Regt’s ([2009]) necessary conditions for understanding more closely: the explanation condition, the intelligibility condition, and the ‘usual logical, methodological, and empirical requirements’. First, De Regt explicitly ties understanding to explanation: understanding a phenomenon is characterized as ‘having an adequate explanation of the phenomenon’, and a ‘phenomenon P is understood scientifically if a theory of P exists that is intelligible (and the explanation of P by T meets accepted logical and empirical requirements)’ (De Regt [2009], p. 32, emphasis added). Hence, we take it that having an explanation of P is a necessary condition for understanding P, according to De Regt. We will refer to this condition as the ‘explanation condition’. Second, De Regt and Dieks ([2005], p. 151) define theory T as being intelligible for scientists ‘if they can recognise qualitatively characteristic consequences of T without performing exact calculations’. They argue, for example, that physicists consider the kinetic theory of gases to be intelligible if and only if the physicists are able to infer statements from the kinetic theory ‘without performing exact calculations’, such as the following statement: ‘if one adds heat to a gas in a container of constant volume, the average kinetic energy of the moving molecules—and thereby the temperature—will increase’ (De Regt and Dieks [2005], p. 152). The intuition motivating the intelligibility requirement is that ‘in contrast to an oracle […] we want to be able to grasp how the predictions are generated, and to develop a feeling for the consequences the theory has in concrete situations’ (De Regt and Dieks [2005], p. 143). Third, what do De Regt and Dieks have in mind when referring to ‘the usual logical, methodological, and empirical requirements’?
Although they do not make this point explicit, we presume that they refer to familiar virtues of scientific theories (or criteria for theory choice). Thomas Kuhn’s ([1977]) paper is the locus classicus for an assessment of the characteristics of a good scientific theory: ‘These five criteria—accuracy [corresponding to empirical adequacy], consistency, scope, simplicity, and fruitfulness—are all standard criteria for evaluating the adequacy of a theory’ (Kuhn [1977], p. 322). For this reason, we refer henceforth to these requirements as ‘Kuhnian criteria’ for good scientific theories. De Regt ([2009]) explicitly affirms this reading: the Kuhnian criteria determine the ‘goodness’ of theory T (or model M) on which the explanation of some target phenomenon P is based (De Regt [2009], p. 32). De Regt and Dieks’s main motivation for holding that intelligibility per se is not sufficient for scientific understanding is that, for instance, astrology should not count as providing scientific understanding: despite being intelligible, astrology fails to be a good theory (or model) if judged by Kuhnian criteria (De Regt and Dieks [2005], p. 150). De Regt and Dieks’s account of understanding is one of many possible starting points in the large literature on understanding. However, what matters here is that their account is, in several respects, a typical account of scientific understanding. We will focus on the explanation and intelligibility conditions, to bring out a number of common assumptions in several current accounts of understanding, including early approaches by Friedman ([1974]) and Kitcher ([1981]); more recent ones by Trout ([2002]) and Strevens ([2008], [2013]), among many others; and those of the contributors in (De Regt et al. [2009]). We will put aside the ‘usual logical, methodological, and empirical requirements’ (the Kuhnian criteria). The common assumptions of these accounts of understanding can be characterized as follows.
An individual scientist understands some phenomenon, P, only if three conditions are satisfied: Explanation condition: There is a scientific explanation of P. Philosophers concerned with understanding often differ with respect to their preferred theory of explanation. They use different theories of scientific explanation, such as the covering-law account, the unification account, pragmatic accounts, and various causal accounts of explanation.8 Veridicality condition: In asserting that the understanding of phenomenon P involves an explanation of P as a necessary condition, accounts of understanding inherit a feature of theories of explanation that we call the ‘veridicality condition’. It is a common view that explanatory assumptions (that is, the explanans of an explanation) are required to be true or, at least, approximately true. Consider the following examples: Proponents of the causal accounts typically endorse the requirement that the explanans truthfully represent the causes of the explanandum phenomenon. For instance, Woodward holds that the explanans has to ‘be true or approximately so’ (Woodward [2003], p. 203; Woodward and Hitchcock [2003], p. 6), and Strevens endorses the claim that the explanans is ‘a veridical causal model’ ([2008], p. 71) consisting of true causal laws and true statements about initial conditions. Moreover, Hempel’s ([1965], pp. 248, 338) covering law account demands that the explanans consist of true law statements and true statements about initial conditions. Unificationist accounts (Kitcher [1981], p. 519) and pragmatic accounts (Van Fraassen [1980], p. 143) also impose a veridicality constraint on the explanans.9 Epistemic accessibility condition: If an individual scientist understands phenomenon P, then he or she has epistemic access to an explanation of P. 
De Regt and Dieks’s concept of intelligibility is one possible strategy for making the epistemic accessibility condition precise—to have epistemic access to a (toy) model, for them, just is to be able to recognize qualitatively characteristic consequences of that model without performing exact calculations. The differences between many competing accounts of understanding consist in alternative ways of spelling out each of these three conditions. Although De Regt and Dieks’s view is surely a useful starting point, their account says little about how understanding based on idealized models is possible. However, this is precisely the question we are concerned with. If one adopts De Regt and Dieks’s account, toy models (and other idealized models) are problematic in at least one respect: generally, toy models do not satisfy the veridicality condition. For instance, the DY-model is idealized, as it assumes that economic agents are all identical, have no expectations, and have ‘zero intelligence’. This is certainly an assumption that we deem (and surely hope) to be literally false. Hence, it is at least questionable whether the veridicality condition is met (we will return to the interpretation of idealizations in Section 4). This observation raises a challenge for anyone seeking an account of understanding applicable to toy models: such an account of understanding should accommodate idealizations. Where does this leave us? One reaction to this challenge might be to revise De Regt and Dieks’s account. We adopt an alternative strategy: we will argue that the refined simple view is an account of understanding that provides a strategy for addressing the challenge stemming from idealized models. 3.2 The refined simple view Strevens's account of understanding offers a promising strategy for avoiding the challenge from idealized models. 
According to this ‘simple view’, scientific understanding is defined as follows: ‘An individual has scientific understanding of a phenomenon just in case they grasp a correct scientific explanation of that phenomenon’ (Strevens [2013], p. 510; see also Strevens [2008], p. 3).10 The notion of grasping is Strevens’s way of articulating the epistemic accessibility condition. Strevens does not provide an informative definition of the concept of grasping. Instead, he takes grasping to be a ‘fundamental relation between mind and world, in virtue of which the mind has whatever familiarity it does with the way the world is’ ([2013], p. 511). One may be concerned about the fact that grasping is taken as a primitive. We will return to this issue shortly. What is central for our present concerns is that the simple view offers a strategy for dealing with the challenge from idealizations. Strevens ([2013], p. 512) is fully aware of the fact that the simple view cannot be applied to idealized models straightforwardly. The reason is that toy models are taken to be ‘literally’ false, while the simple view—implying the veridicality condition—requires them to be true. As Strevens points out, most standard theories of explanation require that the explanans be true or ‘veridical’. For instance, his own kairetic account of explanations, in its simplest form, requires that the explanans consist of true causal laws and true statements about initial conditions (Strevens [2008], pp. 71–2).11 Strevens accounts for idealized models in the following way: although idealized statements are literal falsehoods (his terminology), these statements can be re-interpreted—by using an account of idealizations—as being (approximately) true, that is, veridical. 
His specific account of idealizations appeals to an ‘optimizing procedure’ (one vital component of his kairetic account) whose function is to filter out, or to ignore, explanatorily irrelevant information that need not be explicitly stated in the explanans (Strevens [2008], pp. 96–101). Making use of this idea of ignoring explanatorily irrelevant information, Strevens ([2008], Chapter 8, [2013], p. 512) develops a minimalist account of idealizations, arguing that the minimalist account implies a veridical reading of idealized assumptions: idealized assumptions truthfully (that is, veridically) report which factors are irrelevant for the explanation at hand. The general lesson from Strevens’s minimalist approach is that if understanding involves idealized assumptions, then these assumptions ought to be re-interpreted in a veridical way. Strevens's own approach does not, however, exhaust the options. Other interpretations of idealizations include McMullin’s strategy, dispositionalist interpretations, and how-possibly interpretations. We will discuss the minimalist view and its alternatives in Section 4. Inspired by Strevens ([2013]), we start with the following working definition of the concept of scientific understanding: The Simple View: An individual scientist, S, understands phenomenon P via model M if and only if model M explains P and S grasps M. Let us refine this working definition in four ways. First, naturalism about grasping: If qualified properly, we are willing to follow Strevens in assuming that grasping is a ‘fundamental relation between mind and world’. For present purposes, we are prepared to accept that the notion of grasping is philosophically primitive, but not scientifically primitive. What does it mean to take the notion of grasping as philosophically but not scientifically primitive? As Bailer-Jones ([1997], p. 
122) instructively points out: […] understanding has a subjective component, in addition to the publicly accessible component represented by explanation, in the sense that understanding takes place in an individual’s mind. Following Bailer-Jones, we adopt a naturalistic approach to this subjective component of understanding: what grasping turns out to be is a scientific matter, not a philosophical matter, and the subjective component of understanding can be studied by cognitive science. For example, cognitive science tells us that grasping toy models sometimes consists in being able to visualize the behaviour of the target system of a scientific toy model or to have a ‘mental model’ of the toy model and its solutions.12 Visualization and having a mental model are possible ways, according to cognitive science, in which the grasping of a toy model can be realized.13 Second, the contextual character of understanding: Understanding a phenomenon is contextual. Some model in, say, population ecology may generate understanding for an expert in this field, but not for an expert in statistical physics nor for a layperson. We agree with De Regt and Dieks ([2005]) in assuming that the individuals who gain scientific understanding are experts regarding the kind of phenomenon that is understood. We express this thought by saying that an individual scientist, S, understands phenomenon P via model M in context C, where context C is a scientific discipline and S has expert knowledge of that discipline. Third, different modalities of explanation and understanding: The kind of explanatory information scientists receive from toy models is not always the same, or so we will argue in Section 4. It is useful to distinguish two different modalities of explanation: how-actually explanations and how-possibly explanations (Hempel [1965]; Grüne-Yanoff [2009], [2013]). 
How-actually explanations possess an explanans satisfying the veridicality condition, that is, one consisting of actually true (or approximately true) statements. The explanans of a how-possibly explanation refers to merely possible explanatory factors (for instance, to possible causes and mechanisms bringing about the explanandum phenomenon, if the explanation is causal). The distinction between how-possibly and how-actually explanations can be accounted for by, and integrated into, many standard accounts of explanation—for instance, the covering-law account and various causal accounts of explanation, such as Woodward’s seminal theory of causal explanation and Strevens’s kairetic account. Fourth, neutrality with respect to different theories of explanation: In this article, we do not want to take a particular stance on which account of explanation is the most adequate one. Like Strevens ([2013], p. 510), we do not wish to tie an account of understanding to one specific theory of explanation. We rather assume here that toy models have explanatory power of either a how-actually or a how-possibly kind, and that there is a philosophical account of explanation that applies to toy models (for instance, by identifying merely possible or actual causes if a causal account is adequate, and so on).14 Taking these four refinements into account, we arrive at a refined version of the ‘simple view’ that enables us to distinguish two sorts of understanding. The refined simple view states: The Refined Simple View: Scientist S understands phenomenon P via model M in context C if and only if one of the following conditions holds: Scientist S has a how-actually understanding of phenomenon P via model M in context C if and only if model M provides a how-actually explanation of P and S grasps M. Scientist S has a how-possibly understanding of phenomenon P via model M in context C if and only if model M provides a how-possibly explanation of P and S grasps M. 
Being able to distinguish between how-actually understanding and how-possibly understanding will prove to be central in our discussion of understanding in the context of autonomous toy models (Section 4.3). In sum, the refined simple view is a promising candidate for analysing the kind of understanding that scientists acquire through toy models, because the refined simple view has room for understanding via idealized models. 4 Two Kinds of Understanding with Toy Models If one accepts the refined simple view, do scientists obtain understanding with toy models? In this section, we will provide an answer in the form of the following three claims: An embedded toy model, M, yields how-actually explanations if two conditions hold: (a) the (well-confirmed) embedding framework theory, T, permits an interpretation and justification of the idealizations of M, and (b) this interpretation and justification is compatible with the veridicality condition. If one grasps the how-actually explanation provided by an embedded toy model satisfying the conditions (a) and (b), then one has how-actually understanding (see Section 4.1). Some autonomous toy models do not provide a how-actually understanding because major interpretations of idealizations (McMullin’s approach, minimalism, and dispositionalism) do not support an interpretation and justification of the relevant idealizations of these toy models compatible with the veridicality condition. In other words, major interpretations of the relevant idealizations do not support the claim that all autonomous toy models provide how-actually understanding (see Section 4.2). There are central examples of autonomous models that are best interpreted as providing how-possibly explanations and how-possibly understanding. This sort of understanding is valuable because it has (what we call) important modal, heuristic, and pedagogical functions in scientific research and science education (see Section 4.3). 
4.1 Embedded toy models and how-actually understanding An embedded toy model, M, provides how-actually understanding if the following two (sufficient) conditions hold, or so we argue: first, the (well-confirmed) embedding framework theory, T, permits an interpretation and justification of the idealizations of M; second, this interpretation and justification is compatible with the veridicality condition. To see why, consider once more our example of the Sun-plus-one-planet toy model. Recall from Section 2.1 that the Sun-plus-one-planet model is an embedded toy model, because it is the model of an empirically well-confirmed framework theory (Newtonian mechanics)—at least within an appropriately restricted domain of application; it is simple (as it describes, for instance, a physical system of only two interacting bodies); it is idealized (as it deliberately disregards the influence of other planets and only takes into account the gravitational interaction between the two bodies); and it is a model of an actual target phenomenon (such as the Earth orbiting the Sun). The capacity of this model to provide how-actually understanding depends on two conditions: (1) In the case of this particular model, one possible way to interpret and justify idealizations is McMullin’s ([1985]) account of idealizations. Following McMullin’s strategy, we can consider the Sun-plus-one-planet model to be at least approximately true of the target system, namely, the real orbit of the Earth around the Sun. The idealizations of the model are justified pragmatically. That is, the purpose of idealizing is to turn the calculation of the orbit into a mathematically tractable problem. (2) McMullin’s strategy is compatible with the veridicality condition. We can take the Sun-plus-one-planet model to be approximately true of the target system. Moreover, the Sun-plus-one-planet model can ultimately be de-idealized. 
Newtonian mechanics provides the theoretical resources for constructing a de-idealized model that generates more accurate predictions about the target than the toyish Sun-plus-one-planet model. The framework theory is a scientist’s guide to including the other planets and the moon(s), and to eventually calculating the improved orbit of the planet under consideration. Hence, in this particular case, the framework theory functions as a guide to de-idealization; for a more detailed exposition and discussion of this approach, see (McMullin [1985]; also Weisberg [2013], p. 99). Thus, McMullin’s strategy provides a way to defend the veridicality condition by removing the idealizations. Regarding condition (2), we hasten to stress that McMullin’s strategy is, of course, not the only way to interpret and justify idealizations. (We also do not endorse the strong claim that it is possible to de-idealize each and every idealization.) In fact, other interpretative and justificatory strategies can be adopted to complement McMullin’s strategy. In particular, we take minimalist and dispositionalist accounts of idealizations to be promising complements, neither of which necessarily involves de-idealization. We will turn to a more detailed exposition of dispositionalism and minimalism in Section 4.2. We will argue that these accounts of idealizations do not warrant the claim that all autonomous toy models provide how-actually understanding. However, regarding embedded toy models, we adopt a different dialectical strategy: for the sake of brevity, we mainly rely on the works of others who have convincingly argued that dispositionalism and minimalism apply to embedded toy models, if McMullin’s strategy does not. 
For instance, we assume as uncontroversial Strevens’s ([2008]) minimalist approach to idealizations in statistical mechanics, Hüttemann’s ([2004], [2014]) argument in favour of dispositionalism regarding certain idealizations in quantum mechanics, and Thébault et al.’s ([forthcoming]) appeal to a combination of minimalist and dispositionalist strategies in the context of statistical mechanics. Ultimately, our argument does not depend on the strong assumption that all idealizations figuring in embedded toy models can be given a veridical interpretation. Our main concern is merely a conditional one: embedded toy models yield how-actually understanding if some strategy for interpreting and justifying idealizations is applicable and the result of applying the strategy is compatible with the veridicality condition. In sum, we take embedded toy models to provide how-actually explanations if the (well-confirmed) framework theory provides the means to interpret and justify the idealizations of the embedded toy model (for instance, by appealing to McMullin’s strategy, minimalism, or dispositionalism), and this interpretation and justification does not violate the veridicality condition. Although a discussion of further examples clearly exceeds the scope of this article, we are confident that the same treatment applies to other examples of embedded toy models (see Section 2.1), such as the Ising model, the φ4-theory, and perhaps Fisher’s sex ratio model. 4.2 Against a how-actually interpretation of all autonomous toy models One might hold the view that not only embedded but also autonomous toy models yield how-actually understanding and how-actually explanations. However, major accounts of idealizations (McMullin’s strategy, minimalism, and dispositionalism) do not support the claim that all autonomous toy models provide how-actually understanding. 
Prima facie, one possible way to argue for the claim that autonomous toy models yield how-actually understanding consists in relying on McMullin’s strategy. However, this strategy heavily depends on the existence of a more general and well-confirmed framework theory guiding the de-idealization process. But it is precisely this theory that does not exist in the case of autonomous models. For this reason, McMullin’s approach is a non-starter for someone who wishes to defend the claim that autonomous toy models provide how-actually understanding. (But see below for a qualification regarding different de-idealization strategies in the context of autonomous toy models.) In the current literature, there are two main alternatives to McMullin’s strategy: minimalism and dispositionalism. If applicable, both minimalism and dispositionalism entail that idealized (toy) models provide how-actually information about their target systems. We will first introduce the two accounts of idealizations, and then determine whether these accounts are applicable to all autonomous toy models. Minimalism: The minimalist view of idealized models is one promising strategy for supporting a how-actually interpretation of autonomous toy models. As introduced in Section 3.2, Strevens relies on the minimalist view to apply his simple view of understanding to idealized models (Strevens [2008], Chapter 8, [2013], p. 512).15 According to minimalism, idealized models truthfully represent two kinds of facts: (a) facts about a minimal set of explanatorily relevant factors including true causal laws and true statements about initial conditions (which are determined by Strevens’s optimizing procedure), and (b) the fact that some factors are not explanatorily relevant (Strevens [2008], pp. 315–29; Weisberg [2013], pp. 100–3). According to Strevens, idealized assumptions represent the latter kind of fact. 
If the minimalist interpretation applied to autonomous toy models, then such models would provide how-actually understanding about the minimal set of factors explaining the target phenomenon. Dispositionalism: According to dispositionalism, an idealized model truthfully represents the dispositional behaviour of a (physical, biological, or economic) system if other disturbing causes were absent (Cartwright [1989]; Hüttemann [2014]). That is, an idealized assumption describes a counterfactual situation in which a particular factor is taken to be absent (although it frequently occurs in actual situations) and the target system is isolated from the influence of that particular factor. If the dispositionalist interpretation is applied to autonomous toy models, then such models would provide how-actually understanding about the actual disposition of the target system.16 Now, let us check whether minimalism or dispositionalism can be applied to autonomous toy models. Let us consider minimalism first. Strevens ([2008], Section 8.3) argues that the idealizations figuring in statistical mechanics (embedding the ideal gas law) can be interpreted in accord with the minimalist interpretation, that is, as statements about what does not make a difference for the occurrence of the target phenomenon. For present purposes, we have no qualms with this particular example. However, the same does not seem to hold for the Schelling model and the DY-model—our examples of autonomous toy models. If minimalism were true of the Schelling model and the DY-model, then all of the idealized modelling assumptions would have to refer to explanatorily irrelevant factors. But it is far from clear that this is the case. 
Regarding the Schelling model, one cannot simply hold without further argument that the following modelling assumptions, among others, refer to explanatorily irrelevant factors: that every agent knows how many agents of each colour live in their environment, and that social and economic factors do not have an influence on segregation. In the case of the DY-model, it seems to be explanatorily relevant, contrary to minimalism, whether many-agent interactions (as opposed to binary interactions) occur, whether the quantity of money is indeed conserved, and whether agents exchange all (as opposed to some) of their money (Thébault et al. [forthcoming]). In sum, we think that minimalism does not straightforwardly apply to two paradigm instances of autonomous toy models. Thus, it cannot be used to defend the claim that all autonomous toy models provide how-actually understanding.17 (This result is, of course, not an objection to minimalism as an account of idealizations.) Now let us turn to dispositionalism. A dispositionalist asserts that, for instance, the DY-model describes how agents are disposed to behave in the absence of many-agent interactions and of rational expectations. Similarly, a dispositionalist takes the Schelling model to apply if the causal influence of certain economic factors (such as income) were absent and if there were no ignorance about the colour of other agents in the neighbourhood of each agent. Unlike the minimalist, the dispositionalist is not committed to the claim that, for instance, many-agent interactions or economic factors are explanatorily irrelevant. However, dispositionalists face another problem: they have to justify how the DY-model is applicable in ‘non-ideal’ situations, that is, those actually quite frequent situations where many-agent interactions occur in economic exchanges, agents have (more or less) rational expectations, economic factors make a difference to segregation, and so on. 
Meeting this challenge is difficult in the case of autonomous toy models because, unlike in the case of embedded toy models, there are no general dynamical laws (of a framework theory) that might help us to determine what will happen if these ‘disturbing factors’ are present and, thereby, guide our application of the model in a non-ideal situation. Hüttemann ([2014]) presents a dispositionalist response to this problem of non-ideal situations, invoking ‘laws of interaction’ and ‘laws of composition’. This response works well for many embedded toy models—indeed, Hüttemann’s main examples of laws of interaction and composition are part of quantum mechanics as a framework theory. However, autonomous toy models are typically not equipped with such laws. For this reason, Hüttemann’s defence of dispositionalism does not carry over to autonomous toy models, at least not in general. Thus, dispositionalism about idealizations cannot be used to defend the claim that all autonomous toy models provide how-actually understanding. In sum, there are some autonomous toy models for which neither minimalism nor dispositionalism can be exploited in order to support the view that these autonomous toy models yield how-actually understanding. Let us qualify this claim in two ways. First, we emphasize that we endorse an existential claim: there are some autonomous toy models that do not provide how-actually understanding, because neither McMullin’s strategy, nor minimalism, nor dispositionalism supports a how-actually interpretation of these models. We do not defend the stronger claim that no autonomous toy model yields how-actually understanding.18 Second, we do not claim that it is impossible to de-idealize autonomous toy models. In fact, we will briefly discuss an attempt to de-idealize the DY-model in Section 4.3. 
What matters for our purposes is that the de-idealization in the case of autonomous toy models is not guided by an embedding framework theory; de-idealization, in this context, is instead a matter of empirically readjusting a model. Autonomous toy models often serve the heuristic purpose of constructing more ‘realistic’, and often (but not necessarily) more complex, models of the target phenomenon. However, in cases where the construction of these models is possible, they tend to lose their ‘toy’—that is, idealized and simple—character. In particular, the gain in complexity has an interesting effect: it diminishes the capacity of these models to provide understanding, because it is mainly the simplicity of toy models that permits scientists to grasp them. Consider the following example illustrating this point: the sociologist Peter Hedström ([2005]) developed an empirically calibrated agent-based model to capture the phenomenon of unemployment in the Stockholm metropolitan area during the period 1993–97. This agent-based model is autonomous and it is intended to be a realistic (that is, containing relatively few idealizations) and complex model: the number of agents in this model is 87,924 and their states (such as age, marital status, previous unemployment experiences, immigration background, and so on) are supported by demographic data about twenty- to twenty-four-year-olds in the Stockholm metropolitan area in the time period at issue. According to the rules of this model, an agent changes his or her state (from unemployed to employed) depending, for instance, on how many of their neighbours are unemployed. Hedström’s model is autonomous, but it is not simple and not strongly idealized (if compared to the Schelling model). Its lack of simplicity makes it unlikely (if not impossible) for researchers to cognitively ‘grasp’ the model. Hence, Hedström’s model does not satisfy one necessary condition for understanding. 
One reaction to our line of argument might be to improve minimalism and dispositionalism, and to argue that these improved accounts of idealizations do in fact apply to all autonomous toy models. We have no proof that this strategy must be unsuccessful. However, we believe that the attempt to apply minimalism and dispositionalism to some autonomous toy models faces problems that are serious enough to motivate exploring an alternative approach. This approach rejects the idea that all autonomous toy models provide how-actually understanding. Autonomous toy models sometimes yield another kind of understanding: how-possibly understanding. 4.3 The how-possibly interpretation of some autonomous toy models Let us suppose that there is a fairly large class of autonomous toy models that cannot be interpreted as providing how-actually understanding—for the reasons given in Section 4.2. We hold that the Schelling model and the DY-model are members of this class. If some autonomous toy models fail to provide how-actually understanding, what kind of understanding do they provide, if any? Our proposal is to take those autonomous toy models to yield ‘how-possibly understanding’. Applying the refined simple view, scientist S has how-possibly understanding of phenomenon P by using an autonomous toy model M in context C if and only if M provides a how-possibly explanation of P and S grasps M. For instance, we take the Schelling model to explain how it is possible that racial segregation occurs; and we take the DY-model to explain how it is possible that income distributions with specific qualitative features emerge. Both models provide only a potential explanation of a general pattern (that is, segregation and a certain kind of income distribution), and this pattern happens to be actually instantiated (for instance, the pattern of segregation is actually instantiated in Detroit, and a certain kind of income distribution is contingently instantiated in the USA). 
Neither model (nor the evidence we have) tells us whether it has correctly identified the actually relevant explanatory factor(s).19 The question arises why scientists are interested in how-possibly understanding, as one appears to gain considerably less from how-possibly than from how-actually explanations. De Regt and Dieks, for example, are very quick to dismiss how-possibly understanding as ‘mere intelligibility’ (which is clearly intended as a derogatory term) and to take how-possibly understanding to be (necessarily) on a par with pseudo-science, such as astrology (see Section 3.1). In fact, how-possibly understanding plays a central and legitimate role in research and in science education. More precisely, we hold that there are at least three central epistemic functions of how-possibly understanding: the modal function, the heuristic function, and the pedagogical function. We will now describe each of these functions in more detail. Modal function: How-possibly explanations are valuable if the phenomenon to be understood is a modal phenomenon—that is, if scientists want to understand whether and why some phenomenon is possibly or necessarily the case (Grüne-Yanoff [2013], pp. 855–9; Weisberg [2013], pp. 118–19; Cuffaro [2015]). One of the most famous illustrations of the modal function of toy models is Schelling’s model of segregation. Schelling’s model is concerned with whether it is possible to understand the emergence of segregated neighbourhoods without assigning explicitly racist attitudes to agents. Schelling’s model shows that, in contrast to the view that segregation is necessarily a result of racism, it is possible for segregation to arise in a population of agents following the 30% rule (that is, even if the agents would actually prefer to live in non-segregated cities). If the goal is to explain a modal phenomenon, then how-possibly understanding (and explanation) is an appropriate tool for achieving this goal. 
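To make the mechanism behind the 30% rule concrete, the model can be simulated in a few lines of code. The following is a minimal sketch under our own assumptions—the grid representation, the Moore neighbourhood, and the sweep over agents are our illustrative choices, not part of Schelling’s original presentation:

```python
import random

def step(grid, n, threshold=0.3):
    """One sweep of a toy Schelling model on an n x n grid.

    `grid` maps (x, y) coordinates to 'R' or 'B'; empty cells are absent
    from the dict. An agent is unhappy if fewer than `threshold` of its
    occupied Moore neighbours share its colour; unhappy agents relocate
    to a randomly chosen empty cell.
    """
    empties = [(x, y) for x in range(n) for y in range(n) if (x, y) not in grid]
    for (x, y), colour in list(grid.items()):
        neigh = [grid[(x + dx, y + dy)]
                 for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 if (dx, dy) != (0, 0) and (x + dx, y + dy) in grid]
        if neigh and empties and sum(c == colour for c in neigh) / len(neigh) < threshold:
            dest = random.choice(empties)
            empties.remove(dest)
            empties.append((x, y))
            del grid[(x, y)]
            grid[dest] = colour
    return grid
```

Iterating such sweeps on a randomly mixed grid typically produces visibly segregated clusters, even though each agent tolerates up to 70% differently coloured neighbours—which is exactly the modal point of the model.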
Heuristic function: How-possibly understanding via autonomous toy models is not always an end in itself. How-possibly understanding often plays a heuristic role in the process of constructing less idealized (and often, but not necessarily, also more complex) models that latch onto the target system more accurately than the original toy model (Hartmann [1995]).20 For instance, the DY-model has inspired the construction of the CCM-model. The latter model includes a ‘de-idealization’ of the idealized assumption (in the DY-model) that the agents exchange all of their money when interacting. Unlike the DY-model, the CCM-model assigns a saving propensity to all agents—that is, the agents exchange only a fraction of their money when interacting, as in real economic exchanges. This small alteration comes with a considerable pay-off: the CCM-model captures the relevant data about income distributions more accurately than the original DY-model (Thébault et al. [forthcoming]). Hence, the DY-model—an autonomous toy model—plays a heuristic role in developing a more accurate model (the CCM-model). Pedagogical function: The how-possibly character of autonomous toy models is often also used for primarily illustrative purposes in science education (Hangleiter [2014]). These models enable students and researchers to quickly grasp the idea behind the solution to a problem, or the description of a phenomenon. Generally speaking, the pedagogical function of toy models is to enable students to learn how to calculate with and how to use a particular model (or theory). Once students have learned how to calculate by practicing with a toy model, the training is put to different uses in the case of embedded toy models and of autonomous toy models. 
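The contrast between the DY exchange rule and its CCM de-idealization can likewise be sketched in code. This is a minimal illustration under our own assumptions—the uniformly random split and the parameter name `lam` for the saving propensity are our choices of presentation, not the authors’ formulation:

```python
import random

def exchange(money, rule="DY", lam=0.5):
    """One random pairwise exchange of money; the total is conserved.

    DY rule: the two agents pool *all* of their money and split the pot
    at a uniformly random point ('zero intelligence' exchange).
    CCM rule: each agent first withholds the saving fraction `lam`; only
    the remainder is pooled and randomly split.
    """
    i, j = random.sample(range(len(money)), 2)  # two distinct agents
    share = random.random()
    if rule == "DY":
        pot = money[i] + money[j]
        money[i], money[j] = share * pot, (1 - share) * pot
    else:  # CCM
        pot = (1 - lam) * (money[i] + money[j])
        money[i], money[j] = (lam * money[i] + share * pot,
                              lam * money[j] + (1 - share) * pot)
    return money
```

Note that both rules conserve the total quantity of money; the only difference introduced by the de-idealization is that under the CCM rule each agent withholds the fraction `lam` from the exchange, as in real economic transactions.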
Regarding embedded toy models, science students learn how to calculate with the embedding framework theory (by practicing with toy models initially), in order to prepare the students to handle mathematically less idealized (and sometimes also more complex) models of the embedding framework theory later on. Regarding autonomous toy models, the goal of practicing with a toy model is different: the acquired ability to handle an autonomous toy model mathematically enables students to make use of the toy model in a modal or a heuristic function.

To sum up, we have argued that some central examples of autonomous toy models yield how-possibly understanding (as opposed to how-actually understanding). Moreover, we claimed that scientists value how-possibly understanding because it has a modal and a heuristic function in scientific research, and a pedagogical function in science education.

5 Conclusion

Initially, we characterized toy models as idealized and simple models of natural and social phenomena. Our main question in this article was whether the epistemic goal of constructing toy models is to obtain understanding of their target phenomena. To support the claim that toy models do indeed provide understanding, we argued in three steps: First, we introduced and illustrated a distinction between embedded and autonomous toy models. Second, we argued that the refined simple view is a suitable account of understanding in the context of toy models. One key feature of this account of understanding is that it allows for a distinction between how-actually and how-possibly understanding. Finally, we applied the refined simple view to our examples of embedded and autonomous toy models, with the following results:

(i) An embedded toy model yields how-actually understanding, if certain conditions regarding the veridical interpretation and justification of the model’s idealizations are satisfied.
(ii) McMullin’s strategy, minimalism, and dispositionalism do not support the claim that all autonomous toy models provide how-actually understanding.

(iii) Some autonomous toy models are best interpreted as providing how-possibly understanding.

Therefore, the claim that toy models yield understanding can be vindicated if one allows for two modalities of understanding, namely, how-actually understanding and how-possibly understanding.

Acknowledgements

We would like to thank Seamus Bradley, Adam Caulton, Henk de Regt, Rainer Hegselmann, Maria Kronfeldner, Sebastian Lutz, Margaret Morrison, John Norton, Michael Strevens, Karim Thébault, David Weberman, and the participants of the workshop ‘Just Playing: Toy Models in the Sciences’ (held in Munich, 2015), as well as audiences in Budapest, Cardiff, Munich, Paris, Santiago de Chile, Seattle, and Tübingen for their supportive and constructive comments. We wish to stress that the two anonymous referees for this journal did a wonderful job; we are grateful for their careful and well-informed critical comments that truly helped to improve the article. We would also like to acknowledge the financial support of the Alexander von Humboldt foundation, the Münchner Universitätsgesellschaft, and the Studienstiftung des deutschen Volkes.

Footnotes

1 But there are exceptions, such as (Cartwright [1983], Chapter 8).

2 Hartmann ([1999]) contrasts toy models with complex models that are solved with computer simulations. He considers the latter to be ‘black boxes’ for the individual scientist. Similarly, Humphreys ([2004]) highlights a contrast between simple models and the epistemic opacity of computer simulations.

3 One also finds numerous material toy models in chemistry and in the life sciences, such as croquet ball models of molecules, Watson and Crick’s metal model of DNA, and simple model organisms (Meinel [2004]). In this article, we restrict the focus to toy models qua mathematical models of phenomena.
A comparative analysis of mathematical and material toy models is, unfortunately, beyond the scope of this article.

4 Although we rely on model theory to explicate the notion of a ‘model of a theory’, our notion of embedding does not coincide with the model-theoretic notion of embedding, as, for instance, Bell and Slomson ([1974], p. 73) define it.

5 Models involving both Aristotelian and Galilean idealizations are sometimes called ‘caricature models’ in the literature; see (Frigg and Hartmann [2012], Section 1.1) for further references.

6 Thermodynamical models of phase transitions and of certain universal aspects of these phase transitions are controversial cases. Whether these models are embedded models depends on whether one believes that the thermodynamical description can be reduced to statistical mechanics. This is a controversial issue we cannot address in this article; see (Batterman [2002]; Butterfield [2011]; Norton [2012]).

7 For a detailed discussion of this point, see (Thébault et al. [forthcoming]).

8 However, not everyone accepts that explanation is a necessary condition for understanding; see, for instance, (Lipton [2009]; Gijsbers [2013]).

9 The veridicality condition is logically independent of the Kuhnian criteria; the two perform different roles in the philosophy of explanation (and understanding). On the one hand, according to many standard accounts of explanation, the veridicality condition is a necessary condition for distinguishing explanatory from non-explanatory assumptions. On the other hand, the Kuhnian criteria allow us to draw a different distinction: they enable us to discriminate how good different bodies of explanatory information are. In this article, we will restrict our attention to the former issue. Moreover, note that not everyone accepts the veridicality condition as a requirement for a theory of explanation (Cartwright [1983], Chapter 8).

10 Strevens ([2013], p. 512) uses the notion of ‘correctness’ to refer to the veridicality condition.

11 Certain kinds of statistical explanations require true statements about a probability distribution over initial conditions (Strevens [2008], Chapters 9 and 10).

12 See (Giere [1988]; Bailer-Jones [1997], Chapter 5; Hartmann [1999]) regarding the philosophical reflection of mental models in science. Bailer-Jones ([1997], Chapter 5) provides numerous references to cognitive science research on mental models.

13 Our naturalist stance towards grasping need not necessarily be at odds with De Regt and Dieks’s notion of intelligibility. Sometimes, but not necessarily always, grasping may very well consist in being able to draw ‘qualitative consequences’ from a model, as De Regt and Dieks claim. If that is what cognitive science confirms, we have no trouble accepting it, from a naturalist point of view.

14 We are sympathetic to broadly counterfactual accounts of explanation, such as Woodward’s ([2003]), Saatsi and Pexton’s ([2013]), and Reutlinger’s ([2016]).

15 See (Batterman [2002]; Pincock [2012]; Batterman and Rice [2014]) for other assessments of minimalism.

16 Although Mäki ([1992], [2011]) agrees with Cartwright and Hüttemann that many idealizations are ‘isolations’, his approach significantly diverges from dispositionalism. Mäki advocates a ‘functional decomposition approach’ to scientific models, which comprises not only his view of idealizations, but also a pragmatically constrained theory of representation and a sophisticated theory of truth. In this article, we focus solely on the dispositionalists. A discussion of Mäki’s approach—which might well be an alternative to both minimalism and dispositionalism—will be a task for future work.

17 As a referee pointed out, one promising strategy for defending a minimalist approach to the Schelling model might consist in exploiting the robustness of the model (Muldoon et al. [2012]).
See (Reutlinger and Andersen [2016]) for a critical discussion of a particular kind of robustness approach in the context of explanations.

18 See (Strevens [2003], Chapters 4 and 5) for potential candidates of such models from the life and social sciences.

19 See (Forber [2010]; Cuffaro [2015]; Fumagalli [2016]) for a sophisticated discussion of various readings of how-possibly explanations.

20 Fumagalli ([2016], Section 4.3) defends the claim that (autonomous) toy models can play a heuristic role for constructing how-actually models only if modellers include veridical ‘additional information or presuppositions’ concerning the target system.

References

Bailer-Jones D. [1997]: ‘Scientific Models: A Cognitive Approach with an Application in Astrophysics’, Ph.D. Thesis, University of Cambridge.
Bailer-Jones D. [2009]: Scientific Models in Philosophy of Science, Pittsburgh: University of Pittsburgh Press.
Batterman R. [2002]: The Devil in the Details, New York: Oxford University Press.
Batterman R., Rice C. [2014]: ‘Minimal Model Explanations’, Philosophy of Science, 81, pp. 349–76.
Bell J. L., Slomson A. B. [1974]: Models and Ultraproducts: An Introduction, New York: Dover Publications.
Butterfield J. [2011]: ‘Less Is Different: Emergence and Reduction Reconciled’, Foundations of Physics, 41, pp. 1065–135.
Cartwright N. [1983]: How the Laws of Physics Lie, Oxford: Oxford University Press.
Cartwright N. [1989]: Nature’s Capacities and Their Measurement, Oxford: Clarendon Press.
Chang C. C., Keisler H. J. [1990]: Model Theory, Amsterdam: North Holland.
Cuffaro M. [2015]: ‘How-Possibly Explanations in (Quantum) Computer Science’, Philosophy of Science, 82, pp. 737–48.
De Regt H. [2009]: ‘Understanding and Scientific Explanation’, in De Regt H., Leonelli S., Eigner K.
(eds), Scientific Understanding: Philosophical Perspectives, Pittsburgh: University of Pittsburgh Press, pp. 21–42.
De Regt H., Dieks D. [2005]: ‘A Contextual Approach to Scientific Understanding’, Synthese, 144, pp. 137–70.
De Regt H., Leonelli S., Eigner K. (eds) [2009]: Scientific Understanding: Philosophical Perspectives, Pittsburgh: University of Pittsburgh Press.
Dizadji-Bahmani F., Frigg R., Hartmann S. [2010]: ‘Who’s Afraid of Nagelian Reduction?’, Erkenntnis, 73, pp. 393–412.
Drăgulescu A., Yakovenko V. [2000]: ‘Statistical Mechanics of Money’, The European Physical Journal B, 17, pp. 723–9.
Forber P. [2010]: ‘Confirmation and Explaining How Possible’, Studies in History and Philosophy of Biological and Biomedical Sciences, 41, pp. 32–40.
Friedman M. [1974]: ‘Explanation and Scientific Understanding’, Journal of Philosophy, 71, pp. 5–19.
Frigg R., Hartmann S. [2012]: ‘Models in Science’, in Zalta E. N. (ed.), The Stanford Encyclopedia of Philosophy, available at <plato.stanford.edu/archives/fall2012/entries/models-science/>.
Fumagalli R. [2016]: ‘Why We Cannot Learn from Minimal Models’, Erkenntnis, 81, pp. 433–55.
Giere R. [1988]: Explaining Science: A Cognitive Approach, Chicago: University of Chicago Press.
Gijsbers V. [2013]: ‘Understanding, Explanation, and Unification’, Studies in History and Philosophy of Science Part A, 44, pp. 516–22.
Grüne-Yanoff T. [2009]: ‘Learning from Minimal Economic Models’, Erkenntnis, 70, pp. 81–99.
Grüne-Yanoff T. [2013]: ‘Appraising Non-representational Models’, Philosophy of Science, 80, pp. 850–61.
Hangleiter D. [2014]: ‘When Scientists Play: How Toy Models in Science Help Us Understand the World’, Bachelor Thesis, LMU Munich.
Hartmann S. [1995]: ‘Models as a Tool for Theory Construction: Some Strategies of Preliminary Physics’, in Herfel W., Krajewski W., Niiniluoto I., Wójcicki R. (eds), Theories and Models in Scientific Processes, Amsterdam: Rodopi, pp. 49–67.
Hartmann S. [1998]: ‘Idealization in Quantum Field Theory’, in Shanks N. (ed.), Idealization in Contemporary Physics, Amsterdam: Rodopi, pp. 99–122.
Hartmann S. [1999]: ‘Models and Stories in Hadron Physics’, in Morgan M., Morrison M. (eds), Models as Mediators, Cambridge: Cambridge University Press, pp. 326–46.
Hartmann S. [2001]: ‘Effective Field Theories, Reduction, and Scientific Explanation’, Studies in History and Philosophy of Modern Physics, 32, pp. 267–304.
Hedström P. [2005]: Dissecting the Social, Cambridge: Cambridge University Press.
Hempel C. [1965]: Aspects of Scientific Explanation and Other Essays in the Philosophy of Science, New York: Free Press.
Humphreys P. [2004]: Extending Ourselves, New York: Oxford University Press.
Hüttemann A. [2004]: What’s Wrong with Microphysicalism?, London: Routledge.
Hüttemann A. [2014]: ‘Ceteris Paribus Laws in Physics’, Erkenntnis, 79, pp. 1715–28.
Kitcher P. [1981]: ‘Explanatory Unification’, Philosophy of Science, 48, pp. 507–31.
Kuhn T. [1977]: ‘Objectivity, Value Judgment, and Theory Choice’, in his The Essential Tension, Chicago: University of Chicago Press, pp. 320–39.
Lipton P. [2009]: ‘Understanding without Explanation’, in De Regt H., Leonelli S., Eigner K. (eds), Scientific Understanding: Philosophical Perspectives, Pittsburgh: University of Pittsburgh Press, pp. 43–63.
Mäki U. [1992]: ‘On the Method of Isolation in Economics’, Poznan Studies in the Philosophy of the Sciences and the Humanities, 26, pp. 319–54.
Mäki U. [2011]: ‘Models and the Locus of their Truth’, Synthese, 180, pp. 47–63.
McMullin E. [1985]: ‘Galilean Idealization’, Studies in History and Philosophy of Science Part A, 16, pp. 247–73.
Meinel C. [2004]: ‘Molecules and Croquet Balls’, in de Chadarevian S., Hopwood N. (eds), Models: The Third Dimension of Science, Stanford: Stanford University Press, pp. 242–75.
Morgan M., Morrison M. (eds) [1999]: Models as Mediators, Cambridge: Cambridge University Press.
Muldoon R., Smith T., Weisberg M. [2012]: ‘Segregation that No One Seeks’, Philosophy of Science, 79, pp. 38–62.
Norton J. [2012]: ‘Approximation and Idealization: Why the Difference Matters’, Philosophy of Science, 79, pp. 207–32.
Pincock C. [2012]: Mathematics and Scientific Representation, New York: Oxford University Press.
Reutlinger A. [2016]: ‘Is There a Monist Theory of Causal and Non-causal Explanations? The Counterfactual Theory of Scientific Explanation’, Philosophy of Science, 83, pp. 733–45.
Reutlinger A., Andersen H. [2016]: ‘Abstract versus Causal Explanations?’, International Studies in the Philosophy of Science, 30, pp. 129–49.
Saatsi J., Pexton M. [2013]: ‘Reassessing Woodward’s Account of Explanation: Regularities, Counterfactuals, and Non-causal Explanations’, Philosophy of Science, 80, pp. 613–24.
Schelling T. [1971]: ‘Dynamic Models of Segregation’, Journal of Mathematical Sociology, 1, pp. 143–86.
Sober E. [1984]: The Nature of Selection, Chicago: The University of Chicago Press.
Strevens M. [2003]: Bigger than Chaos, Cambridge, MA: Harvard University Press.
Strevens M. [2008]: Depth, Cambridge, MA: Harvard University Press.
Strevens M. [2013]: ‘No Understanding without Explanation’, Studies in History and Philosophy of Science Part A, 44, pp. 510–15.
Sugden R. [2000]: ‘Credible Worlds: The Status of Theoretical Models in Economics’, Journal of Economic Methodology, 7, pp. 1–31.
Thébault K., Bradley S., Reutlinger A. [forthcoming]: ‘Modelling Inequality’, British Journal for the Philosophy of Science.
Trout J. [2002]: ‘Scientific Explanation and the Sense of Understanding’, Philosophy of Science, 69, pp. 212–33.
Van Fraassen B. [1980]: The Scientific Image, Oxford: Oxford University Press.
Weisberg M. [2013]: Simulation and Similarity, New York: Oxford University Press.
Woodward J. [2003]: Making Things Happen, New York: Oxford University Press.
