Cognitive Recycling

Abstract

Theories in cognitive science, and especially cognitive neuroscience, often claim that parts of cognitive systems are reused for different cognitive functions. Philosophical analysis of this concept, however, is rare. Here, I first provide a set of criteria for an analysis of reuse, and then I analyse reuse in terms of the functions of subsystems. I also discuss how cognitive systems execute cognitive functions, the relation between learning and reuse, and how to differentiate reuse from related concepts like multi-use, redundancy, and duplication of parts. Finally, I illustrate how my account captures the reuse of dynamical subsystems as unveiled by recent research in cognitive neurobiology. This recent research suggests two different evolutionary justifications for reuse claims.

1 Introduction
2 Criteria for Analyses of Reuse
3 Use and Reuse
4 The Reuse of Dynamical Systems in Cognitive Systems
5 Conclusion

1 Introduction

Theories in cognitive neuroscience often claim that parts of the brain are used over and over for different cognitive functions. This can be generalized as a principle of reuse for different cognitive subsystems (using ‘subsystems’ as a generic term, ranging over mechanisms, components, areas, and so on). Such reuse claims occur in many different accounts of cognition (Wolpert et al. [2003]; Dehaene [2005]; Gallese and Lakoff [2005]; Hurley [2005]; Gallese [2008]), though philosophical discussion of the concept of reuse is rare (Anderson [2007a], [2007b], [2007c], [2010], [2014]). Herein, I analyse reuse in terms of subsystems of cognitive systems being used to execute different cognitive functions.

Reuse appears in a range of theories of cognitive function, yet an analysis of reuse has not yet been offered. By providing an analysis of the concept, the claims of various theories can be assessed and clarified. Given the prevalence of the concept in contemporary theories of cognition and the breadth of examples of reuse, such an account is a necessary and timely addition to the discussion.

In addition to the importance of analysing reuse for assessing cognitive theories, reuse describes a certain sort of regularity in cognitive systems. In particular, subsystems regularly reoccur in cognitive systems for different cognitive functions. Analysing reuse provides some insight into that type of regularity.

Finally, an account of reuse provides evolutionary grounds for understanding and explaining some cognitive phenomena. Evolution is a tinkerer, reusing a subsystem for some function instead of inventing—or reinventing—one (Anderson [2010]). Describing the contribution of these subsystems to cognitive systems by describing their role in allowing possession of cognitive capacities can provide evidence for the origins of structures and thereby ground explanations of biological phenomena.

In this article, I first discuss criteria for a conceptual analysis of reuse. What are the constraints on a satisfactory conceptual analysis of reuse? Missing from previous analyses of reuse is an analysis of the concept of use. Use is a necessary precondition for reuse; after I present my analysis of use of a subsystem of a cognitive system, I present my analysis of the concept of reuse. I argue that the details of my account satisfy the criteria laid out for a satisfactory conceptual analysis of reuse and briefly illustrate how applying the analysis captures a number of examples of reuse drawn from research in cognitive neuroscience.
Finally, I apply this approach by formulating two specific reuse hypotheses for dynamical systems as revealed by recent research in cognitive neurobiology—the study of cognitive function from electrophysiological recordings of single neurons and neuronal populations—which reflect two distinct evolutionary justifications for reuse claims.

2 Criteria for Analyses of Reuse

What are the criteria for a satisfactory analysis of reuse? I outline four distinct criteria for a satisfactory analysis: an analysis should specify what is being reused; should analyse the use and reuse of an entity; should distinguish reuse from redundancy and duplication of parts; and should be empirically adequate. Later, I will construct an analysis of reuse whose details are general enough such that a cognitive theory can use the concept and satisfy these constraints.1

The first criterion for an analysis of reuse is to specify what, precisely, is being reused. Are the mechanisms of the cognitive system being reused? Circuits? Computations? Are the organizational features, such as the architecture or infrastructure, of the system being reused? Any particular theory of cognition may have multiple objects of reuse, claiming, for example, that both the physical mechanisms and the representations are reused. Different objects of reuse may have different reuse conditions, the properties of the object that determine what counts as reuse. Thus, different objects of reuse may require different analyses of the concept.

Reuse has heretofore been cast as the reuse of neural entities. For example, Anderson argues that ‘brain regions have fixed low-level functions (“workings”) that are put to many high-level “uses” ’ (Anderson [2010], p. 265). While the focus on neural reuse per se is commendable, capturing a concept used to describe how different neural areas are reused in cognition, a non-neurocentric analysis of reuse is desirable. Cognitive systems are often considered medium independent (Haugeland [1985]), capable of being made of different physical constituents. On some theories of cognition, brains, computers, swarms, aliens, and other complex systems may all be classified as cognitive systems. If reuse captures a cognitive phenomenon as opposed to a strictly neural one, then an analysis of reuse should remain neutral with regard to the constitutive substrate when specifying the entities being reused.

The second criterion for an analysis of reuse is to analyse what it is for something to be used and then subsequently reused. Since use is related to function, such an analysis of reuse will describe the relationship between use and function (Anderson [2010]). Reuse can be tackled after analysing use. Furthermore, reuse should be distinguished from multi-use. Some entity can be used multiple times in different ways for different cognitive functions. Multi-use denotes some particular cognitive entity having such multiple uses. In order to establish the reuse of some entity, instead of the weaker multi-use, the function of whatever is being reused needs to be specified, accompanied by a demonstration that this same entity operating in the same fashion is used to accomplish different goals (Jungé and Dennett [2010]). The second criterion for an analysis of reuse, then, is to analyse use in a way that accounts for the central role of functions in reuse and that differentiates reuse from multi-use.

Third, an analysis of reuse should keep reuse distinct from redundancy and duplicate entities.
Redundancy occurs when there are backup entities in a system. These backup entities are distinct from the primary entity, but they may be of the same type and perform the same function.2 However, we do not want to count these redundancies as being reused; intuitively, a redundant entity is not one that is currently being used, let alone reused. Entities are duplicated when two distinct entities are each capable of performing some particular function on their own. In this case, the function is overdetermined for the system. However, again intuitively, duplicate entities over-determining an outcome should be conceptually distinct from reuse. Keeping these concepts distinct is the third criterion for an analysis of reuse.

Finally, a reuse theory should be empirically adequate. Given the appearance of the concept in various theories of cognition and the putative reuse of many different cognitive entities, a philosophical analysis of reuse should strive to cover as wide a range of theories and phenomena as possible. This is not to say that these examples will always reflect true claims about use, multi-use, or reuse. An analysis of reuse may result in certain judgements that are counter-intuitive or counter-culture compared to the accepted instances of the concept. For example, some putative instances of reuse may be judged as instances of multi-use. But an analysis that results in fewer such reclassifications is, ceteris paribus, better than one that results in more. Likewise, a philosophical analysis should strive to find the concept of reuse that unifies the most instances of the unexplicated concept. Again ceteris paribus, a philosophical account that distinguishes a core concept of reuse, or fewer core concepts of reuse, is preferable to one that distinguishes multiple, or more, core concepts of reuse.

In sum, an analysis of reuse should satisfy four basic criteria: First, the analysis should specify what is being reused, while respecting the potential range of physical substrates for cognitive systems. Second, the analysis should appeal to functions and differentiate reuse from multi-use. Third, the analysis should keep the concept of reuse distinct from redundancy and from duplicate entities. Fourth, the analysis should be empirically adequate, accommodating the evidence for reuse as best as possible while also unifying the uses of the concept of reuse as best as possible. In the following, I will present an analysis of the concept of reuse suitable for a range of theories of cognition, starting with an analysis of the use of a subsystem.

3 Use and Reuse

Having stipulated a number of criteria for an adequate reuse theory, I will now develop an analysis of reuse general enough for a range of theories of cognition. I apply this analysis to some examples of reuse drawn from cognitive neuroscience, as well as later illustrating the analysis with examples of the reuse of dynamical systems.

The key to understanding reuse is to start with an analysis of the use of an entity. Here, I focus on subsystems of cognitive systems: collections of parts such as neurons, neurotransmitters, and so forth in the case of the brain, or layers of units, connections between units, and so forth in the case of neural networks. A subsystem is some subset of the parts and processes of a system. This broad definition is meant to capture the many different ways systems can be analysed, as any collection of parts and processes of a system will be a subsystem.
Of course, many subsystems are organized collections of such parts and processes, where the parts stand in particular relations to each other and are organized into processes so as to produce the products of those processes (cf. Machamer et al. [2000]); but on the broad use of the term herein, any collection of parts and processes constitutes a subsystem.

A subsystem is used when it performs some function for the cognitive system, where that function is in some way related to the cognitive functions of the system. Subsystem s performs some function, F, for cognitive system S, such that performing that function totally or in part allows S to perform some cognitive function, C. In order to analyse use, then, a brief digression on cognitive functions is required.3

Cognitive systems, such as some organisms, are faced with a range of behavioural challenges that reflect the particular exigencies they face in navigating and manipulating the world. Different organisms face different challenges, and the particular challenges faced by an organism reflect the context within which the organism is acting. Perception, decision-making, and other cognitive functions require the organism to keep track of objects in the environment, the various rewards and uncertainties associated with actions, and so forth. An animal in the forest will face the need to detect predators occluded by foliage, whereas an animal in the desert will face different predatory dangers. Humans have perhaps the most diverse environments to navigate. Different objects will require processing different types of properties, different rewarding contexts will place different evaluative demands on the organism, different perceptual contexts will require different attentional commitments, and so forth.

Mathematical models describe the processing used to overcome these challenges. These processing models specify the variables that the system encodes, the mathematical relationships between these variables, and the way that these encoded variables are transformed in order to accomplish the goal of the organism, namely, to respond to the behavioural challenge (Marr calls such an account a ‘theory’ of the relevant cognitive domain; Marr [1982]; Shagrir [2010]).

For example, consider the case of a subject making a perceptual decision when the sensory evidence is equivocal, such as choosing the direction of motion of a field of dots, some proportion of which are moving in the same direction. The subject must integrate evidence from the environment in order to make the decision. This process can be described in various mathematical ways, including using a formalism known as the drift diffusion model (DDM), which is provably optimal for this class of problems (Gold and Shadlen [2002]). The DDM starts with a prior log-odds ratio—the ratio of the prior probabilities of the dots moving left or right—and then computes the likelihood fraction for the observed motion evidence over some period of time, adding this weight of evidence to the priors to arrive at a posterior log-odds ratio. If a threshold for the posterior ratio has not yet been reached, the process iterates until a threshold is reached or some other stopping criterion is met—for example, the system runs out of evidence (say, if the dot display disappears). Once the process halts, a decision is made according to some decision rule, based on, for example, which threshold was reached, or which direction of motion had the greater amount of evidence.
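To fix ideas, here is a minimal simulation sketch of the DDM as just described (in Python; the drift, noise, and threshold values are arbitrary illustrative choices of mine, and the evidence increments stand in for momentary log-likelihood ratios).

```python
import random

def ddm_decision(prior_log_odds=0.0, threshold=1.0, drift=0.1, noise=0.3,
                 max_samples=1000):
    """Minimal drift diffusion sketch: accumulate noisy evidence increments
    (stand-ins for momentary log-likelihood ratios) on top of the prior
    log-odds ratio until a decision threshold is crossed."""
    log_odds = prior_log_odds                   # start from the prior
    for _ in range(max_samples):
        log_odds += random.gauss(drift, noise)  # add the weight of evidence
        if log_odds >= threshold:
            return 'right'                      # upper bound reached
        if log_odds <= -threshold:
            return 'left'                       # lower bound reached
    # Evidence ran out (say, the dot display disappeared): decide according
    # to which direction has the greater accumulated evidence.
    return 'right' if log_odds > 0 else 'left'

print(ddm_decision())  # 'right' more often than 'left', given positive drift
```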
Numerous other examples of processing models can be adduced. Importantly, these processing models must be interpreted. Simply providing a mathematical function with uninterpreted variables and parameters is insufficient to specify the processing model. In addition to the parameters, the variables, and their interrelations, a referential assignment for the variables and parameters in the model must be given.4 This process of referential assignment provides an interpretation of the processing model. Only once an interpretation is provided can a model be identified. Otherwise, a model would consist in an empty formalism, some equation that fails to connect with the environmental properties that must be tracked and the variables transformed for adaptive behaviour.

These processing models are distinct from, though executed by, the subsystems of cognitive systems. Processing models are executed by cognitive systems in virtue of the subsystems jointly executing the processing model, where a cognitive subsystem executes a part of a processing model only if there is a mapping between some subset of the elements of the model and the elements of the mathematical description of the subsystem.5 The subsystem is described mathematically such that there are distinct states for distinct inputs of the processing model, distinct states for distinct outputs of the processing model, and a counterfactual mapping such that the sequence of states are arranged to preserve the input–output mapping that characterizes the processing model. If the subsystem is equivalent to the processing model, then there is such a mapping. In other words, the subsystem must satisfy a mapping for it to play a role in executing the processing model.6

Importantly, however, satisfying such a mapping, though necessary, is not sufficient to establish that a subsystem executes a processing model. There are at least two types of equivalence: strong and weak.7 Weak equivalence is a mapping of the inputs and outputs of the processing model on to the description of the cognitive subsystem. Strong equivalence occurs when the input, output, and mathematical relations between the inputs and outputs are mapped on to the subsystem.8 In cases of strong equivalence, not only are there distinct states for the inputs and outputs, but in addition, the way that the subsystem transforms the input into the output can be described using the very same mathematical relations that occur in the processing model.

Assuming cognitive systems do execute processing models, strong equivalence is too strong a requirement. The mathematical functions describing the subsystems need not correspond to the functions that are present in the processing model. Consider the example of noisy perceptual decisions discussed earlier. Research into the neural mechanisms of such perceptual decision-making in the lateral intraparietal area (LIP)—a posterior region of the brain that helps control eye movements—reveals the presence of a dynamical pattern of neural activity involving integration of a neural signal to a boundary (Roitman and Shadlen [2002]; Gold and Shadlen [2007]). Integrative activity such as is seen in LIP during noisy perceptual decisions can be described using different mathematical functions, such as integration, exponentiation (a simple first-order differential equation), linear summation, or even using a prototypical ‘squashing’ function (Usher and McClelland [2001]; Wang [2002]; Mazurek et al. [2003]; Churchland [2012]; Eliasmith [2013]).
None of these functions are precisely the same mathematical function as the sequential probability ratios calculated in the DDM, though they all share certain properties, such as the shape of the function, that capture some aspect of the integration process described by the model. In each such description, the DDM maps on to the subsystem, even though the mathematical descriptions utilize functions distinct from the ones in the model. A natural rejoinder notes that this mapping does not deductively imply that weak equivalence is necessary for executing a cognitive model. However, the inference is not a deduction about the necessity of weak equivalence. The grounds for the inference are inductive, based on the description of the subsystem, the processing model mapping on to the subsystem, and the explanatory power of the weak equivalence relation. Thus, strong equivalence is not required for a subsystem to execute a processing model. If the processing model is executed, then the model’s input–output sequence maps on to the subsystem states—that is, the processing model is weakly equivalent to the subsystem.

Weak equivalence imposes a counterfactual constraint on cognitive function execution as well. Not only must it be the case that the processing model for the cognitive function is weakly equivalent to the subsystem, it must also be the case that if the inputs to the model were different, such that the processing model dictates a different set of values for the variables, then the subsystem would also be in a different set of states corresponding to those different values.9 Any number of different processing models will be weakly equivalent to the subsystems of cognitive systems; enforcing the counterfactual constraint restricts the range of alternative models weakly equivalent to the subsystems. In sum, a subsystem executes a model of a cognitive function only if there is a weak equivalence between the processing model and the subsystem executing the cognitive function.
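To make the weak equivalence requirement and its counterfactual constraint concrete, here is a minimal sketch (in Python; the dictionary encoding and all names are my own illustrative choices, not drawn from the cited literature). A processing model is reduced to its input–output pairs, a subsystem to a state-transition map, and the function ascription to an assignment of model inputs and outputs to subsystem states.

```python
def weakly_equivalent(model_io, state_of, subsystem_step):
    """Check weak equivalence of a processing model and a subsystem.

    model_io: dict mapping each model input to the model's output.
    state_of: dict assigning a subsystem state to each model input/output
              (the states picked out by the function ascription).
    subsystem_step: the subsystem's own dynamics, state -> next state."""
    inputs = list(model_io)
    outputs = set(model_io.values())
    # Distinct inputs must be assigned distinct states; likewise outputs.
    if len({state_of[i] for i in inputs}) != len(inputs):
        return False
    if len({state_of[o] for o in outputs}) != len(outputs):
        return False
    # Counterfactual constraint: for EVERY input the model could receive,
    # not just the actual one, the state assigned to it must evolve into
    # the state assigned to the model's output for that input.
    return all(subsystem_step(state_of[i]) == state_of[model_io[i]]
               for i in inputs)

# A toy 'doubling' model executed by a three-state subsystem.
model = {1: 2, 2: 4}
states = {1: 'low', 2: 'mid', 4: 'high'}
step = {'low': 'mid', 'mid': 'high'}.get
print(weakly_equivalent(model, states, step))  # True
```

Note that nothing in this check restricts the states to this one model; the same states could also be mapped by a different model, in line with the rejection of a uniqueness requirement discussed just below.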
What else is needed to guarantee model execution by a subsystem? Weak equivalence is a necessary but not sufficient condition. Deeper discussion of this critical question, important not just for cognitive neuroscience but also for computer science and for a number of philosophical puzzles such as rule-following, is outside the purview of this article. However, we can add one further constraint to the necessary conditions for model execution, meant to guarantee that whatever further requirements arise in a mature analysis of the execution relation, these requirements will not conflict with the analysis of reuse provided below. This constraint is that whatever states of the subsystem are picked out as being weakly equivalent to the model, those states are not restricted to being weakly equivalent only to that model. In other words, there must not be a uniqueness requirement according to which the subsystem’s states are permitted to be weakly equivalent to only one processing model.10

Having provided a sketch of what a cognitive function is and a necessary requirement on the systems that execute them, an analysis of reuse can now be formulated. Focusing on subsystems as the relevant entity, we can now analyse use as follows:

U: System s is used in cognitive system S if s is a subsystem of S and s performs function F in S such that s’s performing F in S executes some processing model, M.11

Critically, note that the variable binding M in U is existential, not universal. U states that a system is used in a cognitive system if, in addition to being a subsystem of the system, the subsystem’s function or functions (in part) execute some processing model. Earlier, a necessary condition for model execution was weak equivalence: a mapping between the inputs and outputs of the model and the states of the executing system. To say that s’s performing F in S helps to execute the model is to say that ascribing function F to s in S denotes certain states of s to which (parts of) the processing model is (are) weakly equivalent. Not every state of s is relevant to executing some processing model; ascribing function F picks out the relevant states. This analysis of use clearly ties s’s functions to processing model M executed in part by s’s function.

Using ‘use’ to pick out use in the sense of U, reuse is then:

R: A subsystem, s, of S is reused if s is used for some processing model, M, s is also used for some processing model, M′, function F for M = function F′ for M′, and processing model M ≠ processing model M′.

For reuse, function F that s performs for M must be the same function F that s performs for M′, while processing models M and M′ executed by s must not be identical. This flexibility makes the notion of reuse so appealing: the same type of thing performing the same function for some system is utilized to execute different processing models—that is, is used for distinct cognitive functions. In addition, the requirement for two distinct processing models explains the added constraint on any further analysis of model execution, namely, that such an analysis must not restrict execution to a single formal model.

To briefly illustrate reuse, consider the following two examples.12 The anterior cingulate cortex (ACC), a medial prefrontal cortical region, is implicated in numerous cognitive functions including conflict detection (MacDonald et al. [2000])—when subjects are presented with conflicting sensory cues for making a decision—as well as motivation (Devinsky et al. [1995]; Holroyd and Yeung [2012])—lesions to this area result in reduced motor activity and speech (Németh et al. [1988]) and patients exhibiting compulsions have hyperactivation in the region (Maia et al. [2008]). ACC is hypothesized to execute executive functions related to cognitive control (for example, Paus [2001]; Shenhav et al. [2013]), capturing the region’s role in many different cognitive functions. Thus, the ACC (s) executes executive control (F) for processing conflict (M) and for motivation (M′).

The superior colliculus (SC) is a structure in the midbrain that controls eye movements (Sparks [1986]; Lee et al. [1988]) as well as attention (Krauzlis et al. [2013]) and contains a topographic map of visual space. In both cases, the level of activity of an SC neuron reflects the relative contribution of that neuron’s preferred location in visual space to the determination of the target of the eye movement or the focus of attention.13 Thus, a given neuron in the SC (s) increases in firing rate (F) both to determine the end point of an eye movement (M) and to determine the focus of attention (M′).

The above analyses of use and reuse are explicitly functional, but what concept of function is this? This analysis of reuse requires that the system’s function be mathematically characterized. There are many extant approaches to function, such as causal role functions (Cummins [1975]) or selected effects functions (Neander [1991]).
Prima facie, this requirement can be satisfied by causal role, selected effects, or some other notion of function. The approach to reuse presented here is neutral between these different concepts. Whatever notion of function is invoked, however, the function must be susceptible to mathematical description in such a way that it picks out a set of input–output pairs on to which the processing model can be mapped. Executing a cognitive function requires a mapping between the states picked out by the subsystem’s systemic function and the processing model reflecting the system’s cognitive function.14

This approach to reuse faces a challenge from learning.15 Learning can be thought of as changes internal to a cognitive system that result in the execution of some processing model. Prior to learning, however, the processing model was not being executed. Trivially, then, every instance of learning involves the system executing a new processing model. Thus, cognitive subsystems before learning execute different models from those after learning, and so every instance of learning seems to involve an instance of reuse. But if every instance of learning involves reuse, then not only is reuse rampant in cognitive systems, reuse is seemingly trivialized.

While I happen to think that reuse is rampant in cognitive systems, and that there are good reasons (including learning) for thinking that reuse is rampant, I don’t think that learning is the only reason to think that reuse is rampant. Furthermore, I don’t think every instance of learning involves reuse, and so I don’t think reuse is trivial. Note that there are at least two distinct types of learning relevant for this discussion: Learning could involve a shift from disorganized behaviour (where the processing model, insofar as there is one, results in random behaviour) to organized behaviour. Call this ‘model acquisition learning’. Learning could also involve a shift from organized behaviour to organized behaviour (where the two processing models are not identical). Call this ‘model revision learning’.

The same systemic function is not reused in every instance of learning. In model acquisition learning, agents shift from disorganized to organized behaviour. The resulting organized behaviour could result from the execution of a processing model used in a different domain for the new domain. In executing the model in the new domain, the same systemic functions used in previous executions may be used in the new execution of the model. But the functions used in executing the new model may be distinct from whatever functions were used prior to learning to generate the disorganized behaviour. Thus, the processing models are distinct, but so too may be the underlying functions used in executing that processing model. Since R requires the subsystems to perform the same function, cases where the underlying functions are distinct would not count as an instance of reuse.

Alternatively, the agent could be acquiring a new processing model instead of importing one from a different domain. The new model may rely on systemic functions that were not used prior to learning. Every cognitive system possesses a wealth of potential systemic functions, not every one of which is used for a cognitive function. For example, the same state of the physical mechanism may contribute different subsets of its properties to the subsystems that execute the cognitive function. On my view, nothing about the system per se needs to change to adapt the systemic functions to the processing problem.
Functional differences can arise from different states of the very same subsystem aiding in the execution of some processing model.16 Again, distinct functions for the subsystem prevent such examples from counting as instances of reuse.

In reply, the objector may argue that though distinct systemic functions may be used in model acquisition learning, those functions are all possessed by the system. Since they are all possessed by the system, the objector infers that from the system’s perspective, reuse is occurring. First, this reply may not be correct: systems may be able to acquire entirely novel systemic functions (for example, in the case of the brain, through neuronal development or plasticity). Second, this reply conflicts with the analyses of use and reuse above. Recall that for use in the sense of U, if a subsystem in the system is used, it must help in the execution of a processing model for some cognitive function. Subsystems may possess latent systemic functions that are not being utilized in model execution. Absent such a role in model execution, these subsystems are not used. Suppose as a result of some learning process, an unused subsystem is recruited to aid in the execution of a processing model. Then, one of its systemic functions will be used. But before this point, it is not; use is a prerequisite for reuse, and so reuse cannot occur.

In model revision learning, the system revises the model being executed, such that before learning, one model was executed, and after learning, a distinct model is executed. (A simple example of this may be arithmetic: There are numerous ways to multiply and divide numbers, all of which provide the same results. A system may start with one method, and then come to learn another.) This too need not result in reuse. While some set of subsystems will execute each model, both before and after learning, there is no requirement that these sets share members (or overlap at all). Furthermore, there is no requirement that the functions used in executing the learned models be drawn from the set of functions used to execute some model or other. The functioning of the system may be such that any number of used and unused functions may exist in the system, and these functions serve as a storehouse to be called upon to execute processing models. Without the same function for the subsystem, such cases do not count as instances of reuse.

Thus, in sum, neither model acquisition learning nor model revision learning entails reuse, and so a fortiori nothing about learning per se trivializes reuse. Though, again, I believe that learning may often result from reuse, and I believe that reuse is rampant in cognitive systems.

This analysis of reuse allows us to differentiate reuse from multi-use. Use occurs when a subsystem’s function results in the subsystem helping to execute a formal cognitive model. Reuse occurs when the subsystem performs the same function but helps to execute different processing models. Multi-use, in contrast, occurs when some other subset of a subsystem’s states is used for executing some other processing model. In that case, the subsystem is the same, even though different aspects of it execute the processing model. Thus, subsystems can be multi-used on this analysis of reuse.
If a subsystem performs different functions for executing the same or different processing model, then it is multi-used:

MU: The use of subsystem s is an instance of multi-use if s is used at least twice, and for at least two instances of use of s, function F1 for one use and function F2 for another use are such that F1 ≠ F2.

We can now see how multi-use and reuse are distinct for subsystems. Both turn on the subsystem aiding in the execution of a processing model by performing some function. Precisely which function is performed matters for distinguishing multi-use from reuse. In multi-use, different functions are performed by the subsystem, regardless of whether or not the subsystem is used for executing the same or different processing models. So long as different functions are performed by the subsystem, then the subsystem is being multi-used sensu MU. In reuse, the subsystem is performing the same function but executing different processing models.17 A compact encoding of these distinctions is sketched below.

The analysis of reuse R satisfies some of the criteria for adequacy for an analysis of reuse adumbrated above. Recall that the first criterion for adequacy required that an analysis specify what precisely is being reused. In the analyses of use and reuse above, subsystems are being reused. Framing the analysis in terms of subsystems was deliberate, leaving room for more specific entities. Furthermore, these subsystems need not be neural systems, permitting non-neural substrates for cognitive systems. Thus, reuse in the sense outlined above satisfies the first criterion, while providing flexibility for different types of subsystems.

Recall that the second criterion for adequacy required that use should be analysed in terms of function and be conceptually distinct from multi-use. Use and reuse were analysed in terms of functions executing processing models. And, as just discussed, multi-use and reuse are analysed differently on the present theory. Thus the present analysis satisfies the second criterion.

The third criterion required that the concept of reuse should be distinct from redundancy and duplication. Redundancy occurs when a back-up system waits to take over should the original malfunction. Duplication occurs when a system consists of two or more duplicate entities that perform the same function; these duplicate entities would seem to be instances of reuse for each other. Both concepts are distinct from reuse as analysed here. The duplicate entities explicitly perform the same function for executing the same model. On R, this is no longer reuse. And redundant entities likewise perform the same function for executing the same model, and so are no longer classified as instances of reuse on R. Both cases arise from considering different kinds of over-determination of model execution. In the case of duplicate entities, the functions may be performed concurrently; whereas in the case of redundant entities, the functions are performed successively. For both, the same processing models are being executed, and this sameness of processing model violates the analysis of reuse.

Finally, the fourth criterion requires empirical adequacy for an analysis of reuse. Empirical adequacy must here be understood relative to what is putatively reused. In my analysis of use and reuse, the subsystems of cognitive systems are the target, leaving room for more specific entities to be reused so long as they are consistent with the definition of a subsystem provided earlier.
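The distinctions among U, R, and MU can be stated compactly. The following sketch (in Python; the record format and example labels are illustrative choices of mine, not part of any cited theory) classifies a subsystem's history of uses, with the ACC example from above encoded as a test case.

```python
from typing import NamedTuple

class Use(NamedTuple):
    function: str  # systemic function F the subsystem performs
    model: str     # processing model M that performance helps execute

def is_reused(uses: list) -> bool:
    """R: the same function F is used to execute distinct processing models."""
    return any(a.function == b.function and a.model != b.model
               for i, a in enumerate(uses) for b in uses[i + 1:])

def is_multiused(uses: list) -> bool:
    """MU: at least two uses of the subsystem involve distinct functions."""
    return any(a.function != b.function
               for i, a in enumerate(uses) for b in uses[i + 1:])

# The ACC example from above: executive control (F) executing both a
# conflict-processing model (M) and a motivation model (M').
acc = [Use('executive control', 'conflict processing'),
       Use('executive control', 'motivation')]
print(is_reused(acc), is_multiused(acc))  # True False
```

On this encoding, reuse and multi-use can co-occur in one subsystem's history, as R and MU permit.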
Next I will argue that case studies from cognitive neurobiology support the reuse of dynamical systems, dynamical subsystems of cognitive systems.

4 The Reuse of Dynamical Systems in Cognitive Systems

Having laid out and motivated a general approach to use and reuse, I will now argue that recent research in cognitive neurobiology has revealed how dynamical subsystems of cognitive systems (henceforth, dynamical systems) are reused for different cognitive functions. Dynamical systems are sets of dynamical properties, properties of physical systems that change either over time or with respect to each other. For example, a basic property of a neuron is the rate at which it discharges action potentials, that is, the propagation of electrical signals. This firing rate is a dynamical property of a neuron. Described using the mathematical tools of dynamical systems theory, these systems are characterized mainly by functional properties that abstract away from the details of the physical mechanisms that implement the dynamics.

Applying the analysis of reuse developed above, the dynamical systems reuse hypothesis is:

D: Cognitive systems reuse the same dynamical systems to execute processing models.

D states that dynamical systems are the relevant type of entity being reused. There are two general types of reuse as applied to dynamical systems, depending on whether the physical mechanisms implementing the dynamical system are the same or different. In both cases, dynamical systems execute different processing models, in line with D and R.

The first type of reuse involves dynamical systems implemented by the same physical mechanism. Recall that a dynamical system is identified as a subset of the dynamical properties of the physical mechanism implementing it. Each instance of the physical mechanism that implements the dynamical system will possess the same suite of dynamical properties with which the dynamical system is identical. Hence, different instances of the same physical mechanism can yield different implementations of the same dynamical system.

A clear example of a reused dynamical system is the integrate-to-bound system. To illustrate the role of the integrate-to-bound system, recall the research programme, discussed above, exploring how animals make perceptual discriminations in noisy conditions. This research programme investigates the perceptual and decision processes for determining the direction of motion of a field of dots using the random dot motion task (RDMT).18 In a typical experiment, monkeys are presented with a visual display of moving dots, only some fraction of which move in a particular direction (‘motion strength’ in Figure 1), for example, either left or right. The noisiness in the signal is due to the fraction of dots that are not moving in the same direction (that is, are not moving coherently), and different fractions of dots move coherently on different trials. Recall that the processing model for this task, the DDM, involves the integration of evidence from a starting point—the prior probabilities of the direction of motion—until some stopping point is reached, such as a decision threshold or the cessation of the stimulus.

Figure 1. Average recorded firing rates from cells in the parietal cortex of monkeys while they make a perceptual decision in different evidential conditions. Adapted from (Roitman and Shadlen [2002], p. 9482).
The LIP, an eye movement control region in the posterior parietal cortex, contains cells that exhibit ramping-up activity preceding a decision. This recorded electrical activity from LIP possesses three relevant dynamical properties (Figure 1). First, these cells have a ‘reset’ point or baseline firing rate just after motion onset (time 0 in the plot on the left in Figure 1). Second, the activity of cells in LIP varies with the motion strength of the stimulus, with steeper increases in firing rate for stronger evidence (the different coloured lines in both plots). Third, the activity of these cells converges on a common firing rate across different motion conditions just prior to the animal initiating a saccade (time 0 in the plot on the right). This pattern of activation partly constitutes an integrate-to-bound dynamical system: a baseline or starting point, a period of integration, and a threshold or boundary.

This pattern of dynamics maps on to the DDM processing model, as alluded to above. The DDM consists in an evidence sampling and summation process that starts with a prior log-odds ratio, samples the evidence to form a likelihood ratio, and then adds this to the prior to obtain a posterior log-odds ratio. This summation proceeds until some stopping criterion, such as a threshold, is reached. First, the DDM starts with a prior log-odds ratio that is the same across different strengths of evidence, just as LIP cells exhibit a baseline and reset at the beginning of presentation of the motion stimulus. Second, the sequential evidence sampling process in the DDM is mapped on to the differential increases in activity in LIP cells encoding the strength of the motion stimulus. Third, the DDM evidence sampling process halts when a threshold is reached, the same across different strengths of evidence, similar to how LIP cells rise to a common value across different motion conditions. Thus, the DDM maps on to the integrate-to-bound system in LIP.

Integration of evidence and a concomitant integration function in neurons have been taken to underlie a variety of perceptual decisions, not just motion discrimination, including disparity discrimination (Uka and DeAngelis [2003]), olfactory discrimination (Uchida and Mainen [2003]; Kepecs et al. [2006]; Uchida et al. [2006]) and vibrotactile perceptual decisions (Romo and Salinas [2001]; Hernandez et al. [2002]), amongst others. The repeated instances of the integrative activity at the neuronal level suggest a similar biophysical mechanism and, as a result, the implementation of the same dynamical system. Of course, there are biophysical differences between neurons in different areas, so the physical mechanisms are similar but not identical. These similar mechanisms, though, are able to produce equivalent dynamics resulting in the implementation of the same dynamical system.

This is perhaps not surprising. The problem facing the organism is similar, even if the information is coding for a different modality (sound, vision, somatosensation, and so on). This cognitive function requires the sampling and integration of evidence, albeit different kinds of evidence, over time. For each instance of a perceptual decision executed in a different modality, is the same or different processing model executed?
This depends on how finely we individuate our processing models. In the discussion of the execution of processing models above, I stated that the processing model must be interpreted, otherwise the properties of the environment that the animal tracks and the cognitive function the model describes both remain ambiguous. Insofar as such an interpretation assigns different referents to the variables encoding the integration of evidence, different functions are being executed, and the integrate-to-bound system is being reused. In the use of dot motion, the variables stand for visual motion evidence. For other sorts of sensory evidence, the variables would stand for the evidence in the respective sensory domains. However, consideration of the neuronal mechanisms suggests that these neurons may merely be integrating neuronal signals. So, perhaps a case could be made for these variables encoding ‘neuronal evidence’ of a sort.

To reinforce this conclusion of reuse of the integrate-to-bound dynamical system, consider another function that is executed by LIP neurons, related to action execution. One stereotypical pattern of firing commonly seen in LIP is a ramp-up in firing prior to executing a saccade (Barash et al. [1991]). In the delayed-saccade task, the animal fixates a centrally presented target while a peripheral target flashes on the screen for a variable amount of time and then extinguishes. The animal is cued to make a saccade by the disappearance of the central fixation target (the ‘go’ cue). Subsequent to the go cue, ramp-up activity, similar to what is seen during the RDMT, is observed in LIP cells (see Figure 2). Here, following the go cue, this LIP cell exhibited a stereotypical increase in firing rate prior to the saccade. The integrative activity prior to a saccade seen in the delayed-saccade task is common for a subset of LIP cells (examples abound; see, for example, Platt and Glimcher [1997]; Gottlieb et al. [1998]; Louie et al. [2011]).

Figure 2. Pre-saccadic increase in activity in an LIP cell. Adapted from (Barash et al. [1991], p. 1100).

What is the processing problem the organism faces? Fundamentally, the system needs to make a decision about when to move. In the delayed-saccade task, the system must remember the location of the target and then initiate the movement. One processing model for eye movement initiation is a rise-to-threshold model, also called the LATER (linear approach to threshold with ergodic rate) model (the model can be applied to other types of movements as well; Carpenter and Williams [1995]; Schall and Thompson [1999]; Reddi and Carpenter [2000]). The LATER model ‘postulates a decision signal S associated with a particular response. When an appropriate stimulus appears, S starts to rise linearly from an initial level S0 at a rate r; upon reaching a pre-specified threshold ST, the saccade is triggered’ (Reddi and Carpenter [2000]).

The integrate-to-bound dynamics observed over many different tasks in LIP can be taken to execute the LATER model for movement initiation. The integrate-to-bound system is reused for the RDMT and the delayed-saccade task. In the RDMT, the integrate-to-bound dynamics in LIP is taken to execute the DDM. In the delayed-saccade task, the integrate-to-bound dynamics in LIP is taken to execute the LATER model.
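The LATER computation is simple enough to render directly. Here is a minimal sketch (in Python; the parameter values are illustrative choices of mine, though trial-to-trial Gaussian variability in the rate r is the standard LATER assumption).

```python
import random

def later_latency(s0=0.0, s_t=1.0, mean_rate=5.0, rate_sd=1.0):
    """LATER sketch: a decision signal S rises linearly from S0 at rate r,
    drawn afresh on each trial from a Gaussian, and a saccade is triggered
    when S reaches the threshold ST. Returns the latency (in seconds)."""
    r = random.gauss(mean_rate, rate_sd)
    while r <= 0:                     # resample the rare non-positive rates
        r = random.gauss(mean_rate, rate_sd)
    return (s_t - s0) / r             # time for S to travel from S0 to ST

latencies = [later_latency() for _ in range(10_000)]
print(sum(latencies) / len(latencies))  # mean latency, roughly 0.2 s here
```

The linear rise to a fixed threshold is what makes the LATER model another member of the integrate-to-bound family, despite its interpretation differing from the DDM's, as discussed next.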
The objection levelled above, for noisy perceptual decisions in different modalities, here maintains that these are not in fact different processing models. But this is simply false. The integration period in the DDM has a clear evidential interpretation that is absent in the integration period in the LATER model. As discussed previously, the mathematical model and the interpretation of the model variables, the external or internal properties being encoded, determine the processing model. There is no evidential integration period in the processing problem facing the animal in the delayed-saccade task. Thus, the processing models executed to meet the processing demands on the system are different models.19

Note that a particular dynamical system might play a role in executing multiple processing models simultaneously. The mapping requirement in R and D is not exclusive. The dynamical properties corresponding to the functions performed by a subsystem that play a role in executing some processing model can be mapped on to some model, M, while also being mapped on to some other model, M′. Such a case would constitute synchronic reuse.

Importantly for the first type of reuse, the mere reoccurrence of a physical mechanism does not entail the implementation of the same dynamical system. Physical mechanisms possess many dynamical properties, and different dynamical systems can be implemented by the same physical mechanism in virtue of this diversity of dynamical properties. Granted this sort of flexibility, simply detailing the physical mechanism doesn’t sufficiently describe the physical mechanism’s cognitive contribution to the system. That contribution has to be determined by looking at the processing model the system is executing and then establishing how that execution occurs via the system’s dynamics. Since physical mechanisms have an array of dynamical properties, one processing model may be executed by one subset of dynamical properties while another model is executed by a different subset. Hence, different processing models can be executed by different dynamical systems implemented on the same physical mechanism.

The second type of reuse describes the same dynamical system implemented by distinct physical mechanisms. Recall that dynamical systems are identical to subsets of the dynamical properties of physical mechanisms. These physical mechanisms are individuated according to the standards and kinds of the relevant science. In the case of neuroscience, physical mechanisms are individuated on the basis of neural properties such as types of neurons, types of neurotransmitters, connectivity profiles, response properties, and so forth. Distinct types of neuronal mechanisms may share some of their dynamical properties while not sharing others. Some such sets of dynamical properties may in fact constitute identical dynamical systems in virtue of being the same pattern of changes in the underlying mechanisms. Thus, type-distinct physical mechanisms can possess identical subsets of their dynamical properties, implementing the same dynamical system. Other accounts of reuse can’t capture these instances of reuse because they focus on the physical systems.20

A clear example of this can be seen by contrasting the case study of perceptual decisions under noise, illustrated above, with a case study drawn from research into strategic foraging decisions. The case study concerns a class of decisions regarding when an animal should depart a depleting food source.
When foraging in an environment where rewards are clustered in patches, animals must determine when to leave the current, depleting patch to travel to a new one. A mathematical model, the marginal value theorem (MVT) (Charnov [1976]), describes the processing necessary for optimal patch-leaving decisions. The MVT determines the energy intake rate as a function of the value associated with the food item, the handling time for consuming the item, the average travel time between patches, and other environmental variables. Maximizing this rate results in a simple decision rule for leaving a patch: leave a patch when the instantaneous energy intake rate in the current patch falls below the average intake rate for all patches in that environment.

In order to investigate the neural mechanisms of patch-leaving decisions, Hayden and colleagues devised a simulacrum of the patch-leaving problem suitable for neural recording (the patch-leaving task in Figure 3). The animal must make a decision about whether to continue to exploit the current patch or to leave the current patch. If the animal chooses to exploit the current patch, it receives a juice reward, which decreases in size as it repeatedly chooses to exploit the same patch. If the animal chooses to leave the current patch, a travel time-out penalty is enforced; but at the end of this delay, the reward for the patch resets to its initial value.

Figure 3. The patch-leaving task. Adapted from (Hayden et al. [2011], p. 934).

Recording neurons in the anterior cingulate cortex sulcus (ACCs), a medial prefrontal cortical structure, Hayden and colleagues uncovered a novel neurophysiological implementation of the integrate-to-bound system. The increase in patch residence time as the monkeys foraged in a patch is encoded by an increase in the peri-saccadic peak response in ACCs neurons (see, for example, the neuron in Figure 4). The firing rate just prior to a decision rises over the course of the patch, akin to an integration. For similar travel times to a new patch, the firing rates in those neurons also rose to a common threshold for different patch leave times. Furthermore, for similar travel times, the initial firing rates at the beginning of the patch were the same. All three elements—baseline, integration, and threshold—present in the LIP implementation of an integrate-to-bound system are also present in the ACC data collected during the foraging task, suggesting the same system is implemented in both regions.

Figure 4. Example neuron from the anterior cingulate cortex, recorded during the patch-leaving task. These cells exhibit a peri-saccadic response that increases over the duration of foraging in a patch. Adapted from (Hayden et al. [2011], p. 935).
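Returning to the MVT rule that these recordings are taken to reflect, the patch-leaving decision itself can be sketched in a few lines (in Python; the depleting reward schedule and timing assumptions are illustrative stand-ins of mine, not the values used by Hayden and colleagues). The rule: leave once the instantaneous intake rate falls below the average rate, where the average includes the travel time between patches.

```python
def patch_leave_step(rewards, travel_time):
    """MVT sketch: leave before the first harvest whose intake would fall
    below the average intake rate so far (harvests plus travel), assuming
    each harvest takes one time step and patches are identical, so the
    within-visit average approximates the environment's long-run rate."""
    total, steps = 0.0, travel_time  # travel time counts against the rate
    for n, r in enumerate(rewards, start=1):
        avg_rate = (total + r) / (steps + 1)
        if r < avg_rate:             # instantaneous rate below average: leave
            return n - 1             # number of harvests taken before leaving
        total, steps = total + r, steps + 1
    return len(rewards)

patch = [10 * 0.8 ** i for i in range(20)]     # a geometrically depleting patch
print(patch_leave_step(patch, travel_time=2))  # 4: leave early
print(patch_leave_step(patch, travel_time=8))  # 6: longer travel, stay longer
```

The qualitative MVT prediction, longer travel times yielding longer patch residence, falls straight out of the rule, which is the pattern the ACC leave times tracked.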
Despite implementing the same dynamical system, the physiological mechanisms are different in LIP and ACC. In the case of LIP and the RDMT, integrative activity occurring on the timescale of hundreds of milliseconds during a single trial is implemented by an increase in firing by individual neurons. In the case of ACC and the patch-foraging task, integrative activity occurring on the timescale of tens of seconds over many trials is implemented by an increase in the peak level of transient activation of individual neurons during a trial. These two different time courses indicate that distinct physiological mechanisms implement the integrate-to-bound system.

The case studies of perceptual decision-making under noise and of strategic foraging decisions illustrate this second type of reuse. Different instances of the same type of dynamical system, the integrate-to-bound system, are implemented by distinct types of physical mechanisms: a temporally continuous increase in firing rates in LIP during the RDMT and a temporally discontinuous increase in firing rates in ACC during the patch-leaving task. The integrate-to-bound system has also been seen in prefrontal cortical areas at the level of the neuronal population, with the population exhibiting integration through its state space (Mante et al. [2013]). Thus, different physical mechanisms, at the level of the single neuron or neuronal population, can implement the same dynamical system.

The processing models being executed, however, are distinct. In the case of LIP, the system executes an integration of evidence function. The specific function executed by the dynamical system in ACC remains unclear, though among the possibilities are assessments of opportunity cost and a comparison of instantaneous-reward intake rates to average-reward intake rates. These two case studies illustrate the execution of distinct cognitive functions—in this case, perceptual and strategic decisions—through the repeated implementation of the same type of dynamical system.

An objection to this second type of reuse of dynamical systems might be raised based on the definition of a subsystem.21 The second type of reuse describes distinct instances of the same dynamical system that are implemented by distinct physical mechanisms. R requires that the same subsystem executes the cognitive function. If the substrates for these dynamical systems are different, then how can the dynamical system be the same? The second type of reuse satisfying D does not appear to be reuse at all; rather, it appears to be a banal instance of use.

In reply, the objection mistakes the entity of reuse. The entities being used here are not the entire physical substrate. Rather, a subset of the dynamics of the substrate determines the identity of the dynamical system. D claims that dynamical systems, and not their physical substrates, are being reused, where dynamical systems are identified with subsets of parts and processes of physical mechanisms. In particular, dynamical systems reflect the processes in which the parts of physical mechanisms engage.22 Since dynamical systems are defined in terms of subsets of the parts and processes of the physical substrate, amounting to equivalence classes of physical substrates, the systems being used are of the type corresponding to that equivalence class.

This discussion points to an ambiguity in R that arises for the case of dynamical subsystems. As defined above, subsystems are sets of parts and processes. In past discussions, the reused subsystem has been interpreted as denoting a set of parts. In this first sense of reuse, the same set of parts performs the same function for executing distinct cognitive functions. Consider again neurons in area LIP. Neurons in LIP encode quantity and value.
That is, the activation (F) of these neurons (s) correlates with the number of items in a set (M) (Roitman et al. [2007]), and the activity (F) of these neurons (s′) in a different context correlates with the value of a choice (M′) (Platt and Glimcher [1997]). In fact, these neurons are taken to encode a number of other cognitive variables, not just number and value, but also shape (Janssen et al. [2008]) and time (Janssen and Shadlen [2005]). LIP neurons are reused for a diverse range of cognitive functions.

This first sense of reuse is the one that has received the most attention in the literature (for example, Anderson [2007a], [2007b], [2007c], [2010], [2014]; McCaffrey [2015]). The explanatory power of this sense of reuse derives from its suggestion that a system's pre-existing subsystems can be adapted for new uses. As Anderson ([2010], p. 246) notes, ‘evolutionary considerations might often favor reusing existing components for new tasks over developing new circuits de novo’. This sense of reuse is close to ‘exaptation’ in the biology literature (Gould and Vrba [1982]). Structures are exapted in biology when they acquire a function distinct from the function for which the structure was selected. Unlike exaptation, however, subsystems are reused in this first sense when they retain their pre-existing function. The function of the subsystem is instead put to a new use for a distinct cognitive function. In this first sense of reuse, subsystems retain their function, but they are used for different overall capacities of the system.

Second, a set of processes could be reused. In this second sense, the same set of processes, potentially possessed by different physical entities, performs the same function for executing distinct cognitive functions. A set of processes can recur with different parts underlying the processes. Subsystems are defined as sets of parts and processes; but there is no restriction against identifying subsystems solely on the basis of just a set of parts or just a set of processes. The most familiar instances of such reuse can be drawn from computer programming: different programs may contain distinct lines of code or distinct subprograms that nonetheless possess the same properties in virtue of performing the same operation, such as sorting values from lowest to highest. This second sense of reuse has been overlooked in discussions of reuse, yet is crucial for understanding the reuse of dynamical systems in cognitive systems.23

The reuse of the integrate-to-bound system for evidence integration in perceptual decision-making and for foraging computations in strategic decision-making well illustrates this second sense of reuse. Unlike the first sense of reuse, the explanatory power of this second sense does not necessarily derive from exaptation, but rather from the functional usefulness of the set of processes for executing some systemic function. Whereas on the first sense, reuse results from exaptation, in the second sense, reuse results from something like evolutionary convergence (McGhee [2011]). Often, convergence in biology describes similarity of structure; in the case of this second sense of reuse, it describes similarity of process. For the second sense of reuse, the similarity is in the functional properties of subsystems, how the subsystem changes over time, and how the component processes—the processes engaged in by the parts, whatever those parts happen to be—are arranged.
In the neural case, the execution of both the evidence-integration process and the foraging computation involves a baseline level of activity, followed by a period of integration, and culminates in reaching a threshold. Because these properties are often described independently of their physical substrates, the same set of functional properties can be associated with distinct physical structures. This discussion suggests that dynamical systems can be described in two different ways that entail distinct types of reuse. On the one hand, if the dynamical subsystem includes the physical parts as well as the processes they engage in, then these systems may be reused in the first sense. On the other hand, if the dynamical subsystem includes the processes engaged in by the physical parts, whatever those may be, but excludes the parts themselves, then these systems may be reused in the second sense. In addition to the different compositions of the subsystem, these two types of reuse reflect different evolutionary justifications. The first sense of reuse justifies claims of reuse on exaptationist grounds; the second sense justifies them on convergentist grounds. The latter type of reuse also suggests that certain processes possess functional properties so useful for executing cognitive functions that these processes will tend to recur in cognitive systems. In sum, regardless of the identity of the physical mechanisms that implement the dynamical system, the same dynamical system may execute different processing models at different times. The same dynamical system is observed in area LIP executing processing for eye movements, perceptual decisions, and more (for example, for categorization, see Freedman and Assad [2006]). Furthermore, these are not instances of the same processing model merely applied to superficially different task environments, as an earlier objection suggested. And the same dynamical system in different areas can execute different models, as is the case for foraging and perceptual decisions. Thus, dynamical systems are reused to execute different cognitive functions. Before concluding, I would like to address an objection arising from a concern about the causal efficacy of dynamical systems. Earlier, I analysed the use and reuse of a subsystem in terms of that subsystem performing a function for a system. A natural interpretation of performing a function requires the possession of causal powers (for example, see Cummins [1975]). However, granted that dynamical systems are the relevant entity of reuse, a problem arises from considering the causal powers of such systems. What is a causal power? We might think of a causal power as something’s ability to stand in the cause–effect relation to another thing, specifically occupying the cause position of that relation. But this leads to a straightforward objection: dynamical systems cannot stand in causal relations qua dynamical systems, since dynamical systems are token-identical to sets of dynamical properties of physical mechanisms, regardless of the physical entities whose dynamics are being analysed. But, the objection runs, only those physical entities can stand in causal relations.24 In reply, note that this problem arises for just about every function ascription where the entity to which a function is ascribed is not at the level of the fundamental constituents of nature, supposing such a level exists. Consider the case of a heart.
The function of a heart is to circulate blood, and this function is understood as the possession of a certain sort of causal power, namely, the causal power to circulate blood. But this causal power is also possessed by the organized components of hearts taken in aggregate, that is, the ventricles, atria, cells, and so forth that compose a heart. Hearts, of course, are identical to this organized set of entities and activities, but that does not change the facts about causal powers. Similarly, the ventricles, atria, cells, and so forth are composed of more fundamental physical entities that will also possess the causal powers of the entities they compose. Causal powers ‘drain away’, to use Block’s ([2003]) phrase, and the causal powers of hearts (or ion channels, or wings, or …) will drain away to their fundamental physical constituents. I contend that dynamical systems are in the same conceptual position as physical mechanisms like hearts. Just as in the case of hearts, dynamical systems are identical to some set of physical properties, namely, dynamical properties. And, just as in the case of hearts, the more fundamental physical substrate possesses the causal powers corresponding to the ascribed function. So, while it may be the case that dynamical systems do not consist in the right sorts of properties to stand in causal relations, they are in the same conceptual position as other sorts of systems to which functions are ascribed, like hearts, albeit for different reasons.25 The problem has not gone away; it has, however, been assimilated to a different, more general problem, and I’ll consider that progress enough.

5 Conclusion

I have argued that an acceptable analysis of reuse will specify the objects of reuse; will account for the role of functions in use and reuse; will conceptually distinguish reuse and multi-use; will conceptually distinguish reuse, redundancy, and duplication; and will be empirically adequate. I analysed use in terms of a subsystem’s function in a cognitive system in executing a processing model, and reuse in terms of the same subsystem being used in executing distinct processing models. For any theory featuring reuse, using this analysis of reuse satisfies the second and third criteria. I then presented a specific application of this analysis of reuse for dynamical systems, using case studies drawn from cognitive neurobiology to illustrate different instances of reuse. I also distinguished two types of reuse for dynamical systems, and distinct evolutionary motivations for each type, grounded in whether the same parts or the same processes are being reused. Thus, my analysis of use and reuse and its subsequent application satisfy all four criteria for an adequate analysis of reuse.

Acknowledgements

I would like to acknowledge a number of pivotal influences that helped pull this article together. This article is a distant relative of a chapter of my dissertation at Duke University, Mindcraft: A Dynamical Systems Theory of Cognition, and many thanks go to the various people who helped me in the process of putting together the dissertation. In philosophy, Alex Rosenberg, Felipe De Brigard, Karen Neander, Michael Anderson, and Michael Weisberg were all helpful. Walter Sinnott-Armstrong and Chris Peacocke are owed special thanks for their detailed criticisms. In neuroscience, Bill Newsome, Josh Gold, John Pearson, and Geoff Adams were all helpful. Michael Platt is owed special thanks for detailed criticism, as well as for keeping a philosopher around the laboratory.
Finally, numerous anonymous reviewers provided many serious challenges over the course of the revisions for this article, and a deep thanks goes out to them.

Footnotes

1 Note that while I focus on reuse for cognitive systems, the concept of reuse seems to be applicable to many different types of system: cognitive, computational, digestive, immune, and potentially many more.
2 Note that backups may not actively perform function F in system S, but they do have the capacity to perform F in S. This connects to my analysis of use and reuse below.
3 These remarks are meant to be neutral about assimilating cognitive functions to causal role functions, selected effects functions, or some other type of function.
4 Pylyshyn ([1984]), amongst others, has extensively discussed such an assignment. Due to limitations of length, I won’t be able to delve into interesting questions such as how to make such an assignment, whether such models are fully representational, or whether there can be intensional as well as extensional interpretations of variables.
5 For ease of expression, in what follows I will often elide the fact that the cognitive system’s subsystems execute only part of the processing model, and speak of these subsystems as executing the processing model simpliciter.
6 Technically, and assuming that every input–output mapping defined by the processing model must be mapped on to some state of the system, the kind of mapping required is an injective homomorphism (a monomorphism). The details of the processing model are mapped on to the subsystem, but there can be residual detail in the subsystem that has no counterpart in the processing model. If some subset of the input–output pairs does not map on to states of the system, then the assumption is violated. Whether or not the system in fact executes the model in such a case will be sensitive to issues such as how many such pairs fail to map; whether the pairs that fail to map share some characteristic, such as their level of grain or their falling outside some interval; the particular behavioural challenge the processing model is meant to solve; and so on. In reality, of course, every processing model will determine some input–output pairs that fail to map precisely on to states of the system, and so resolving these issues will become important.
7 There is a range of equivalence relations of varying strength. Deeper investigation requires more room than is available here, but there are important outstanding issues to be resolved regarding model equivalence for executing cognitive functions. Many thanks to a referee for emphasizing this point.
8 Insofar as we think of a mathematical relation as simply a set of ordered n-tuples F such that F ⊆ ℝⁿ, where ℝⁿ is the n-fold Cartesian product of the set of real numbers with itself, there is no distinction between the two. However, this holds only if our domain is the entire set of reals. This may hold for only some processing models, and it does not hold for subsystem states unless there are uncountably many such states to map on to F. Also, though a mathematical relation may be extensionally equivalent to such a set, there may still be intensional differences between the relation and the set of input–output pairs corresponding to it.
9 In actual cases of model execution, only a range of the model’s input–output pairs, presumably at some level of grain, will map on to the subsystem states. I leave aside these nuances. See also Footnote 6.
10 How reasonable is the denial of such a restriction? Prima facie, restricting the range of models to which a set of subsystem states can be equivalent to a single, unique model seems ad hoc and unjustified. There may be some sets of subsystem states that are in fact so restricted, but to build in such a restriction in principle—to include the restriction as part of the definition of weak equivalence—is surely far too strong.
11 s may execute the processing model only in conjunction with other subsystems s′ of S performing functions F′, where s′ is not identical to s and F′ may be identical to F. I will often ignore this nicety. Also, the processing model must be one that reflects some cognitive function, and this will be assumed in what follows.
12 Each of the following examples is selected because recent research suggests a single underlying function that contributes to distinct cognitive functions. Also, the processing models corresponding to each cognitive function are elided and the functions themselves used in their stead. I discuss a different example at much greater length below.
13 The population coding of the locus of attention or the endpoint of a saccade is a good deal more complicated than this, but the details are not strictly relevant.
14 This distinction between cognitive function and systemic function appears in both the cognitive and biological literature. For example, Bock and von Wahlert ([1965]) distinguish form—the description of the material composition and arrangement of some biological feature—from function—the description of the causal properties of some feature arising from its form—and define the faculty as the combination of the form and the function. Different faculties can result from the same form possessing different functions, prima facie outlining instances of reuse. Anderson ([2010]) and Bergeron ([2007]) draw a similar distinction. Bergeron argues that functional specification can proceed in two modes. The first mode regards the specification of a component’s cognitive role: ‘the function of a particular component […] specified relative to a cognitive process, or group of such processes, in which that component is thought to participate’ (Bergeron [2007], p. 181). The second mode regards the specification of a component’s cognitive working: ‘the component’s function […] specified relative to a cognitive process, or group of such processes, that it is thought to perform’ (Bergeron [2007], p. 181). Whereas I remain neutral on what types of functions these are, Bergeron assimilates the distinction between roles and workings to the distinction between teleological and non-teleological analyses. A teleological analysis ‘is one in which the functional aspect of a structure (or process) is specified relative to a certain end product (or goal) that the structure helps to bring about’, while in a non-teleological analysis, ‘the functional aspect of a structure is specified relative to the particular working that is associated with this structure in any given context’ (Bergeron [2007], p. 182). As applied to cognitive phenomena, teleological analysis reflects cognitive role and non-teleological analysis reflects cognitive working. The distinctions between role and working, and between teleological and non-teleological, prima facie seem orthogonal. Nothing in the concept of a cognitive working—of a component’s performing a function—eo ipso conflicts with or contradicts a teleological analysis.
In particular, the cognitive working is in part determined by which processing model the component systems of a system execute. Insofar as these systems were selected for in virtue of the role they play in implementing these models, a cognitive working can be given a teleological analysis. Likewise, cognitive roles can be given a non-teleological analysis. In particular, the role of a system in executing some processing model can be specified in terms of its contribution to the mapping that constitutes that execution. Anderson agrees that there is a distinction between the functions of local circuits, or circumscribed neural areas, and cognitive functions, invoking the language of ‘workings’ and ‘uses’ from Bergeron (Anderson [2010]). However, he does not take a stand on the type of function being invoked. On my account, the systemic function of a subsystem roughly corresponds to the subsystem’s workings, and the (part of the) processing model the subsystem executes roughly corresponds to the subsystem’s roles.
15 Many thanks to an anonymous reviewer for suggesting this objection.
16 Anderson makes a similar point, noting that ‘it could be that the dynamic response properties of local circuits are fixed, and that cognitive function is a matter of tying together circuits with the right (relative) dynamic response properties’ (Anderson [2010], p. 265). These dynamic response properties need not have been used before being recruited during the learning process to execute the new cognitive model.
17 This approach to multi-use results in something of a challenge: different mathematical descriptions are available for the same function. Since there are different mathematical descriptions, does that entail that multi-use can occur simply by changing the mathematical description of the function attributed to the system? Multi-use should not result merely from redescription. The notion of systemic function is thus something more than the mathematical function that is part of the mathematical description of the systemic function.
18 See Gold and Shadlen ([2001], [2002], [2007]) for extensive discussion of this research.
19 The objector could persist, arguing that the integrative activity in the delayed saccade task reflects evidence about the arrival of the go cue. Perhaps this is right, but only empirical investigation can confirm the hypothesis, perhaps by comparing different temporal circumstances for the go cue. Regardless, the presence of the go cue in this task did not reflect noisy evidence about a perceptual decision, so it still appears to be an instance of a distinct processing problem.
20 Many thanks to an anonymous reviewer for stressing this.
21 Many thanks to an anonymous reviewer for pointing out this potential confusion.
22 Note that this does not exclude all types of parts from being included in dynamical systems, only those that are part of the implementing substrate for the dynamical system. Dynamical systems could include dynamical parts, for example, or other types of functional parts, supposing such an analysis of these types of parts could be provided.
23 Many thanks to a referee for emphasizing the importance of this point.
24 This may not be true on some analyses of causation, such as Woodward’s ([2003]) interventionist approach. I think that, in this case, the objection carries force.
25 I think that a careful conceptual analysis of the relation between function ascription and causal role ascription might be fruitful here.
In particular, the physical mechanism might possess the causal powers, but the function is nevertheless ascribed to the dynamical system. Compare the case of wings and flight: the causal powers might devolve on to the fundamental physical constituents of wings; nonetheless, the function of providing lift (or what have you) is ascribed to the wings. The difference, of course, is that wings (or hearts) are physical mechanisms—that is, mechanisms that in part denote physical entities—whereas dynamical systems are not. I maintain that this difference does not matter for ascribing functions.

References

Anderson M. L. [2007a]: ‘Evolution of Cognitive Function via Redeployment of Brain Areas’, The Neuroscientist, 13, pp. 13–21.
Anderson M. L. [2007b]: ‘The Massive Redeployment Hypothesis and the Functional Topography of the Brain’, Philosophical Psychology, 20, pp. 143–74.
Anderson M. L. [2007c]: ‘Massive Redeployment, Exaptation, and the Functional Integration of Cognitive Operations’, Synthese, 159, pp. 329–45.
Anderson M. L. [2010]: ‘Neural Reuse: A Fundamental Organizational Principle of the Brain’, Behavioral and Brain Sciences, 33, pp. 245–66.
Anderson M. L. [2014]: After Phrenology, Oxford: Oxford University Press.
Barash S., Bracewell R. M., Fogassi L., Gnadt J. W. and Andersen R. A. [1991]: ‘Saccade-Related Activity in the Lateral Intraparietal Area, II: Spatial Properties’, Journal of Neurophysiology, 66, pp. 1109–24.
Bergeron V. [2007]: ‘Anatomical and Functional Modularity in Cognitive Science: Shifting the Focus’, Philosophical Psychology, 20, pp. 175–95.
Block N. [2003]: ‘Do Causal Powers Drain Away?’, Philosophy and Phenomenological Research, 67, pp. 133–50.
Bock W. J. and von Wahlert G. [1965]: ‘Adaptation and the Form–Function Complex’, Evolution, 19, pp. 269–99.
Carpenter R. and Williams M. [1995]: ‘Neural Computation of Log Likelihood in Control of Saccadic Eye Movements’, Nature, 377, pp. 59–62.
Charnov E. L. [1976]: ‘Optimal Foraging, the Marginal Value Theorem’, Theoretical Population Biology, 9, pp. 129–36.
Churchland P. M. [2012]: Plato’s Camera, Cambridge, MA: MIT Press.
Cummins R. [1975]: ‘Functional Analysis’, Journal of Philosophy, 72, pp. 741–65.
Dehaene S. [2005]: ‘Evolution of Human Cortical Circuits for Reading and Arithmetic: The “Neuronal Recycling” Hypothesis’, in S. Dehaene, J.-R. Duhamel, M. D. Hauser and G. Rizzolatti (eds), From Monkey Brain to Human Brain, Cambridge, MA: MIT Press, pp. 133–57.
Devinsky O., Morrell M. J. and Vogt B. A. [1995]: ‘Contributions of Anterior Cingulate Cortex to Behaviour’, Brain, 118, pp. 279–306.
Eliasmith C. [2013]: How to Build a Brain: A Neural Architecture for Biological Cognition, Oxford: Oxford University Press.
Freedman D. J. and Assad J. A. [2006]: ‘Experience-Dependent Representation of Visual Categories in Parietal Cortex’, Nature, 443, pp. 85–8.
Gallese V. [2008]: ‘Mirror Neurons and the Social Nature of Language: The Neural Exploitation Hypothesis’, Social Neuroscience, 3, pp. 317–33.
Gallese V. and Lakoff G. [2005]: ‘The Brain’s Concepts: The Role of the Sensory-Motor System in Conceptual Knowledge’, Cognitive Neuropsychology, 22, pp. 455–79.
Gold J. I. and Shadlen M. N. [2001]: ‘Neural Computations That Underlie Decisions about Sensory Stimuli’, Trends in Cognitive Sciences, 5, pp. 10–16.
Gold J. I. and Shadlen M. N. [2002]: ‘Banburismus and the Brain: Decoding the Relationship between Sensory Stimuli, Decisions, and Reward’, Neuron, 36, pp. 299–308.
Gold J. I. and Shadlen M. N. [2007]: ‘The Neural Basis of Decision Making’, Annual Review of Neuroscience, 30, pp. 535–74.
Gottlieb J. P., Kusunoki M. and Goldberg M. E. [1998]: ‘The Representation of Visual Salience in Monkey Parietal Cortex’, Nature, 391, pp. 481–4.
Gould S. J. and Vrba E. S. [1982]: ‘Exaptation—A Missing Term in the Science of Form’, Paleobiology, 8, pp. 4–15.
Haugeland J. [1985]: Artificial Intelligence: The Very Idea, Cambridge, MA: MIT Press.
Hayden B. Y., Pearson J. M. and Platt M. L. [2011]: ‘Neuronal Basis of Sequential Foraging Decisions in a Patchy Environment’, Nature Neuroscience, 14, pp. 933–9.
Hernandez A., Zainos A. and Romo R. [2002]: ‘Temporal Evolution of a Decision-Making Process in Medial Premotor Cortex’, Neuron, 33, pp. 959–72.
Holroyd C. B. and Yeung N. [2012]: ‘Motivation of Extended Behaviors by Anterior Cingulate Cortex’, Trends in Cognitive Sciences, 16, pp. 122–8.
Hurley S. [2005]: ‘The Shared Circuits Hypothesis: A Unified Functional Architecture for Control, Imitation, and Simulation’, Perspectives on Imitation, 1, pp. 177–94.
Janssen P. and Shadlen M. N. [2005]: ‘A Representation of the Hazard Rate of Elapsed Time in Macaque Area LIP’, Nature Neuroscience, 8, pp. 234–41.
Janssen P., Srivastava S., Ombelet S. and Orban G. A. [2008]: ‘Coding of Shape and Position in Macaque Lateral Intraparietal Area’, Journal of Neuroscience, 28, pp. 6679–90.
Jungé J. A. and Dennett D. C. [2010]: ‘Multi-use and Constraints from Original Use’, Behavioral and Brain Sciences, 33, pp. 277–8.
Kepecs A., Uchida N. and Mainen Z. F. [2006]: ‘The Sniff as a Unit of Olfactory Processing’, Chemical Senses, 31, pp. 167–79.
Krauzlis R. J., Lovejoy L. P. and Zénon A. [2013]: ‘Superior Colliculus and Visual Spatial Attention’, Annual Review of Neuroscience, 36, pp. 165–82.
Lee C., Rohrer W. H. and Sparks D. L. [1988]: ‘Population Coding of Saccadic Eye Movements by Neurons in the Superior Colliculus’, Nature, 332, pp. 357–60.
Louie K., Grattan L. E. and Glimcher P. W. [2011]: ‘Value-Based Gain Control: Divisive Normalization in Parietal Cortex’, Journal of Neuroscience, 31, pp. 10627–39.
MacDonald A. W., Cohen J. D., Stenger V. A. and Carter C. S. [2000]: ‘Dissociating the Role of the Dorsolateral Prefrontal and Anterior Cingulate Cortex in Cognitive Control’, Science, 288, pp. 1835–8.
Machamer P., Darden L. and Craver C. F. [2000]: ‘Thinking about Mechanisms’, Philosophy of Science, 67, pp. 1–25.
Maia T. V., Cooney R. E. and Peterson B. S. [2008]: ‘The Neural Bases of Obsessive–Compulsive Disorder in Children and Adults’, Development and Psychopathology, 20, pp. 1251–83.
Mante V., Sussillo D., Shenoy K. V. and Newsome W. T. [2013]: ‘Context-Dependent Computation by Recurrent Dynamics in Prefrontal Cortex’, Nature, 503, pp. 78–84.
Marr D. [1982]: Vision, New York: Henry Holt.
Mazurek M. E., Roitman J. D., Ditterich J. and Shadlen M. N. [2003]: ‘A Role for Neural Integrators in Perceptual Decision Making’, Cerebral Cortex, 13, pp. 1257–69.
McCaffrey J. B. [2015]: ‘The Brain’s Heterogeneous Functional Landscape’, Philosophy of Science, 82, pp. 1010–22.
McGhee G. R. [2011]: Convergent Evolution: Limited Forms Most Beautiful, Cambridge, MA: MIT Press.
Neander K. [1991]: ‘Functions as Selected Effects: The Conceptual Analyst’s Defense’, Philosophy of Science, 58, pp. 168–84.
Németh G., Hegedüs K. and Molnár L. [1988]: ‘Akinetic Mutism Associated with Bicingular Lesions: Clinicopathological and Functional Anatomical Correlates’, European Archives of Psychiatry and Neurological Sciences, 237, pp. 218–22.
Paus T. [2001]: ‘Primate Anterior Cingulate Cortex: Where Motor Control, Drive, and Cognition Interface’, Nature Reviews Neuroscience, 2, pp. 417–24.
Platt M. L. and Glimcher P. W. [1997]: ‘Responses of Intraparietal Neurons to Saccadic Targets and Visual Distractors’, Journal of Neurophysiology, 78, pp. 1574–89.
Pylyshyn Z. W. [1984]: Computation and Cognition, Cambridge, MA: MIT Press.
Reddi B. and Carpenter R. [2000]: ‘The Influence of Urgency on Decision Time’, Nature Neuroscience, 3, pp. 827–30.
Roitman J. D., Brannon E. M. and Platt M. L. [2007]: ‘Monotonic Coding of Numerosity in Macaque Lateral Intraparietal Area’, PLoS Biology, 5, p. e208.
Roitman J. D. and Shadlen M. N. [2002]: ‘Response of Neurons in the Lateral Intraparietal Area during a Combined Visual Discrimination Reaction Time Task’, Journal of Neuroscience, 22, pp. 9475–89.
Romo R. and Salinas E. [2001]: ‘Touch and Go: Decision-Making Mechanisms in Somatosensation’, Annual Review of Neuroscience, 24, pp. 107–37.
Schall J. D. and Thompson K. G. [1999]: ‘Neural Selection and Control of Visually Guided Eye Movements’, Annual Review of Neuroscience, 22, pp. 241–59.
Shagrir O. [2010]: ‘Marr on Computational-Level Theories’, Philosophy of Science, 77, pp. 477–500.
Shenhav A., Botvinick M. M. and Cohen J. D. [2013]: ‘The Expected Value of Control: An Integrative Theory of Anterior Cingulate Cortex Function’, Neuron, 79, pp. 217–40.
Sparks D. L. [1986]: ‘Translation of Sensory Signals into Commands for Control of Saccadic Eye Movements: Role of Primate Superior Colliculus’, Physiological Reviews, 66, pp. 118–71.
Uchida N., Kepecs A. and Mainen Z. F. [2006]: ‘Seeing at a Glance, Smelling in a Whiff: Rapid Forms of Perceptual Decision Making’, Nature Reviews Neuroscience, 7, pp. 485–91.
Uchida N. and Mainen Z. F. [2003]: ‘Speed and Accuracy of Olfactory Discrimination in the Rat’, Nature Neuroscience, 6, pp. 1224–9.
Uka T. and DeAngelis G. C. [2003]: ‘Contribution of Middle Temporal Area to Coarse Depth Discrimination: Comparison of Neuronal and Psychophysical Sensitivity’, Journal of Neuroscience, 23, pp. 3515–30.
Usher M. and McClelland J. L. [2001]: ‘The Time Course of Perceptual Choice: The Leaky, Competing Accumulator Model’, Psychological Review, 108, pp. 550–92.
Wang X.-J. [2002]: ‘Probabilistic Decision Making by Slow Reverberation in Cortical Circuits’, Neuron, 36, pp. 955–68.
Wang X.-J. [2012]: ‘Neural Dynamics and Circuit Mechanisms of Decision-Making’, Current Opinion in Neurobiology, 22, pp. 1039–46.
Wolpert D. M., Doya K. and Kawato M. [2003]: ‘A Unifying Computational Framework for Motor Control and Social Interaction’, Philosophical Transactions of the Royal Society of London Series B, 358, pp. 593–602.
Woodward J. [2003]: Making Things Happen: A Theory of Causal Explanation, Oxford: Oxford University Press.

© The Author(s) 2017. Published by Oxford University Press on behalf of the British Society for the Philosophy of Science.
Redundancy occurs when there are backup entities in a system. These backup entities are distinct from the primary entity, but they may be of the same type and perform the same function.2 However, we do not want to count these redundancies as being reused; intuitively, a redundant entity is not one that is currently being used let alone reused. Entities are duplicated when two distinct entities are incapable of performing some particular function on their own. In this case, the function is overdetermined for the system. However, again intuitively, duplicate entities over-determining an outcome should be conceptually distinct from reuse. Keeping these concepts distinct is the third criterion for an analysis of reuse. Finally, a reuse theory should be empirically adequate. Given the appearance of the concept in various theories of cognition and the putative reuse of many different cognitive entities, a philosophical analysis of reuse should strive to cover as wide a range of theories and phenomena as possible. This is not to say that these examples will always reflect true claims about use, multi-use, or reuse. An analysis of reuse may result in certain judgements that are counter-intuitive or counter-culture compared to the accepted instances of the concept. For example, some putative instances of reuse may be judged as instances of multi-use. But an analysis that results in fewer such reclassifications is, ceteris paribus, better than one that results in more. Likewise, a philosophical analysis should strive to find the concept of reuse that unifies the most instances of the unexplicated concept. Again ceteris paribus, a philosophical account that distinguishes a core concept of reuse, or fewer core concepts of reuse, is preferable to one that distinguishes multiple, or more, core concepts of reuse. In sum, an analysis of reuse should satisfy four basic criteria: First, the analysis should specify what is being reused, while respecting the potential range of physical substrates for cognitive systems. Second, the analysis should appeal to functions and differentiate reuse from multi-use. Third, the analysis should keep the concept of reuse distinct from redundancy and from duplicate entities. Fourth, the analysis should be empirically adequate, accommodating the evidence for reuse as best as possible while also unifying the uses of the concept of reuse as best as possible. In the following, I will present an analysis of the concept of reuse suitable for a range of theories of cognition, starting with an analysis of the use of a subsystem. 3 Use and Reuse Having stipulated a number of criteria for an adequate reuse theory, I will now develop an analysis of reuse general enough for a range of theories of cognition. I apply this analysis to some examples of reuse drawn from cognitive neuroscience, as well as later illustrating the analysis with examples of the reuse of dynamical systems. The key to understanding reuse is to start with an analysis of the use of an entity. Here, I focus on subsystems of cognitive systems: collections of parts such as neurons, neurotransmitters, and so forth in the case of the brain, or layers of units, connections between units, and so forth in the case of neural networks. A subsystem is some subset of the parts and processes of a system. This broad definition is meant to capture the many different ways systems can be analysed, as any collection of parts and processes of a system will be a subsystem. 
Of course, many subsystems are organized collections of such parts and processes, where the parts stand in particular relations to each other and are organized into processes so as to produce the products of those processes (cf. Machamer et al. [2000]); but on the broad use of the term herein, any collection of parts and processes constitutes a subsystem. A subsystem is used when it performs some function for the cognitive system, where that function is in some way related to the cognitive functions of the system. Subsystem s performs some function, F, for cognitive system S, such that performing that function totally or in part allows S to perform some cognitive function, C. In order to analyse use, then, a brief digression on cognitive functions is required.3 Cognitive systems, such as some organisms, are faced with a range of behavioural challenges that reflect the particular exigencies they face in navigating and manipulating the world. Different organisms face different challenges, and the particular challenges faced by an organism reflect the context within which the organism is acting. Perception, decision-making, and other cognitive functions require the organism to keep track of objects in the environment, the various rewards and uncertainties associated with actions, and so forth. An animal in the forest will face the need to detect predators occluded by foliage, whereas an animal in the desert will face different predatory dangers. Humans have perhaps the most diverse environments to navigate. Different objects will require processing different types of properties, different rewarding contexts will place different evaluative demands on the organism, different perceptual contexts will require different attentional commitments, and so forth. Mathematical models describe the processing used to overcome these challenges. These processing models specify the variables that the system encodes, the mathematical relationships between these variables, and the way that these encoded variables are transformed in order to accomplish the goal of the organism, namely, to respond to the behavioural challenge (Marr calls such an account a ‘theory’ of the relevant cognitive domain; Marr [1982]; Shagrir [2010]). For example, consider the case of a subject making a perceptual decision when the sensory evidence is equivocal, such as choosing the direction of motion of a field of dots, some proportion of which are moving in the same direction. The subject must integrate evidence from the environment in order to make the decision. This process can be described in various mathematical ways, including using a formalism known as the drift diffusion model (DDM), which is provably optimal for this class of problems (Gold and Shadlen [2002]). The DDM starts with a prior log-odds ratio—the ratio of the prior probabilities of the dots moving left or right—and then computes the likelihood fraction for the observed motion evidence over some period of time, adding this weight of evidence to the priors to arrive at a posterior log-odds ratio. If a threshold for the posterior ratio has not yet been reached, the process iterates until a threshold is reached—for example, the system runs out of evidence (say, if the dot display disappears) or some other stopping criterion is met. Once the process halts, a decision is made according to some decision rule, based on, for example, which threshold was reached, or which direction of motion had the greater amount of evidence. 
Numerous other examples of processing models can be adduced. Importantly, these processing models must be interpreted. Simply providing a mathematical function with uninterpreted variables and parameters is insufficient to specify the processing model. In addition to the parameters, the variables, and their interrelations, a referential assignment for the variables and parameters in the model must be given.4 This process of referential assignment provides an interpretation of the processing model. Only once an interpretation is provided can a model be identified. Otherwise, a model would consist in an empty formalism, some equation that fails to connect with the environmental properties that must be tracked and the variables transformed for adaptive behaviour. These processing models are distinct from, though executed by, the subsystems of cognitive systems. Processing models are executed by cognitive systems in virtue of the subsystems jointly executing the processing model, where a cognitive subsystem executes a part of a processing model only if there is a mapping between some subset of the elements of the model and the elements of the mathematical description of the subsystem.5 The subsystem is described mathematically such that there are distinct states for distinct inputs of the processing model, distinct states for distinct outputs of the processing model, and a counterfactual mapping such that the sequence of states are arranged to preserve the input–output mapping that characterizes the processing model. If the subsystem is equivalent to the processing model, then there is such a mapping. In other words, the subsystem must satisfy a mapping for it to play a role in executing the processing model.6 Importantly, however, satisfying such a mapping, though necessary, is not sufficient to establish that a subsystem executes a processing model. There are at least two types of equivalence: strong and weak.7 Weak equivalence is a mapping of the inputs and outputs of the processing model on to the description of the cognitive subsystem. Strong equivalence occurs when the input, output, and mathematical relations between the inputs and outputs are mapped on to the subsystem.8 In cases of strong equivalence, not only are there distinct states for the inputs and outputs, but in addition, the way that the subsystem transforms the input into the output can be described using the very same mathematical relations that occur in the processing model. Assuming cognitive systems do execute processing models, strong equivalence is too strong a requirement. The mathematical functions describing the subsystems need not correspond to the functions that are present in the processing model. Consider the example of noisy perceptual decisions discussed earlier. Research into the neural mechanisms of such perceptual decision-making in the lateral intraparietal area (LIP)—a posterior region of the brain that helps control eye movements—reveals the presence of a dynamical pattern of neural activity involving integration of a neural signal to a boundary (Roitman and Shadlen [2002]; Gold and Shadlen [2007]). Integrative activity such as is seen in LIP during noisy perceptual decisions can be described using different mathematical functions, such as integration, exponentiation (a simple first-order differential equation), linear summation, or even using a prototypical ‘squashing’ function (Usher and McClelland [2001]; Wang [2002]; Mazurek et al. [2003]; Churchland [2012]; Eliasmith [2013]). 
None of these functions are precisely the same mathematical function as the sequential probability ratios calculated in the DDM, though they all share certain properties, such as the shape of the function, that capture some aspect of the integration process described by the model. In each such description, the DDM maps on to the subsystem, even though the mathematical descriptions utilize functions distinct from the ones in the model. A natural rejoinder notes that this mapping does not deductively imply that weak equivalence is necessary for executing a cognitive model. However, the inference is not a deduction about the necessity of weak equivalence. The grounds for the inference are inductive, based on the description of the subsystem, the processing model mapping on to the subsystem, and the explanatory power of the weak equivalence relation. Thus, strong equivalence is not required for a subsystem to execute a processing model. If the processing model is executed, then the model’s input–output sequence maps on to the subsystem states—that is, the processing model is weakly equivalent to the subsystem. Weak equivalence imposes a counterfactual constraint on cognitive function execution as well. Not only must it be the case that the processing model for the cognitive function is weakly equivalent to the subsystem, it must also be the case that if the inputs to the model were different, such that the processing model dictates a different set of values for the variables, then the subsystem would also be in a different set of states corresponding to those different values.9 Any number of different processing models will be weakly equivalent to the subsystems of cognitive systems; enforcing the counterfactual constraint restricts the range of alternative models weakly equivalent to the subsystems. In sum, a subsystem executes a model of a cognitive function only if there is a weak equivalence between the processing model and the subsystem executing the cognitive function. What else is needed to guarantee model execution by a subsystem? Weak equivalence is a necessary but not sufficient condition. Deeper discussion of this critical question, important not just for cognitive neuroscience but also for computer science and for a number of philosophical puzzles such as rule-following, is outside the purview of this article. However, we can add one further constraint to the necessary conditions for model execution, meant to guarantee that whatever further requirements arise in a mature analysis of the execution relation, these requirements will not conflict with the analysis of reuse provided below. This constraint is that whatever states of the subsystem are picked out as being weakly equivalent to the model, those states are not restricted to being weakly equivalent only to that model. In other words, there must not be a uniqueness requirement, such that the subsystem’s states are permitted to be weakly equivalent to only one processing model.10 Having provided a sketch of what a cognitive function is and a necessary requirement on the systems that execute them, an analysis of reuse can now be formulated. Focusing on subsystems as the relevant entity, we can now analyse use as follows: U: System s is used in cognitive system S if s is a subsystem of S and s performs function F in S such that s’s performing F in S executes some processing model, M.11 Critically, note that the variable binding M in U is existential, not universal. 
U states that a system is used in a cognitive system if, in addition to being a subsystem of the system, the subsystem’s function or functions (in part) execute some processing model. Earlier, a necessary condition for model execution was weak equivalence: a mapping between the inputs and outputs of the model and the states of the executing system. To say that s’s performing F in S helps to execute the model is to say that ascribing function F to s in S denotes certain states of s to which (parts of) the processing model is (are) weakly equivalent. Not every state of s is relevant to executing some processing model; ascribing function F picks out the relevant states. This analysis of use clearly ties s’s functions to processing model M executed in part by s’s function. Using ‘use’ to pick out use in the sense of U, reuse is then: R: A subsystem, s, of S is reused if s is used for some processing model, M, s is also used for some processing model, M′, function F for M = function F′ for M′, and processing model M ≠ processing model M′. For reuse, function F that s performs for M must be the same function F that s performs for M′, while processing models M and M′ executed by s must not be identical. This flexibility makes the notion of reuse so appealing: the same type of thing performing the same function for some system is utilized to execute different processing models—that is, is used for distinct cognitive functions. In addition, the requirement for two distinct processing models explains the added constraint on any further analysis of model execution, namely, that such an analysis must not restrict execution to a single formal model. To briefly illustrate reuse, consider the following two examples.12 The anterior cingulate cortex (ACC), a medial prefrontal cortical region, is implicated in numerous cognitive functions including conflict detection (MacDonald et al. [2000])—when subjects are presented with conflicting sensory cues for making a decision—as well as motivation (Devinsky et al. [1995]; Holroyd and Yeung [2012])—lesions to this area result in reduced motor activity and speech (Németh et al. [1988]) and patients exhibiting compulsions have hyperactivation in the region (Maia et al. [2008]). ACC is hypothesized to execute executive functions related to cognitive control (for example, Paus [2001]; Shenhav et al. [2013]), capturing the region’s role in many different cognitive functions. Thus, the ACC (s) executes executive control (F) for processing conflict (M) and for motivation (M′). The superior colliculus (SC) is a structure in the midbrain that controls eye movements (Sparks [1986]; Lee et al. [1988]) as well as attention (Krauzlis et al. [2013]) and contains a topographic map of visual space. In both cases, the level of activity of an SC neuron reflects the relative contribution of that neuron’s preferred location in visual space to the determination of the target of the eye movement or the focus of attention.13 Thus, a given neuron in the SC (s) increases in firing rate (F) both to determine the end point of an eye movement (M) and to determine the focus of attention (M′). The above analyses of use and reuse are explicitly functional, but what concept of function is this? This analysis of reuse requires that the system’s function be mathematically characterized. There are many extant approaches to function, such as causal role functions (Cummins [1975]) or selected effects functions (Neander [1991]). 
Prima facie, this requirement can be satisfied by causal role, selected effects, or some other notion of function. The approach to reuse presented here is neutral between these different concepts. Whatever notion of function is invoked, however, the function must be susceptible to mathematical description in such a way that it picks out a set of input–output pairs on to which the processing model can be mapped. Executing a cognitive function requires a mapping between the states picked out by the subsystem’s systemic function and the processing model reflecting the system’s cognitive function.14 This approach to reuse faces a challenge from learning.15 Learning can be thought of as changes internal to a cognitive system that result in the execution of some processing model. Prior to learning, however, the processing model was not being executed. Trivially, then, every instance of learning involves the system executing a new processing model. Thus, cognitive subsystems before learning execute different models from those after learning, and so every instance of learning seems to involve an instance of reuse. But if every instance of learning involves reuse, then not only is reuse rampant in cognitive systems, reuse is seemingly trivialized. While I happen to think that reuse is rampant in cognitive systems, and that there are good reasons (including learning) for thinking that reuse is rampant, I don’t think that learning is the only reason to think that reuse is rampant. Furthermore, I don’t think every instance of learning involves reuse, and so I don’t think reuse is trivial. Note that at least two distinct types of learning relevant for this discussion: Learning could involve a shift from disorganized behaviour (where the processing model, insofar as there is such, results in random behaviour) to organized behaviour. Call this ‘model acquisition learning’. Learning could also involve a shift from organized behaviour to organized behaviour (where the two processing models are not identical). Call this model ‘revision learning’. The same systemic function is not reused in every instance of learning. In model acquisition learning, agents shift from disorganized to organized behaviour. The resulting organized behaviour could result from the execution of a processing model used in a different domain for the new domain. In executing the model in the new domain, the same systemic functions used in previous executions may be used in the new execution of the model. But the functions used in executing the new model may be distinct from whatever functions were used prior to learning to generate the disorganized behaviour. Thus, the processing models are distinct, but so too may be the underlying functions used in executing that processing model. Since R requires the subsystems to perform the same function, cases where the underlying functions are distinct would not count as an instance of reuse. Alternatively, the agent could be acquiring a new processing model instead of importing one from a different domain. The new model may rely on systemic functions that were not used prior to learning. Every cognitive system possesses a wealth of potential systemic functions, not every one of which is used for a cognitive function. For example, the same state of the physical mechanism may contribute different subsets of its properties to the subsystems that execute the cognitive function. On my view, nothing about the system per se needs to change to adapt the systemic functions to the processing problem. 
Functional differences can arise from different states of the very same subsystem aiding in the execution of some processing model.16 Again, distinct functions for the subsystem prevent such examples from counting as instances of reuse.

In reply, the objector may argue that though distinct systemic functions may be used in model acquisition learning, those functions are all possessed by the system. Since they are all possessed by the system, the objector infers that, from the system’s perspective, reuse is occurring. First, this reply may not be correct: systems may be able to acquire entirely novel systemic functions (for example, in the case of the brain, through neuronal development or plasticity). Second, this reply conflicts with the analyses of use and reuse above. Recall that for use in the sense of U, if a subsystem in the system is used, it must help in the execution of a processing model for some cognitive function. Subsystems may possess latent systemic functions that are not being utilized in model execution. Absent such a role in model execution, these subsystems are not used. Suppose that, as a result of some learning process, an unused subsystem is recruited to aid in the execution of a processing model. Then one of its systemic functions will be used. But before that point, it is not used; and since use is a prerequisite for reuse, reuse cannot occur.

In model revision learning, the system revises the model being executed, such that before learning, one model was executed, and after learning, a distinct model is executed. (A simple example of this may be arithmetic: there are numerous ways to multiply and divide numbers, all of which provide the same results. A system may start with one method, and then come to learn another.) This too need not result in reuse. While some set of subsystems will execute each model, both before and after learning, there is no requirement that these sets share members (or overlap at all). Furthermore, there is no requirement that the functions used in executing the learned models be drawn from the set of functions used to execute some model or other. The functioning of the system may be such that any number of used and unused functions may exist in the system, and these functions serve as a storehouse to be called upon to execute processing models. Without the same function for the subsystem, such cases do not count as instances of reuse. Thus, in sum, neither model acquisition learning nor model revision learning entails reuse, and so a fortiori nothing about learning per se trivializes reuse. Though, again, I believe that learning may often result in reuse, and I believe that reuse is rampant in cognitive systems.

This analysis of reuse allows us to differentiate reuse from multi-use. Use occurs when a subsystem’s function results in the subsystem helping to execute a formal cognitive model. Reuse occurs when the subsystem performs the same function but helps to execute different processing models. Multi-use, in contrast, occurs when some other subset of a subsystem’s states is used for executing some other processing model. In that case, the subsystem is the same, even though different aspects of it execute the processing model. Thus, subsystems can be multi-used on this analysis of reuse.
If a subsystem performs different functions for executing the same or different processing models, then it is multi-used:

MU: The use of subsystem s is an instance of multi-use if s is used at least twice, and for at least two instances of use of s, function F1 for one use and function F2 for another use are such that F1 ≠ F2.

We can now see how multi-use and reuse are distinct for subsystems. Both turn on the subsystem aiding in the execution of a processing model by performing some function. Precisely which function is performed matters for distinguishing multi-use from reuse. In multi-use, different functions are performed by the subsystem, regardless of whether the subsystem is used for executing the same or different processing models. So long as different functions are performed by the subsystem, the subsystem is being multi-used sensu MU. In reuse, the subsystem performs the same function but executes different processing models.17

The analysis of reuse R satisfies some of the criteria for adequacy adumbrated above. Recall that the first criterion for adequacy required that an analysis specify what precisely is being reused. In the analyses of use and reuse above, subsystems are being reused. Framing the analysis in terms of subsystems was deliberate, leaving room for more specific entities. Furthermore, these subsystems need not be neural systems, permitting non-neural substrates for cognitive systems. Thus, reuse in the sense outlined above satisfies the first criterion, while providing flexibility for different types of subsystems.

Recall that the second criterion for adequacy required that use be analysed in terms of function and be conceptually distinct from multi-use. Use and reuse were analysed in terms of functions executing processing models. And, as just discussed, multi-use and reuse are analysed differently on the present theory. Thus the present analysis satisfies the second criterion.

The third criterion required that the concept of reuse be distinct from redundancy and duplication. Redundancy occurs when a back-up system waits to take over should the original malfunction. Duplication occurs when a system consists of two or more duplicate entities that perform the same function; these duplicate entities would seem to be instances of reuse for each other. Both concepts are distinct from reuse as analysed here. The duplicate entities explicitly perform the same function for executing the same model; on R, this does not count as reuse. Redundant entities likewise perform the same function for executing the same model, and so are not classified as instances of reuse on R. Both cases involve a kind of over-determination of model execution. In the case of duplicate entities, the functions may be performed concurrently; in the case of redundant entities, the functions are performed successively. For both, the same processing models are being executed, and this sameness of processing model violates the analysis of reuse.

Finally, the fourth criterion requires empirical adequacy for an analysis of reuse. Empirical adequacy must here be understood relative to what is putatively reused. In my analysis of use and reuse, the subsystems of cognitive systems are the target, leaving room for more specific entities to be reused so long as they are consistent with the definition of a subsystem provided earlier.
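The logical shape of U, R, and MU can be made concrete with a small sketch. The following Python fragment is my own illustration rather than part of the formal account: use records are modelled as (subsystem, function, model) triples, and R and MU become predicates over sets of such records. All names here (`Use`, `is_reused`, `is_multi_used`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Use:
    """One record of use in the sense of U: subsystem s performs
    systemic function f in (partly) executing processing model m."""
    s: str  # subsystem
    f: str  # systemic function performed by s
    m: str  # processing model executed (in part) via f

def is_reused(s, uses):
    """R: s is reused iff it performs the SAME function for
    DISTINCT processing models."""
    rs = [u for u in uses if u.s == s]
    return any(a.f == b.f and a.m != b.m for a in rs for b in rs)

def is_multi_used(s, uses):
    """MU: s is multi-used iff it performs DISTINCT functions,
    whether for the same or for different models."""
    rs = [u for u in uses if u.s == s]
    return any(a.f != b.f for a in rs for b in rs)

# The ACC example from above: executive control (F) is used both for
# conflict processing (M) and for motivation (M'), so R is satisfied.
uses = {Use("ACC", "executive control", "conflict processing"),
        Use("ACC", "executive control", "motivation")}
assert is_reused("ACC", uses) and not is_multi_used("ACC", uses)
```

The contrast between the two predicates mirrors the prose: reuse quantifies over pairs of uses with the same F but different M, while multi-use quantifies over pairs with different F.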
Next I will argue that case studies from cognitive neurobiology support the reuse of dynamical systems, dynamical subsystems of cognitive systems.

4 The Reuse of Dynamical Systems in Cognitive Systems

Having laid out and motivated a general approach to use and reuse, I will now argue that recent research in cognitive neurobiology has revealed how dynamical subsystems of cognitive systems (henceforth, dynamical systems) are reused for different cognitive functions. Dynamical systems are sets of dynamical properties, properties of physical systems that change either over time or with respect to each other. For example, a basic property of a neuron is the rate at which it discharges action potentials, that is, the propagation of electrical signals. This firing rate is a dynamical property of a neuron. Described using the mathematical tools of dynamical systems theory, these systems are characterized mainly by functional properties that abstract away from the details of the physical mechanisms that implement the dynamics. Applying the analysis of reuse developed above, the dynamical systems reuse hypothesis is:

D: Cognitive systems reuse the same dynamical systems to execute processing models.

D states that dynamical systems are the relevant type of entity being reused. There are two general types of reuse as applied to dynamical systems, depending on whether the physical mechanisms implementing the dynamical system are the same or different. In both cases, dynamical systems execute different processing models, in line with D and R.

The first type of reuse involves dynamical systems implemented by the same physical mechanism. Recall that a dynamical system is identified as a subset of the dynamical properties of the physical mechanism implementing it. Each instance of the physical mechanism that implements the dynamical system will possess the same suite of dynamical properties with which the dynamical system is identical. Hence, different instances of the same physical mechanism can yield different implementations of the same dynamical system.

A clear example of a reused dynamical system is the integrate-to-bound system. To illustrate the role of the integrate-to-bound system, recall the research programme, discussed above, exploring how animals make perceptual discriminations in noisy conditions. This research programme investigates the perceptual and decision processes for determining the direction of motion of a field of dots using the random dot motion task (RDMT).18 In a typical experiment, monkeys are presented with a visual display of moving dots, only some fraction of which move in a particular direction (‘motion strength’ in Figure 1), for example, either left or right. The noisiness in the signal is due to the fraction of dots that are not moving in the same direction (that is, are not moving coherently), and different fractions of dots move coherently on different trials. Recall that the processing model for this task, the DDM, involves the integration of evidence from a starting point—the prior probabilities of the direction of motion—until some stopping point is reached, such as a decision threshold or the cessation of the stimulus.

Figure 1. Average recorded firing rates from cells in the parietal cortex of monkeys while they make a perceptual decision in different evidential conditions. Adapted from Roitman and Shadlen ([2002], p. 9482).
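As a rough formalization, and bracketing the many variants in the literature, the accumulation at the heart of the DDM can be written in a discrete-time, log-odds form (the notation here is mine, introduced only for illustration):

```latex
% Sketch of a discrete-time, log-odds rendering of the DDM.
% L_t: accumulated log-odds at step t; e_t: evidence sample at step t;
% \theta: decision bound. Notation is illustrative only.
\begin{aligned}
  L_0 &= \log\frac{P(\mathrm{right})}{P(\mathrm{left})}
      && \text{(starting point: prior log-odds)} \\
  L_t &= L_{t-1} + \log\frac{P(e_t \mid \mathrm{right})}{P(e_t \mid \mathrm{left})}
      && \text{(integration of sampled evidence)} \\
  &\text{stop when } \lvert L_t \rvert \ge \theta
      && \text{(bound: commit to a decision)}
\end{aligned}
```

The three clauses line up with the three dynamical properties of LIP activity described next: a baseline, a period of integration, and a common bound.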
The LIP, an eye movement control region in the posterior parietal cortex, contains cells that exhibit ramping-up activity preceding a decision. This recorded electrical activity from LIP possesses three relevant dynamical properties (Figure 1). First, these cells have a ‘reset’ point or baseline firing rate just after motion onset (time 0 in the plot on the left in Figure 1). Second, the activity of cells in LIP varies with the motion strength of the stimulus, with steeper increases in firing rate for stronger evidence (the different coloured lines in both plots). Third, the activity of these cells converges on a common firing rate across different motion conditions just prior to the animal initiating a saccade (time 0 in the plot on the right).

This pattern of activation partly constitutes an integrate-to-bound dynamical system: a baseline or starting point, a period of integration, and a threshold or boundary. This pattern of dynamics maps on to the DDM processing model, as alluded to above. The DDM consists in an evidence sampling and summation process that starts with a prior log-odds ratio, samples the evidence to form a likelihood ratio, and then adds this to the prior to obtain a posterior log-odds ratio. This summation proceeds until some stopping criterion, such as a threshold, is reached. First, the DDM starts with a prior log-odds ratio that is the same across different strengths of evidence, just as LIP cells exhibit a baseline and reset at the beginning of the presentation of the motion stimulus. Second, the sequential evidence sampling process in the DDM is mapped on to the differential increases in activity in LIP cells encoding the strength of the motion stimulus. Third, the DDM evidence sampling process halts when a threshold is reached, the same across different strengths of evidence, similar to how LIP cells rise to a common value across different motion conditions. Thus, the DDM maps on to the integrate-to-bound system in LIP.

Integration of evidence, and a concomitant integration function in neurons, has been taken to underlie a variety of perceptual decisions beyond motion discrimination, including disparity discrimination (Uka and DeAngelis [2003]), olfactory discrimination (Uchida and Mainen [2003]; Kepecs et al. [2006]; Uchida et al. [2006]), and vibrotactile perceptual decisions (Romo and Salinas [2001]; Hernandez et al. [2002]), amongst others. The repeated instances of integrative activity at the neuronal level suggest a similar biophysical mechanism and, as a result, the implementation of the same dynamical system. Of course, there are biophysical differences between neurons in different areas, so the physical mechanisms are similar but not identical. These similar mechanisms, though, are able to produce equivalent dynamics, resulting in the implementation of the same dynamical system.

This is perhaps not surprising. The problem facing the organism is similar, even if the information codes for a different modality (sound, vision, somatosensation, and so on). This cognitive function requires the sampling and integration of evidence, albeit different kinds of evidence, over time. For each instance of a perceptual decision executed in a different modality, is the same or a different processing model executed?
This depends on how finely we individuate our processing models. In the discussion of the execution of processing models above, I stated that the processing model must be interpreted; otherwise, the properties of the environment that the animal tracks and the cognitive function the model describes both remain ambiguous. Insofar as such an interpretation assigns different referents to the variables encoding the integration of evidence, different functions are being executed, and the integrate-to-bound system is being reused. In the random dot motion task, the variables stand for visual motion evidence. For other sorts of sensory evidence, the variables would stand for the evidence in the respective sensory domains. However, consideration of the neuronal mechanisms suggests that these neurons may merely be integrating neuronal signals. So, perhaps a case could be made for these variables encoding ‘neuronal evidence’ of a sort.

To reinforce this conclusion of reuse of the integrate-to-bound dynamical system, consider another function that is executed by LIP neurons, related to action execution. One stereotypical pattern of firing commonly seen in LIP is a ramp-up in firing prior to executing a saccade (Barash et al. [1991]). In the delayed-saccade task, the animal fixates a centrally presented target while a peripheral target flashes on the screen for a variable amount of time and then extinguishes. The animal is cued to make a saccade by the disappearance of the central fixation target (the ‘go’ cue). Subsequent to the go cue, ramp-up activity, similar to what is seen during the RDMT, is observed in LIP cells (see Figure 2). Here, following the go cue, this LIP cell exhibited a stereotypical increase in firing rate prior to the saccade. The integrative activity prior to a saccade seen in the delayed-saccade task is common for a subset of LIP cells (examples abound; see, for example, Platt and Glimcher [1997]; Gottlieb et al. [1998]; Louie et al. [2011]).

Figure 2. Pre-saccadic increase in activity in an LIP cell. Adapted from Barash et al. ([1991], p. 1100).

What is the processing problem the organism faces? Fundamentally, the system needs to make a decision about when to move. In the delayed-saccade task, the system must remember the location of the target and then initiate the movement. One processing model for eye movement initiation is a rise-to-threshold model, also called the LATER (linear approach to threshold with ergodic rate) model (the model can be applied to other types of movements as well; Carpenter and Williams [1995]; Schall and Thompson [1999]; Reddi and Carpenter [2000]). The LATER model ‘postulates a decision signal S associated with a particular response. When an appropriate stimulus appears, S starts to rise linearly from an initial level S0 at a rate r; upon reaching a pre-specified threshold ST, the saccade is triggered’ (Reddi and Carpenter [2000]). The integrate-to-bound dynamics observed over many different tasks in LIP can be taken to execute the LATER model for movement initiation.

The integrate-to-bound system is thus reused for the RDMT and the delayed-saccade task. In the RDMT, the integrate-to-bound dynamics in LIP is taken to execute the DDM. In the delayed-saccade task, the integrate-to-bound dynamics in LIP is taken to execute the LATER model.
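A toy simulation can make vivid how one and the same integrate-to-bound routine can be read as executing either model. This is my own illustrative sketch, not a model drawn from the cited studies; the function name and parameters are hypothetical, and what varies between the two readings is only the interpretation of `drift`.

```python
import random

def integrate_to_bound(baseline, bound, drift, noise_sd=0.1,
                       max_steps=10_000):
    """Start at a baseline, accumulate drift plus noise on each step,
    and stop when the bound is reached. Returns the stopping time."""
    x = baseline
    for t in range(1, max_steps + 1):
        x += drift + random.gauss(0.0, noise_sd)
        if x >= bound:
            return t
    return max_steps

# Read as a (one-bounded) DDM: drift is momentary motion evidence, so
# the stopping time models decision time in the RDMT.
rt_decision = integrate_to_bound(baseline=0.0, bound=1.0, drift=0.02)

# Read as LATER: a linear rise at rate r from S0 toward threshold ST
# following the go cue, modelling saccade initiation (in the full
# model the rate varies across trials; one trial is shown here).
rt_saccade = integrate_to_bound(baseline=0.0, bound=1.0,
                                drift=0.01, noise_sd=0.0)
```

The dynamics are indifferent between the two readings; what differs is the interpretation of the accumulated quantity, which is precisely what is at issue in individuating the processing models.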
The objection levelled above, for noisy perceptual decisions in different modalities, here maintains that these are not in fact different processing models. But this is simply false. The integration period in the DDM has a clear evidential interpretation that is absent in the integration period in the LATER model. As discussed previously, the mathematical model and the interpretation of the model variables, the external or internal properties being encoded, determine the processing model. There is no evidential integration period in the processing problem facing the animal in the delayed-saccade task. Thus, the processing models executed to meet the processing demands on the system are different models.19

Note that a particular dynamical system might play a role in executing multiple processing models simultaneously. The mapping requirement in R and D is not exclusive. The dynamical properties corresponding to the functions performed by a subsystem that play a role in executing some processing model can be mapped on to some model, M, while also being mapped on to some other model, M′. Such a case would constitute synchronic reuse.

Importantly for the first type of reuse, the mere reoccurrence of a physical mechanism does not entail the implementation of the same dynamical system. Physical mechanisms possess many dynamical properties, and different dynamical systems can be implemented by the same physical mechanism in virtue of this diversity of dynamical properties. Granted this sort of flexibility, simply detailing the physical mechanism does not suffice to describe its cognitive contribution to the system. That contribution has to be determined by looking at the processing model the system is executing and then establishing how that execution occurs via the system’s dynamics. Since physical mechanisms have an array of dynamical properties, one processing model may be executed by one subset of dynamical properties while another model is executed by a different subset. Hence, different processing models can be executed by different dynamical systems implemented on the same physical mechanism.

The second type of reuse describes the same dynamical system implemented by distinct physical mechanisms. Recall that dynamical systems are identical to subsets of the dynamical properties of physical mechanisms. These physical mechanisms are individuated according to the standards and kinds of the relevant science. In the case of neuroscience, physical mechanisms are individuated on the basis of neural properties such as types of neurons, types of neurotransmitters, connectivity profiles, response properties, and so forth. Distinct types of neuronal mechanisms may share some of their dynamical properties while not sharing others. Some such sets of dynamical properties may in fact constitute identical dynamical systems in virtue of being the same pattern of changes in the underlying mechanisms. Thus, type-distinct physical mechanisms can possess identical subsets of their dynamical properties, implementing the same dynamical system. Other accounts of reuse cannot capture these instances of reuse because they focus on the physical systems.20

A clear example of this can be seen by contrasting the case study of perceptual decisions under noise, illustrated above, with a case study drawn from research into strategic foraging decisions. The case study concerns a class of decisions regarding when an animal should depart a depleting food source.
When foraging in an environment where rewards are clustered in patches, animals must determine when to leave the current, depleting patch to travel to a new one. A mathematical model, the marginal value theorem (MVT) (Charnov [1976]), describes the processing necessary for optimal patch-leaving decisions. The MVT determines the energy intake rate as a function of the value associated with the food item, the handling time for consuming the item, the average travel time between patches, and other environmental variables. Maximizing this rate results in a simple decision rule for leaving a patch: leave a patch when the instantaneous energy intake rate in the current patch falls below the average intake rate for all patches in that environment.
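In symbols, and in a standard textbook rendering rather than Charnov’s own notation, the rule can be sketched as follows:

```latex
% Sketch of the MVT patch-leaving rule; notation is illustrative.
% g_i(t): cumulative energy intake after residence time t in patch i
% R^*:    long-run average intake rate across all patches in the
%         environment, with travel time between patches included
\frac{\mathrm{d}\,g_i(t)}{\mathrm{d}t} \;\le\; R^{*}
\quad\Longrightarrow\quad \text{leave patch } i.
```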
In order to investigate the neural mechanisms of patch-leaving decisions, Hayden and colleagues devised a simulacrum of the patch-leaving problem suitable for neural recording (the patch-leaving task in Figure 3). The animal must decide whether to continue to exploit the current patch or to leave it. If the animal chooses to exploit the current patch, it receives a juice reward, which decreases in size as it repeatedly chooses to exploit the same patch. If the animal chooses to leave the current patch, a travel time-out penalty is enforced; but at the end of this delay, the reward for the patch resets to its initial value.

Figure 3. The patch-leaving task. Adapted from Hayden et al. ([2011], p. 934).

Recording neurons in the anterior cingulate cortex sulcus (ACCs), a medial prefrontal cortical structure, Hayden and colleagues uncovered a novel neurophysiological implementation of the integrate-to-bound system. The increase in patch residence time as the monkeys foraged in a patch is encoded by an increase in the peri-saccadic peak response of ACCs neurons (see, for example, the neuron in Figure 4). The firing rate just prior to a decision rises over the course of the patch, akin to an integration. For similar travel times to a new patch, the firing rates in those neurons also rose to a common threshold for different patch-leaving times. Furthermore, for similar travel times, the initial firing rates at the beginning of the patch were the same. All three elements—baseline, integration, and threshold—present in the LIP implementation of an integrate-to-bound system are also present in the ACC data collected during the foraging task, suggesting that the same system is implemented in both regions.

Figure 4. Example neuron from the anterior cingulate cortex, recorded during the patch-leaving task. These cells exhibit a peri-saccadic response that increases over the duration of foraging in a patch. Adapted from Hayden et al. ([2011], p. 935).

Despite implementing the same dynamical system, the physiological mechanisms are different in LIP and ACC. In the case of LIP and the RDMT, integrative activity occurring on the timescale of hundreds of milliseconds during a single trial is implemented by an increase in firing by individual neurons. In the case of ACC and the patch-foraging task, integrative activity occurring on the timescale of tens of seconds over many trials is implemented by an increase in the peak level of transient activation of individual neurons during a trial. These two different time courses indicate that distinct physiological mechanisms implement the integrate-to-bound system.

The case studies of perceptual decision-making under noise and of strategic foraging decisions illustrate this second type of reuse. Different instances of the same type of dynamical system, the integrate-to-bound system, are implemented by distinct types of physical mechanisms: a temporally continuous increase in firing rates in LIP during the RDMT and a temporally discontinuous increase in firing rates in ACC during the patch-leaving task. The integrate-to-bound system has also been observed in prefrontal cortical areas at the level of the neuronal population, with the population exhibiting integration through its state space (Mante et al. [2013]). Thus, different physical mechanisms, at the level of the single neuron or the neuronal population, can implement the same dynamical system.

The processing models being executed, however, are distinct. In the case of LIP, the system executes an integration-of-evidence function. The specific function executed by the dynamical system in ACC remains unclear, though among the possibilities are assessments of opportunity cost and a comparison of instantaneous reward intake rates to average reward intake rates. These two case studies illustrate the execution of distinct cognitive functions—in this case, perceptual and strategic decisions—through the repeated implementation of the same type of dynamical system.

An objection to this second type of reuse of dynamical systems might be raised on the basis of the definition of a subsystem.21 The second type of reuse describes distinct instances of the same dynamical system that are implemented by distinct physical mechanisms. R requires that the same subsystem execute the cognitive function. If the substrates for these dynamical systems are different, then how can the dynamical system be the same? The second type of reuse satisfying D does not appear to be reuse at all; rather, it appears to be a banal instance of use.

In reply, the objection mistakes the entity of reuse. The entities being used here are not entire physical substrates. Rather, a subset of the dynamics of the substrate determines the identity of the dynamical system. D claims that dynamical systems, and not their physical substrates, are being reused, where dynamical systems are identified with subsets of parts and processes of physical mechanisms. In particular, dynamical systems reflect the processes in which the parts of physical mechanisms engage.22 Since dynamical systems are defined in terms of subsets of the parts and processes of the physical substrate, amounting to equivalence classes of physical substrates, the systems being used are of the type corresponding to that equivalence class.

This discussion points to an ambiguity in R that arises for the case of dynamical subsystems. As defined above, subsystems are sets of parts and processes. In past discussions, the reused subsystem has been interpreted as denoting a set of parts. In this first sense of reuse, the same set of parts performs the same function for executing distinct cognitive functions. Consider again neurons in area LIP. Neurons in LIP encode quantity and value.
That is, the activation (F) of these neurons (s) correlates with the number of items in a set (M) (Roitman et al. [2007]), and the activity (F) of these neurons (s′) in a different context correlates with the value of a choice (M′) (Platt and Glimcher [1997]). In fact, these neurons are taken to encode a number of other cognitive variables, not just number and value, but also shape (Janssen et al. [2008]) and time (Janssen and Shadlen [2005]). LIP neurons are reused for a diverse range of cognitive functions.

This first sense of reuse is the one that has received the most attention in the literature (for example, Anderson [2007a], [2007b], [2007c], [2010], [2014]; McCaffrey [2015]). The explanatory power of this sense of reuse derives from its suggestion that a system’s pre-existing subsystems can be adapted for new uses. As Anderson ([2010], p. 246) notes, ‘evolutionary considerations might often favor reusing existing components for new tasks over developing new circuits de novo’. This sense of reuse is close to ‘exaptation’ in the biology literature (Gould and Vrba [1982]). Structures are exapted in biology when they acquire a function distinct from the function for which the structure was selected. Unlike exaptation, however, subsystems are reused in this first sense when they retain their pre-existing function. The function of the subsystem is instead put to a new use for a distinct cognitive function. In this first sense of reuse, subsystems retain their function, but they are used for different overall capacities of the system.

Second, a set of processes could be reused. In this second sense, the same set of processes, potentially possessed by different physical entities, performs the same function for executing distinct cognitive functions. A set of processes can recur with different parts underlying the processes. Subsystems are defined as sets of parts and processes; but there is no restriction against identifying subsystems solely on the basis of just a set of parts or just a set of processes. The most familiar instances of such reuse can be drawn from computer programming: different programs may contain distinct lines of code or distinct subprograms that nonetheless possess the same properties in virtue of performing the same operation, such as sorting values from lowest to highest. This second sense of reuse has been overlooked in discussions of reuse, yet it is crucial for understanding the reuse of dynamical systems in cognitive systems.23 The reuse of the integrate-to-bound system for evidence integration in perceptual decision-making and for foraging computations in strategic decision-making illustrates this second sense well.

Unlike the first sense of reuse, the explanatory power of this second sense does not necessarily derive from exaptation, but rather from the functional usefulness of the set of processes for executing some systemic function. Whereas in the first sense reuse results from exaptation, in the second sense reuse results from something like evolutionary convergence (McGhee [2011]). Often, convergence in biology describes similarity of structure; in the case of this second sense of reuse, it describes similarity of process. For the second sense of reuse, the similarity is in the functional properties of subsystems: how the subsystem changes over time, and how the component processes—the processes engaged in by the parts, whatever those parts happen to be—are arranged.
For example, the execution of both the evidence integration process and the foraging computation involves a baseline level of activity, followed by a period of integration, and culminates in a threshold. Because these properties are often described independently of their physical substrates, the same set of functional properties can be associated with distinct physical structures.

This discussion suggests that dynamical systems can be described in two different ways that entail distinct types of reuse. On the one hand, if the dynamical subsystem includes the physical parts as well as the processes they engage in, then these systems may be reused in the first sense. On the other hand, if the dynamical subsystem includes the processes engaged in by the physical parts, whatever those may be, but excludes the parts themselves, then these systems may be reused in the second sense. In addition to the different compositions of the subsystem, these two types of reuse reflect different evolutionary justifications. The first sense of reuse justifies claims of reuse on exaptationist grounds. The second sense of reuse justifies claims of reuse on convergentist grounds. The latter type of reuse also suggests that certain processes possess particularly useful functional properties for executing cognitive functions, such that these processes will tend to recur in cognitive systems.

In sum, regardless of the identity of the physical mechanisms that implement the dynamical system, the same dynamical system may execute different processing models at different times. The same dynamical system is observed in area LIP executing processing for eye movements, perceptual decisions, and more (for example, for categorization, see Freedman and Assad [2006]). Furthermore, these are not instances of the same processing model merely deployed in superficially different task environments, as the earlier objection maintained. And the same dynamical system in different areas can execute different models, as is the case for foraging and perceptual decisions. Thus, dynamical systems are reused to execute different cognitive functions.

Before concluding, I would like to address an objection arising from a concern about the causal efficacy of dynamical systems. Earlier, I analysed the use and reuse of a subsystem in terms of that subsystem performing a function for a system. A natural interpretation of performing a function requires the possession of causal powers (for example, see Cummins [1975]). However, granted that dynamical systems are the relevant entity of reuse, a problem arises from considering the causal powers of such systems. What is a causal power? We might think of a causal power as the ability of something to stand in the cause–effect relation to another thing, specifically serving in the cause position of that relation. But this leads to a straightforward objection: dynamical systems cannot stand in causal relations qua dynamical systems, as dynamical systems are token-identical to sets of dynamical properties of physical mechanisms, whatever the physical entities whose dynamics are being analysed; and, the objection runs, only those physical entities can stand in causal relations.24

In reply, note that this problem arises for just about every function ascription where the entity to which a function is being ascribed is not at the level of the fundamental constituents of nature, supposing such a level exists. Consider the case of a heart.
The function of a heart is to circulate blood, and this function is understood as the possession of a certain sort of causal power, namely, the causal power to circulate blood. But this causal power is also possessed by the aggregate of organized components of hearts, that is, the ventricles, atria, cells, and so forth that compose a heart. Hearts, of course, are identical to this organized set of entities and activities, but that does not change the facts about causal powers. And similarly, the ventricles, atria, cells, and so forth are composed of more fundamental physical entities that will also possess the causal powers of the entities they compose. Causal powers ‘drain away’, to use Block’s ([2003]) phrase, and the causal powers of hearts (or ion channels or wings or …) will drain away to their fundamental physical constituents.

I contend that dynamical systems are in the same conceptual position as physical mechanisms like hearts. Just as in the case of hearts, dynamical systems are identical to some set of physical properties, namely, dynamical properties. And, just as in the case of hearts, the more fundamental physical substrate possesses the causal powers corresponding to the ascribed function. So, while it may be the case that dynamical systems do not consist in the right sorts of properties to stand in causal relations, they are in the same conceptual position as other sorts of systems to which functions are ascribed, like hearts, albeit for different reasons.25 The problem has not gone away; however, it has been assimilated to a different, more general problem, and I’ll consider that progress enough.

5 Conclusion

I have argued that an acceptable analysis of reuse will specify the objects of reuse; will account for the role of functions in use and reuse; will conceptually distinguish reuse and multi-use; will conceptually distinguish reuse, redundancy, and duplication; and will be empirically adequate. I analysed use in terms of a subsystem’s function in a cognitive system in executing a processing model, and reuse in terms of the same subsystem being used in executing distinct processing models. For any theory featuring reuse, using this analysis of reuse satisfies the second and third criteria. I then presented a specific application of this analysis of reuse for dynamical systems, using case studies drawn from cognitive neurobiology to illustrate different instances of reuse. I also distinguished two distinct types of reuse for dynamical systems and distinct evolutionary motivations for each type, grounded in whether the same parts or the same processes are being reused. Thus, my analysis of use and reuse and the subsequent application of the notion of reuse satisfy all four criteria for an adequate analysis of reuse.

Acknowledgements

I would like to acknowledge a number of pivotal influences that helped pull this article together. This article is a distant relative of a chapter of my dissertation at Duke University, Mindcraft: A Dynamical Systems Theory of Cognition, and many thanks go to the various people who helped me in the process of putting together the dissertation. In philosophy, Alex Rosenberg, Felipe De Brigard, Karen Neander, Michael Anderson, and Michael Weisberg were all helpful. Walter Sinnott-Armstrong and Chris Peacocke are owed special thanks for their detailed criticisms. In neuroscience, Bill Newsome, Josh Gold, John Pearson, and Geoff Adams were all helpful. Michael Platt is owed special thanks for detailed criticism, as well as for keeping a philosopher around the laboratory.
Finally, numerous anonymous reviewers provided many serious challenges over the course of the revisions for this article, and a deep thanks goes out to them.

Footnotes

1 Note that while I focus on reuse for cognitive systems, the concept of reuse seems to be applicable to many different types of system: cognitive, computational, digestive, immune, and potentially many more.

2 Note that backups may not actively perform function F in system S, but they do have the capacity to perform F in S. This connects to my analysis of use and reuse below.

3 These remarks are meant to be neutral about assimilating cognitive functions to causal role functions, selected effects functions, or some other type of function.

4 Pylyshyn ([1984]), amongst others, has extensively discussed such an assignment. Due to limitations on length, I won’t be able to delve into interesting questions such as how to make such an assignment, whether such models are fully representational, or whether there can be intensional as well as extensional interpretations of variables.

5 For ease of expression, in what follows I will often elide the fact that the cognitive system’s subsystems execute only part of the processing model, and speak of these subsystems executing the processing model simpliciter.

6 Technically, and assuming that every input–output mapping defined by the processing model must be mapped on to some state of the system, the kind of mapping required is an injective homomorphism (a monomorphism). The details of the processing model are mapped on to the subsystem, but there can be residual detail in the subsystem that does not have a counterpart in the processing model. If some subset of the input–output pairs do not map on to states of the system, then the assumption is violated. Whether or not the system in fact executes the model in such a case will be sensitive to issues such as how many such pairs fail to map; whether there is some characteristic of the pairs, such as the level of grain or falling outside some interval, that fail to map; the particular behavioural challenge the processing model is meant to solve; and so on. In reality, of course, every processing model will determine some input–output pairs that fail to map precisely on to states of the system, and so resolving these issues will become important.

7 There is a range of equivalence relations of varying strength. Deeper investigation requires more room than provided here, but there are important outstanding issues to be resolved regarding model equivalence for executing cognitive functions. Many thanks to a referee for emphasizing this point.

8 Insofar as we think of a mathematical relation as simply a set of ordered n-tuples F such that F ⊂ ℝ^n, where ℝ^n is the nth Cartesian product of the set of real numbers with itself, there is no distinction between the two. However, this is only the case if our domain is the entire set of reals. This may hold for only some processing models, and it is not the case for subsystem states, unless there are uncountably many such states to map on to F. Also, though mathematical relations may be extensionally equivalent to such a set, there may still be intensional differences between the relation and the set of input–output pairs corresponding to the relation.

9 In actual cases of model execution, only a range of the model’s input–output pairs (presumably) at some level of grain will map on to the subsystem states. I leave aside these nuances. See also Footnote 6.
10 How reasonable is the denial of such a restriction? Prima facie, restricting to a single, unique model the range of models to which a set of subsystem states can be equivalent seems ad hoc and unjustified. There may be some sets of subsystem states that are in fact so restricted, but to build in such a restriction in principle—to include the restriction as part of the definition of weak equivalence—is surely far too strong.

11 s may execute the processing model only in conjunction with other s′-subsystems of S performing functions F′, where s′ is not identical to s and F′ may be identical to F. I will often ignore this nicety. Also, the processing model must be one that reflects some cognitive function, and this will be assumed in the following.

12 Each of the following examples is selected because recent research suggests a single underlying function that contributes to distinct cognitive functions. Also, the processing models corresponding to each cognitive function are elided and the functions themselves used in their stead. I discuss a different example at much greater length below.

13 The population coding of the locus of attention or the endpoint of a saccade is a good deal more complicated than this, but the details are not strictly relevant.

14 This distinction between cognitive function and systemic function appears in both the cognitive and biological literature. For example, Bock and von Wahlert ([1965]) distinguish form—the description of the material composition and arrangement of some biological feature—from function—the description of the causal properties of some feature arising from its form—and define the faculty as the combination of the form and function. Different faculties can result from the same form possessing different functions, prima facie outlining instances of reuse. Anderson ([2010]) and Bergeron ([2007]) draw a similar distinction. Bergeron argues that functional specification can proceed in two modes. The first mode regards the specification of a component’s cognitive role: ‘the function of a particular component […] specified relative to a cognitive process, or group of such processes, in which that component is thought to participate’ (Bergeron [2007], p. 181). The second mode regards the specification of a component’s cognitive working: ‘the component’s function […] specified relative to a cognitive process, or group of such processes, that it is thought to perform’ (Bergeron [2007], p. 181). Whereas I remain neutral on what types of functions these are, Bergeron assimilates the distinction between roles and workings to the distinction between teleological and non-teleological analyses. A teleological analysis ‘is one in which the functional aspect of a structure (or process) is specified relative to a certain end product (or goal) that the structure helps to bring about’; while in non-teleological analysis, ‘the functional aspect of a structure is specified relative to the particular working that is associated with this structure in any given context’ (Bergeron [2007], p. 182). As applied to cognitive phenomena, teleological analysis reflects cognitive role and non-teleological analysis reflects cognitive working. The distinctions between role and working, and teleological and non-teleological, prima facie seem orthogonal. Nothing in the concept of a cognitive working—of a component’s performing a function—eo ipso conflicts with or contradicts a teleological analysis.
In particular, the cognitive working is in part determined by which model of processing the component systems of a system execute. Insofar as these systems were selected for in virtue of the role they play in implementing these models, a cognitive working can be given a teleological analysis. Likewise, cognitive roles can be given a non-teleological analysis. In particular, the role of a system in executing some processing model can be specified in terms of its contribution to the mapping that constitutes that execution. Anderson agrees that there is a distinction between the functions of local circuits, or circumscribed neural areas, and cognitive functions, invoking the language of ‘workings’ and ‘uses’ from Bergeron (Anderson [2010]). However, he does not take a stand on the type of function being invoked. On my account, the systemic function of a subsystem roughly corresponds to the subsystem’s workings, and the (part of the) processing model the subsystem executes roughly corresponds to the subsystem’s roles.

15 Many thanks to an anonymous reviewer for suggesting this objection.

16 Anderson makes a similar point, noting that ‘it could be that the dynamic response properties of local circuits are fixed, and that cognitive function is a matter of tying together circuits with the right (relative) dynamic response properties’ (Anderson [2010], p. 265). These dynamic response properties need not have been used before being recruited during the learning process to execute the new cognitive model.

17 This approach to multi-use results in something of a challenge: different mathematical descriptions are available for the same function. Since there are different mathematical descriptions, does that entail that multi-use can occur simply by changing the mathematical description of the function attributed to the system? Multi-use should not result merely from redescription. The notion of systemic function is thus something more than the mathematical function that is part of the mathematical description of the systemic function.

18 See Gold and Shadlen ([2001], [2002], [2007]) for extensive discussion of this research.

19 The objector could persist, arguing that the integrative activity in the delayed-saccade task reflects evidence about the arrival of the go cue. Perhaps this is right, but only empirical investigation can confirm the hypothesis, perhaps by comparing different temporal circumstances for the go cue. Regardless, the presence of the go cue in this task did not reflect noisy evidence about a perceptual decision, so it still appears to be an instance of a distinct processing problem.

20 Many thanks to an anonymous reviewer for stressing this.

21 Many thanks to an anonymous reviewer for pointing out this potential confusion.

22 Note that this does not exclude all types of parts from being included in dynamical systems, only those that are part of the implementing substrate for the dynamical system. Dynamical systems could include dynamical parts, for example, or other types of functional parts, supposing such an analysis of these types of parts could be provided.

23 Many thanks to a referee for emphasizing the importance of this point.

24 This may not be true on some analyses of causation, such as Woodward’s ([2003]) interventionist approach. I think that, in this case, the objection carries force.

25 I think that a careful conceptual analysis of the relation between function ascription and causal role ascription might be fruitful here.
In particular, the physical mechanism might possess the causal powers, but nevertheless the function is ascribed to the dynamical system. Compare the case of wings and flight: the causal powers might devolve on to the fundamental physical constituents of wings; nonetheless, the function of providing lift (or what-have-you) is ascribed to the wings. The difference, of course, is that wings (or hearts) are physical mechanisms—that is, mechanisms that in part denote physical entities—whereas dynamical systems are not. I maintain that this difference does not matter for ascribing functions.

References

Anderson, M. L. [2007a]: ‘Evolution of Cognitive Function via Redeployment of Brain Areas’, The Neuroscientist, 13, pp. 13–21.
Anderson, M. L. [2007b]: ‘The Massive Redeployment Hypothesis and the Functional Topography of the Brain’, Philosophical Psychology, 20, pp. 143–74.
Anderson, M. L. [2007c]: ‘Massive Redeployment, Exaptation, and the Functional Integration of Cognitive Operations’, Synthese, 159, pp. 329–45.
Anderson, M. L. [2010]: ‘Neural Reuse: A Fundamental Organizational Principle of the Brain’, Behavioral and Brain Sciences, 33, pp. 245–66.
Anderson, M. L. [2014]: After Phrenology, Oxford: Oxford University Press.
Barash, S., Bracewell, R. M., Fogassi, L., Gnadt, J. W. and Andersen, R. A. [1991]: ‘Saccade-Related Activity in the Lateral Intraparietal Area, II: Spatial Properties’, Journal of Neurophysiology, 66, pp. 1109–24.
Bergeron, V. [2007]: ‘Anatomical and Functional Modularity in Cognitive Science: Shifting the Focus’, Philosophical Psychology, 20, pp. 175–95.
Block, N. [2003]: ‘Do Causal Powers Drain Away?’, Philosophy and Phenomenological Research, 67, pp. 133–50.
Bock, W. J. and von Wahlert, G. [1965]: ‘Adaptation and the Form–Function Complex’, Evolution, 19, pp. 269–99.
Carpenter, R. and Williams, M. [1995]: ‘Neural Computation of Log Likelihood in Control of Saccadic Eye Movements’, Nature, 377, pp. 59–62.
Charnov, E. L. [1976]: ‘Optimal Foraging, the Marginal Value Theorem’, Theoretical Population Biology, 9, pp. 129–36.
Churchland, P. M. [2012]: Plato’s Camera, Cambridge, MA: MIT Press.
Cummins, R. [1975]: ‘Functional Analysis’, Journal of Philosophy, 72, pp. 741–65.
Dehaene, S. [2005]: ‘Evolution of Human Cortical Circuits for Reading and Arithmetic: The “Neuronal Recycling” Hypothesis’, in S. Dehaene, J.-R. Duhamel, M. D. Hauser and G. Rizzolatti (eds), From Monkey Brain to Human Brain, Cambridge, MA: MIT Press, pp. 133–57.
Devinsky, O., Morrell, M. J. and Vogt, B. A. [1995]: ‘Contributions of Anterior Cingulate Cortex to Behaviour’, Brain, 118, pp. 279–306.
Eliasmith, C. [2013]: How to Build a Brain: A Neural Architecture for Biological Cognition, Oxford: Oxford University Press.
Freedman, D. J. and Assad, J. A. [2006]: ‘Experience-Dependent Representation of Visual Categories in Parietal Cortex’, Nature, 443, pp. 85–8.
Gallese, V. [2008]: ‘Mirror Neurons and the Social Nature of Language: The Neural Exploitation Hypothesis’, Social Neuroscience, 3, pp. 317–33.
Gallese, V. and Lakoff, G. [2005]: ‘The Brain’s Concepts: The Role of the Sensory-Motor System in Conceptual Knowledge’, Cognitive Neuropsychology, 22, pp. 455–79.
Gold, J. I. and Shadlen, M. N. [2001]: ‘Neural Computations That Underlie Decisions about Sensory Stimuli’, Trends in Cognitive Sciences, 5, pp. 10–6.
Gold, J. I. and Shadlen, M. N. [2002]: ‘Banburismus and the Brain: Decoding the Relationship between Sensory Stimuli, Decisions, and Reward’, Neuron, 36, pp. 299–308.
Gold, J. I. and Shadlen, M. N. [2007]: ‘The Neural Basis of Decision Making’, Annual Review of Neuroscience, 30, pp. 535–74.
Gottlieb, J. P., Kusunoki, M. and Goldberg, M. E. [1998]: ‘The Representation of Visual Salience in Monkey Parietal Cortex’, Nature, 391, pp. 481–4.
Gould, S. J. and Vrba, E. S. [1982]: ‘Exaptation—A Missing Term in the Science of Form’, Paleobiology, 8, pp. 4–15.
Haugeland, J. [1985]: Artificial Intelligence: The Very Idea, Cambridge, MA: MIT Press.
Hayden, B. Y., Pearson, J. M. and Platt, M. L. [2011]: ‘Neuronal Basis of Sequential Foraging Decisions in a Patchy Environment’, Nature Neuroscience, 14, pp. 933–9.
Hernandez, A., Zainos, A. and Romo, R. [2002]: ‘Temporal Evolution of a Decision-Making Process in Medial Premotor Cortex’, Neuron, 33, pp. 959–72.
Holroyd, C. B. and Yeung, N. [2012]: ‘Motivation of Extended Behaviors by Anterior Cingulate Cortex’, Trends in Cognitive Sciences, 16, pp. 122–8.
Hurley, S. [2005]: ‘The Shared Circuits Hypothesis: A Unified Functional Architecture for Control, Imitation, and Simulation’, Perspectives on Imitation, 1, pp. 177–94.
Janssen, P. and Shadlen, M. N. [2005]: ‘A Representation of the Hazard Rate of Elapsed Time in Macaque Area LIP’, Nature Neuroscience, 8, pp. 234–41.
Janssen, P., Srivastava, S., Ombelet, S. and Orban, G. A. [2008]: ‘Coding of Shape and Position in Macaque Lateral Intraparietal Area’, Journal of Neuroscience, 28, pp. 6679–90.
Jungé, J. A. and Dennett, D. C. [2010]: ‘Multi-use and Constraints from Original Use’, Behavioral and Brain Sciences, 33, pp. 277–8.
Kepecs, A., Uchida, N. and Mainen, Z. F. [2006]: ‘The Sniff as a Unit of Olfactory Processing’, Chemical Senses, 31, pp. 167–79.
Krauzlis, R. J., Lovejoy, L. P. and Zénon, A. [2013]: ‘Superior Colliculus and Visual Spatial Attention’, Annual Review of Neuroscience, 36, pp. 165–82.
Lee, C., Rohrer, W. H. and Sparks, D. L. [1988]: ‘Population Coding of Saccadic Eye Movements by Neurons in the Superior Colliculus’, Nature, 332, pp. 357–60.
Louie, K., Grattan, L. E. and Glimcher, P. W. [2011]: ‘Value-Based Gain Control: Divisive Normalization in Parietal Cortex’, The Journal of Neuroscience, 31, pp. 10627–39.
MacDonald, A. W., Cohen, J. D., Stenger, V. A. and Carter, C. S. [2000]: ‘Dissociating the Role of the Dorsolateral Prefrontal and Anterior Cingulate Cortex in Cognitive Control’, Science, 288, pp. 1835–8.
Machamer, P., Darden, L. and Craver, C. F. [2000]: ‘Thinking about Mechanisms’, Philosophy of Science, 67, pp. 1–25.
Maia, T. V., Cooney, R. E. and Peterson, B. S. [2008]: ‘The Neural Bases of Obsessive–Compulsive Disorder in Children and Adults’, Development and Psychopathology, 20, pp. 1251–83.
Mante, V., Sussillo, D., Shenoy, K. V. and Newsome, W. T. [2013]: ‘Context-Dependent Computation by Recurrent Dynamics in Prefrontal Cortex’, Nature, 503, pp. 78–84.
Marr, D. [1982]: Vision, New York: Henry Holt.
Mazurek, M. E., Roitman, J. D., Ditterich, J. and Shadlen, M. N. [2003]: ‘A Role for Neural Integrators in Perceptual Decision Making’, Cerebral Cortex, 13, pp. 1257–69.
McCaffrey, J. B. [2015]: ‘The Brain’s Heterogeneous Functional Landscape’, Philosophy of Science, 82, pp. 1010–22.
Neander, K. [1991]: ‘Functions as Selected Effects: The Conceptual Analyst’s Defense’, Philosophy of Science, 58, pp. 168–84.
Németh, G., Hegedüs, K. and Molnár, L. [1988]: ‘Akinetic Mutism Associated with Bicingular Lesions: Clinicopathological and Functional Anatomical Correlates’, European Archives of Psychiatry and Neurological Sciences, 237, pp. 218–22.
Paus, T. [2001]: ‘Primate Anterior Cingulate Cortex: Where Motor Control, Drive, and Cognition Interface’, Nature Reviews Neuroscience, 2, pp. 417–24.
Platt, M. L. and Glimcher, P. W. [1997]: ‘Responses of Intraparietal Neurons to Saccadic Targets and Visual Distractors’, Journal of Neurophysiology, 78, pp. 1574–89.
Pylyshyn, Z. W. [1984]: Computation and Cognition, Cambridge: Cambridge University Press.
Reddi, B. and Carpenter, R. [2000]: ‘The Influence of Urgency on Decision Time’, Nature Neuroscience, 3, pp. 827–30.
Roitman, J. D., Brannon, E. M. and Platt, M. L. [2007]: ‘Monotonic Coding of Numerosity in Macaque Lateral Intraparietal Area’, PLoS Biology, 5, p. e208.
Roitman, J. D. and Shadlen, M. N. [2002]: ‘Response of Neurons in the Lateral Intraparietal Area during a Combined Visual Discrimination Reaction Time Task’, Journal of Neuroscience, 22, pp. 9475–89.
Romo, R. and Salinas, E. [2001]: ‘Touch and Go: Decision-Making Mechanisms in Somatosensation’, Annual Review of Neuroscience, 24, pp. 107–37.
Schall, J. D. and Thompson, K. G. [1999]: ‘Neural Selection and Control of Visually Guided Eye Movements’, Annual Review of Neuroscience, 22, pp. 241–59.
Shagrir, O. [2010]: ‘Marr on Computational-Level Theories’, Philosophy of Science, 77, pp. 477–500.
Shenhav, A., Botvinick, M. M. and Cohen, J. D. [2013]: ‘The Expected Value of Control: An Integrative Theory of Anterior Cingulate Cortex Function’, Neuron, 79, pp. 217–40.
Sparks, D. L. [1986]: ‘Translation of Sensory Signals into Commands for Control of Saccadic Eye Movements: Role of Primate Superior Colliculus’, Physiological Reviews, 66, pp. 118–71.
Uchida, N., Kepecs, A. and Mainen, Z. F. [2006]: ‘Seeing at a Glance, Smelling in a Whiff: Rapid Forms of Perceptual Decision Making’, Nature Reviews Neuroscience, 7, pp. 485–91.
Uchida, N. and Mainen, Z. F. [2003]: ‘Speed and Accuracy of Olfactory Discrimination in the Rat’, Nature Neuroscience, 6, pp. 1224–9.
Uka, T. and DeAngelis, G. C. [2003]: ‘Contribution of Middle Temporal Area to Coarse Depth Discrimination: Comparison of Neuronal and Psychophysical Sensitivity’, The Journal of Neuroscience, 23, pp. 3515–30.
Usher, M. and McClelland, J. L. [2001]: ‘The Time Course of Perceptual Choice: The Leaky, Competing Accumulator Model’, Psychological Review, 108, p. 550.
Wang, X.-J. [2002]: ‘Probabilistic Decision Making by Slow Reverberation in Cortical Circuits’, Neuron, 36, pp. 955–68.
Wang, X.-J. [2012]: ‘Neural Dynamics and Circuit Mechanisms of Decision-Making’, Current Opinion in Neurobiology, 22, pp. 1039–46.
Wolpert, D. M., Doya, K. and Kawato, M. [2003]: ‘A Unifying Computational Framework for Motor Control and Social Interaction’, Philosophical Transactions of the Royal Society of London Series B, 358, pp. 593–602.
