TY - JOUR
AU - Teijeiro, Paula
AB - Adopting a non-classical logic may not imply giving up the classical theories that have proven their worth. Nevertheless, the project of classical recapture poses some challenges, some of them specific to paraconsistent approaches. In this article, we analyse the consequences of introducing a recovery operator into subvaluationist logic. We argue that the classical recovery can indeed be carried out in a subvaluationist setting, but that doing so amounts to committing to a hierarchy of recaptures.

1 Introduction: classical recapture

The relation between classical and non-classical logic is strained by two conflicting forces. On the one hand, non-classical logics dispute some classical principles, thus presenting themselves as rival theories (pace logical pluralism). On the other hand, it is very common for their defenders to argue in favour of their preferred choice in terms of its high degree of classicality. To what extent, then, should classical logic be taken as a friend, and to what extent as a foe?

There are, I think, two perspectives regarding this matter. A naïve answer would be that the ideal logic is classical, but since, unfortunately, we cannot have it—because of paradoxes, for instance—we have to settle for the next best thing. No one, to my knowledge, explicitly defends such an uncouth position, but sometimes this seems to be what lies behind, for example, those long lists of classically valid inferences that the rival proposal is proud to retain. The second option is to accept the fact that the right theory might be seriously divergent (it could even lack valid inferences altogether!), and to claim that the old arguments and laws might still be, in a way, materially valid, i.e. sound with respect to restricted areas of knowledge. That classical arithmetic is useful/simple/elegant says that there is something to be valued in the theory as a whole, but it says nothing regarding which parts of the theory are logical and which parts are not. So, we can take Excluded Middle or Explosion to be arithmetically valid principles and end up where we started, just with a different partition of the realm of logicality. I take this perspective to be much more interesting, as it takes the revision of logic seriously, and it also seems to resolve the tension we mentioned earlier. Moreover, instead of trying to strengthen the underlying logic, one can embark on the process of what is called classical recapture, which is the recovery of classical reasoning for those restricted contexts.

Classical recapture can be done in many ways, but if one considers Explosion or Excluded Middle to be a feature of, say, arithmetic, then it seems it should be part of the theory, in the same way other axioms or rules are. And since what counts as a theory can widely diverge from one author to another, and among different types of deviant logics, we are faced with many different sorts of challenges. We focus here on an issue related to one particular case: weakly paraconsistent logic. In the next section, we explain exactly what we mean by a paraconsistent logic. We also discuss logics of formal inconsistency (LFIs), a class of paraconsistent theories which address the issue of having the resources to do the recapture inside the theory by introducing some special vocabulary called consistency operators. Next, we present the family of S-valuationist logics, and in particular, our weakly paraconsistent logic, known as subvaluationism (SB).
In Section 3, we see how, when we try to turn SB into a logic of formal inconsistency (LFI), we are faced with a problem which seems to jeopardize the very project of recovery. Finally, we present an alternative culprit that could exonerate the consistency operator.

Throughout this paper, we work with a regular propositional language $\mathcal{L}_P$, and one extension of it, $\mathcal{L}_{P\circ}$, with a signature $\varSigma_{P\circ}$ which incorporates a new unary connective $\circ$. $FOR_{\mathcal{L}}$ is the set of formulas of the language $\mathcal{L}$, which can be defined recursively as usual. We take a logic to be a set of inferences, which in what is called the Set–Set framework are pairs of sets of sentences, and in what is called the Set–Formula framework are pairs where the second element is just a single sentence. We adopt here a purely model-theoretical point of view (not unlike a lot of the literature concerning S-valuationisms), which means we will take these logics to be characterized by classes of interpretations instead of proof-theoretical systems.^1

A valuation is a mapping $\nu : FOR_{\mathcal{L}}\rightarrow \mathcal{V}$, where $\mathcal{V}$ is some set of values. CL is the logic characterized by the set of valuations $\nu : FOR_{\mathcal{L}}\rightarrow \{0,1\}$, restricted according to the usual truth-functions, with 1 as a designated value. Valuations are one way to interpret the languages, though not the only models we will be using. When a model satisfies the premises of an argument but not its conclusion, we call it a countermodel. We use the expression $\varGamma \vDash _{L} \phi$ to mean that the argument from $\varGamma$ to $\phi$ is valid according to the models and consequence relation which characterize the logic L, and the expression $\varGamma \Vdash _{L} \phi$ to simply denote the argument from $\varGamma$ to $\phi$. Finally, we use $\{\varGamma \}_p$ as a name for the set $\{p: p$ is a propositional letter occurring in some $\gamma \in \varGamma \}$ and, for any unary operator $\odot$, we use $\odot \varGamma$ as a name for the set $\{ \odot \gamma : \gamma \in \varGamma \}$. In the case of $\odot \{ \varGamma \}_{p}$, take $\odot$ as applied to $\{ \varGamma \}_{p}$.

2 Paraconsistency

Not all possible definitions of paraconsistency cut up the space of logical theories in the same way. The main idea they share is the rejection of the classical principle which states that contradictions trivialize the theories they appear in, sometimes known as Explosion. The problem is exactly how to formulate this idea. On the one hand, it can be presented as a principle (i.e. a schematic sentence), as a rule (i.e. a relation between schematic sentences), as a metarule (i.e. a relation between schematic rules), as a metametarule, etc. On the other hand, it can be expressed using different pieces of vocabulary—both at the object-language and at the metalanguage—such as conjunctions, commas, conditionals, etc. We will be focusing on the following case, probably the most usual one.

Definition 2.1 (Paraconsistent Logic). A logic L is paraconsistent if $\phi , \neg \phi \nvDash _{L} \psi$ for some formulas $\phi$ and $\psi$.

The project of classical recapture then involves the rehabilitation of the validity of Explosion for certain kinds of sentences, which in turn gives rise to the problem of how to mark the privileged ones.
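To fix ideas, the model-theoretic notions just introduced (valuations, designated values, countermodels) can be put to work in a small brute-force consequence checker. The following Python sketch is only illustrative and not part of the paper; the tuple encoding of formulas and all function names are our own. With classical values and truth-functions it confirms that Explosion holds in CL; swapping in a third value, designated alongside 1, already makes it fail (this is the logic LP discussed in the next subsection).

from itertools import product

# Formulas encoded as nested tuples: ('atom', 'p'), ('not', A), ('and', A, B), ('or', A, B).
def atoms(formula):
    if formula[0] == 'atom':
        return {formula[1]}
    return set().union(*(atoms(sub) for sub in formula[1:]))

# Classical truth-functions over {0, 1}; the same tables over {0, 1/2, 1} give LP.
OPS = {'not': lambda a: 1 - a, 'and': min, 'or': max}

def value(formula, valuation):
    if formula[0] == 'atom':
        return valuation[formula[1]]
    return OPS[formula[0]](*(value(sub, valuation) for sub in formula[1:]))

def valid(premises, conclusion, values=(0, 1), designated=(1,)):
    # Gamma |= phi: every valuation designating all premises also designates the conclusion.
    letters = sorted(set().union(*(atoms(f) for f in premises + [conclusion])))
    for combo in product(values, repeat=len(letters)):
        valuation = dict(zip(letters, combo))
        if all(value(g, valuation) in designated for g in premises) \
                and value(conclusion, valuation) not in designated:
            return False  # found a countermodel
    return True

p, q = ('atom', 'p'), ('atom', 'q')
print(valid([p, ('not', p)], q))                                           # True: Explosion holds in CL
print(valid([p, ('not', p)], q, values=(0, 0.5, 1), designated=(0.5, 1)))  # False: it fails in LP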
As we said, we would like to do that inside the theory, so a first, natural thought might be to add something like a schematic principle of non-contradiction (PNC) for 'classical' predicates or propositions—i.e. to add $\neg (\phi \wedge \neg \phi )$ for every non-problematic $\phi$. Nevertheless, this might not be the best idea.

In the first place, some paraconsistent logics actually validate the PNC. For example, one of the most popular logics in the field of paradoxes is LP, which may be presented with the familiar strong Kleene three-valued truth-functions $\nu : FOR_{\mathcal{L}}\rightarrow \{0,1, \frac{1}{2}\}$ ($1-x$ for negation, min for conjunction, max for disjunction), where both 1 and $\frac{1}{2}$ are designated values. The set of LP tautologies (in this case, understood as the sentences which never receive the value 0) coincides with the classical one, and hence adding some instances of PNC as axioms serves no purpose: they were already valid. In the second place, the failure of Explosion might be motivated by a requirement of well-behavedness stricter than mere non-contradiction: '(…) there may be suspicious propositions not involving any contradiction. The information conveyed by $\alpha$ may be very unlikely, or incoherent with respect to previous data represented by a set $\varGamma$ of sentences, even though there may be no formal contradiction derivable from $\alpha$ and $\varGamma$ together' (Carnielli and Coniglio [2]).

2.1 LFIs

LFIs are a very wide family of paraconsistent logics, which all have in common the fact that their languages contain connectives internalizing the consistency of sentences (see for instance [3]). What is meant by this internalization is that, although paraconsistent in the sense previously specified, LFIs have an operator $\circ$ which allows us to recapture Explosion for some formulas. That is, they respect the Principle of Gentle Explosion or PGE. In more formal terms:^2

Definition 2.2 (LFI). An LFI is a paraconsistent logic which satisfies the following:
For some $\phi$ and $\psi$,
$\circ \phi , \phi \nvDash _{LFI} \psi$
$\circ \phi , \neg \phi \nvDash _{LFI} \psi$, and
for every $\phi$ and $\psi$,
$\circ \phi , \phi , \neg \phi \vDash _{LFI} \psi \qquad$ (PGE).

Contrary to the dialetheist tradition, the reason why authors working within LFIs maintain the invalidity of Explosion is the existence of epistemological contradictions in empirical theories, and not of true contradictions—whose existence they find doubtful, although not impossible.^3 An epistemological contradiction would be a sentence for which we have evidence in favour of its truth as well as its falsity, compelling enough to drive us to accept both it and its negation, at least temporarily.^4 Nevertheless, in such a scenario, we would not be justified in accepting any other sentences whatsoever.

As an illustration, we can look at some particular cases of LFIs, starting with the minimal one based on classical logic, which is called mbC. It is in fact a system for a language with a signature $\varSigma _{P\circ } \cup \{ \rightarrow \}$, but we take here just its implication-free fragment (in order to ease the comparison with LP).
mbC can be characterized by the class of valuation functions $\nu : FOR_{\mathcal{L}_{P\circ}} \rightarrow \{0,1\}$—where 1 is the designated value—restricted according to the following non-deterministic clauses:
$\nu (\phi \wedge \psi )=1 \Leftrightarrow \nu (\phi )=1$ and $\nu (\psi )=1$
$\nu (\phi \vee \psi )=1 \Leftrightarrow \nu (\phi )=1$ or $\nu (\psi )=1$
$\nu (\neg \phi )=0 \Rightarrow \nu (\phi )=1$
$\nu (\circ \phi )=1 \Rightarrow \nu (\phi )=0$ or $\nu (\neg \phi )=0.$

This fragment of mbC is a proper sublogic of LP: it is not the case that it validates every classical tautology, and in particular, there are valuations which falsify the PNC. The clauses for $\neg$ and $\circ$ are non-deterministic: for some inputs they leave the value of the compound undetermined, so that some valuations output 1 and some output 0 for the same $\phi$-input. This gap can be bridged by adding the following further restrictions (see [16]), which validate De Morgan and Double Negation by blunt force:
$\nu (\neg (\phi \vee \psi ))=\nu (\neg \phi \wedge \neg \psi )$
$\nu (\neg (\phi \wedge \psi ))=\nu (\neg \phi \vee \neg \psi )$
$\nu (\neg \neg \phi )=\nu (\phi ).$

Now the truth value of PNC is made to coincide with that of the Law of Excluded Middle (LEM): $\nu (\neg (\phi \wedge \neg \phi ))=\nu (\neg \phi \vee \neg \neg \phi )=\nu (\neg \phi \vee \phi )$. And LEM is tautological in mbC. The resulting logic, which is LP plus a consistency operator, is also an LFI.

On the other hand, just as we can strengthen mbC's negation, we can also strengthen its consistency operator. In particular, one of the operator's weaknesses is the fact that it does not propagate, i.e. the consistency of some formulas does not guarantee the consistency of their compounds. In order to achieve this propagation, one needs to add the following extra restrictions:
$\nu (\circ \phi )=\nu (\circ \psi )=1$ $\Rightarrow \nu (\circ (\phi \wedge \psi ))=1$
$\nu (\circ \phi )=\nu (\circ \psi )=1$ $\Rightarrow \nu (\circ (\phi \vee \psi ))=1$
$\nu (\circ \phi )=1$ $\Rightarrow \nu (\circ \neg \phi )=1$
$\nu (\circ \phi )=1$ $\Rightarrow \nu (\circ \circ \phi )=1.$

The resulting logic is called mbCciw, but let us affectionately call it just mbC+. It is the smallest LFI whose consistency operator behaves so that $\nu (\circ \phi )=1$ $\Leftrightarrow \nu (\neg \phi )\neq \nu (\phi )$.

The usefulness of consistency operators goes beyond having a restricted version of Explosion: they serve to recover the full power of classical logic. There are two ways to do this. The first one is to use them to define new operations which conform to the desired rules and principles. In this case, a strong negation $\sim _{\beta }$ (relative to a formula $\beta$ used to produce a bottom particle) can be defined as follows: $\sim _{\beta }\phi \buildrel d \over = \phi \rightarrow (\beta \wedge (\neg \beta \wedge \circ \beta ))$. The other path is to have those rules and principles stated in the same language, but restricted to consistent contexts by the addition of extra assumptions.
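The clauses just given, and the Gentle Explosion they license, can be checked by brute force over the finitely many subformulas an argument involves (which suffices for the small examples below). The following Python sketch is merely illustrative and not part of the original presentation; the tuple encoding of formulas and the function names are our own.

from itertools import product

# Formulas: ('atom', 'p'), ('not', A), ('and', A, B), ('or', A, B), ('circ', A).
def subformulas(formula):
    out = {formula}
    for sub in formula[1:]:
        if not isinstance(sub, str):
            out |= subformulas(sub)
    return out

def domain(formulas):
    # Close under subformulas; the clause for the consistency operator also
    # mentions the negation of its argument, so add it explicitly.
    dom = set().union(*(subformulas(f) for f in formulas))
    dom |= {('not', f[1]) for f in dom if f[0] == 'circ'}
    return sorted(dom, key=str)

def respects_clauses(b):
    for f, val in b.items():
        if f[0] == 'and' and val != min(b[f[1]], b[f[2]]):
            return False
        if f[0] == 'or' and val != max(b[f[1]], b[f[2]]):
            return False
        if f[0] == 'not' and val == 0 and b[f[1]] != 1:
            return False                      # v(~A)=0  =>  v(A)=1
        if f[0] == 'circ' and val == 1 and b[f[1]] == 1 and b[('not', f[1])] == 1:
            return False                      # v(oA)=1  =>  v(A)=0 or v(~A)=0
    return True

def mbc_valid(premises, conclusion):
    dom = domain(premises + [conclusion])
    for combo in product((0, 1), repeat=len(dom)):
        b = dict(zip(dom, combo))
        if respects_clauses(b) and all(b[g] == 1 for g in premises) and b[conclusion] == 0:
            return False                      # countermodel found
    return True

p, q = ('atom', 'p'), ('atom', 'q')
print(mbc_valid([p, ('not', p)], q))                    # False: Explosion fails
print(mbc_valid([('circ', p), p, ('not', p)], q))       # True: Gentle Explosion holds
print(mbc_valid([], ('or', p, ('not', p))))             # True: LEM is tautological
print(mbc_valid([], ('not', ('and', p, ('not', p)))))   # False: PNC can be falsified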
Although from the technical point of view both strategies are equivalent, there seems to be some philosophical weight in favour of the second one. The reason is that it is certainly obvious that Explosion is valid according to some operator; what is under dispute is whether our natural language negation validates it or not. Hence, it seems more coherent with the project, and with the idea that explosiveness is a material property, to add the consistency assumptions.

As an example, the following is one way to recapture every classically valid argument in mbC+, according to the second strategy. First, we need both theories to be stated in the same language, or to offer a suitable translation between them. In this case, we will take CL$\circ$ to be CL over the signature $\varSigma _{P\circ }$, where $\circ$ behaves sort of like a Top operator, in the sense that it maps everything to 1. The justification behind this is that, in classical languages, everything is well behaved, so asserting the consistency of a formula should always be a true statement. Given that Top is already definable in CL, there is a translation between CL and CL$\circ$, and we can safely say that CL$\circ$ is classical logic.

We will say an mbC+ model is classical with respect to $\varGamma$ if it is not the case that both $\phi$ and $\neg \phi$ get value 1, for any $\phi$ which is a subformula of a formula in $\varGamma$. If a model $\nu$ is classical with respect to some formulas, and $\nu *$ is classical with respect to a superset of them, we say $\nu *$ is an expansion of $\nu$. Given that classical models are classical with respect to any set, and given that in every mbC+-model which is classical with respect to $\varGamma$, all $\circ \gamma$ formulas get value 1, the set of CL$\circ$-valuations turns out to be a subset of the mbC+-valuations. On the other hand, not all mbC+-models are classical ones. Hence, we first show two facts regarding the behaviour of consistency in mbC+-models, and then prove the recovery result:

Fact 2.3 (Upwards Propagation of Consistency). If an mbC+-model $\nu$ is classical with respect to some set of propositional letters, it is classical with respect to every formula containing only those letters.
Proof. By induction on the complexity of the formula.

Fact 2.4 (Expansion of Consistency). If an mbC+-model $\nu$ is classical with respect to $\varGamma$, then given a superset $\varPi$ of $\varGamma$, $\nu$ can be expanded to a model $\nu *$ equivalent to it with respect to $\varGamma$, but which is classical also with respect to $\varPi$.
Proof. Let $\nu$ be a model classical with respect to $\varGamma$ and $\varPi$ a superset of $\varGamma$. For any $p\in \{\varPi \}_{p} - \{\varGamma \}_{p}$, if $\nu (p)=\nu (\neg p)$, change the value of one of them so that they differ. Because of Upwards Propagation of Consistency, the resulting model $\nu *$ is classical with respect to $\varPi$.

Fact 2.5 (Classical Recapture). $\varGamma \vDash _{CL\circ }\phi$ iff $\circ \{\varGamma \}_{p}, \circ \{\phi \}_{p}, \varGamma \vDash _{mbC+}\phi$.
Proof. Right to left is immediate by the inclusion of the set of CL$\circ$-models in those of mbC+. Left to right, we assume there is an mbC+-countermodel for $\circ \{\varGamma \}_{p}, \circ \{\phi \}_{p}, \varGamma \Vdash _{mbC+}\phi$, i.e.
a $\nu$ such that $\nu (\gamma )=1$ for every $\gamma \in \varGamma$, $\nu (\phi )=0$, and which is classical with respect to $\{\varGamma \}_{p}$ and $\{\phi \}_{p}$. Because of Upwards Propagation of Consistency, $\nu$ is classical with respect to $\varGamma$ and $\phi$. Hence, because of Expansion of Consistency, it can be expanded to a $\nu *$ which is equivalent to $\nu$ with respect to $\varGamma$ and $\phi$ and also classical with respect to every formula. Thus, $\nu ^{\ast }$ is a CL$\circ$-countermodel for $\varGamma \Vdash \phi$.

Even though there can be other sets of circled formulas that would also be sufficient to recover classical logic, this one seems to have at least enough philosophical motivation. On the one hand, it gives us a general recipe, which works for any argument of this logic. On the other hand, the set of propositional letters fixes, in a way, what the theory is about, and hence, if some kind of determinacy issue arises, it should be at the atomic level.

2.2 S-valuationist logics

S-valuationist logics are a family of semantic theories whose main application is the interpretation of languages containing vague predicates. SB was the first type of S-valuationist theory to appear: it was introduced in 1948 [12] under the name of 'discussive logic'.^5 Supervaluationism (SG), on the other hand, is usually attributed to [8], and has probably been popularized by [7]. Among vagueness theorists, some notable defenders of S-valuationism include Rosanna Keefe (see [13], [14]), Stewart Shapiro (see [17]), Dominic Hyde (see [9], [10], [11]) and perhaps Pablo Cobreros (see [4]).

From the formal point of view, an S-valuationist model for $\mathcal{L}_{P}$ is a space $P$, which is just a set of classical assignments $\nu$, sometimes called points. Conceptually, it is meant to represent the idea that our practices do not uniquely determine one interpretation for the language. Some sentences can acceptably be considered true and acceptably be considered false because, for instance, they are the application of a vague predicate to a borderline case of it. $P$ then contains all these acceptable models. Different consequence relations can be defined, depending on whether we consider the previous phenomenon to be, respectively, a case of under-determination or over-determination of the language.^6 We will present them first in a Set–Set framework:

Definition 2.6 (Global Consequence). $\varGamma \vDash _{SG}\varDelta$ iff, for every space $P$, if every $\gamma \in \varGamma$ is true at every point $\nu$ in $P$, then some $\delta \in \varDelta$ is true at every $\nu$ in $P$.

Definition 2.7 (Subconsequence). $\varGamma \vDash _{SB}\varDelta$ iff, for every space $P$, if for every $\gamma \in \varGamma$ there is a (perhaps different) point $\nu$ in $P$ where it is true, then for some $\delta \in \varDelta$ there is a (perhaps different) point $\nu$ in $P$ at which it is true.

In modal logic, consequence is usually taken to be local, because truth is understood as truth-in-a-world, and truth is supposed to be what is preserved in a valid argument. Here, points are not possible worlds but admissible interpretations, and truth (simpliciter) is instead equated either with supertruth or with subtruth. In the first case, the resulting logic is called SG, and it corresponds to a theory of vagueness as semantic under-determination, because borderline sentences are considered neither (super)-true nor (super)-false.
In the second case, the resulting logic is called SB, and it corresponds to a theory of vagueness as semantic over-determination, because borderline sentences are considered both (sub)-true and (sub)-false.

All members of the S-valuationist family share the idea that, although there is some sort of semantic failure in the language, this must not compel us to reject classical principles. Contrary to other under-determination logics like K3,^7 LEM should always be true, and contrary to other over-determination logics like LP, contradictions should always be untrue. The way to achieve this is through non-compositionality: even though there can be points in the space where $\phi$ is true and points where $\neg \phi$ is true, $\phi \vee \neg \phi$ will be true and $\phi \wedge \neg \phi$ will be false in all of them. More generally, when we restrict the previous definitions to the case where $\varGamma$ and $\varDelta$ are either empty or contain only a single formula, both SG and SB coincide with CL. Nevertheless, the equivalence breaks down when we let $\varGamma$—in the case of SB—and $\varDelta$—in the case of SG—be any kind of set. There is an ongoing debate on whether multi-premise classical validity is more natural (and hence, more important to preserve) than multi-conclusion validity. Since our interest is focused on the case where SB fails, we will keep on working with the ordinary, Set–Formula framework. In any case, it is worth noting that both logics are dual in the following sense (see [5]).

Definition 2.8 (Duality). L and L$^{\mathbf{\ast }}$ are duals if $\varGamma \vDash _{L}\varDelta$ iff $\neg \varDelta \vDash _{L{\ast }}\neg \varGamma .$

Finally, the divergence between multi- and single-premise arguments gives rise, in the case of SB, to the following phenomenon: \begin{equation*} \phi \wedge \neg \phi \vDash _{SB}\psi \qquad\textrm{but}\qquad \phi , \neg \phi \nvDash _{SB}\psi \end{equation*} So SB indeed satisfies our previous definition of paraconsistency, even though the validity of the first argument may justify calling it only weakly so.

3 Classical recovery for SB

3.1 First-level recovery

As we said, languages which get S-valuational treatments often contain vague predicates, and hence they also sometimes include intensional operators—usually called determinacy operators—designed to define the concept of borderline case. The sort of modal interpretation for $\Box$ will depend on one's understanding of the nature of borderline cases, and in particular, on the existence of higher-order vagueness. For the sake of simplicity, let us assume there is no such thing, so we do not need to worry about the possible difference between $\Box \phi$ and $\Box \Box \phi$. Then, we can do without any accessibility relation: $\Box \phi$ is true at a point if and only if $\phi$ is true at every point. Instead of taking $\Box$ as a primitive, we continue to employ the signature $\varSigma_{P\circ}$, reading $\circ \phi$ as saying that $\phi$ is supertrue or superfalse. The definition for the box is then as follows: \begin{equation*} \Box\phi\buildrel d \over = \circ\phi\wedge\phi. \end{equation*} We call SB$\circ$ the logic of the S-models for $\mathcal{L}_{P\circ}$, with the previous interpretation for $\circ$ and a subvaluationist consequence relation. It is easy to check that $\circ$ satisfies the requirements for a consistency operator.
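That check can also be carried out mechanically, by enumerating the spaces built from classical points for the atoms occurring in an argument (which is enough for these small examples). The following Python sketch is purely illustrative and not part of the paper; the encoding and function names are our own, and the clause for $\circ$ is the one just stipulated. The informal argument is given next.

from itertools import product, combinations

def atoms(formula):
    if formula[0] == 'atom':
        return {formula[1]}
    return set().union(*(atoms(sub) for sub in formula[1:]))

def value(formula, point, space):
    kind = formula[0]
    if kind == 'atom':
        return point[formula[1]]
    if kind == 'not':
        return 1 - value(formula[1], point, space)
    if kind == 'and':
        return min(value(formula[1], point, space), value(formula[2], point, space))
    if kind == 'or':
        return max(value(formula[1], point, space), value(formula[2], point, space))
    if kind == 'circ':
        # oA is true (at every point) iff A is supertrue or superfalse in the space
        vals = {value(formula[1], pt, space) for pt in space}
        return 1 if len(vals) == 1 else 0

def subtrue(formula, space):
    return any(value(formula, pt, space) == 1 for pt in space)

def sb_valid(premises, conclusion):
    letters = sorted(set().union(*(atoms(f) for f in premises + [conclusion])))
    points = [dict(zip(letters, bits)) for bits in product((0, 1), repeat=len(letters))]
    for k in range(1, len(points) + 1):
        for space in combinations(points, k):   # every non-empty space over these atoms
            if all(subtrue(g, space) for g in premises) and not subtrue(conclusion, space):
                return False                    # countermodel found
    return True

p, q = ('atom', 'p'), ('atom', 'q')
notp, cp = ('not', p), ('circ', p)
print(sb_valid([('and', p, notp)], q))                 # True:  a conjoined contradiction explodes
print(sb_valid([p, notp], q))                          # False: comma-separated premises do not
print(sb_valid([cp, p], q), sb_valid([cp, notp], q))   # False False
print(sb_valid([cp, p, notp], q))                      # True:  Gentle Explosion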
First, it is not in general true that $\{\circ \phi , \phi \}$ and $\{\circ \phi , \neg \phi \}$ imply every formula $\psi$, since, for some choices of $\phi$ and $\psi$, there are models where $\phi$ is true (false) at every point and $\psi$ is true at none. Second, there is no model where $\{\circ \phi , \phi , \neg \phi \}$ are all subtrue, for any $\phi$, because for $\circ \phi$ to be subtrue it is required that either $\phi$ or $\neg \phi$ be supertrue, and the supertruth of one precludes the subtruth of the other. This is enough for the PGE to be valid. Hence, the resulting logic is an LFI.

We want then to prove a result similar to the one about mbC+, and for that, we again need both logics to share the same language. As before, we take CL$\circ$, whose models are a subset of the models of SB$\circ$ (i.e. those where the space contains only one point). For the other direction, we need to identify which SB$\circ$-models can be turned into classical ones. We call an SB$\circ$-model classical with respect to $\varGamma$ if all its points agree on the value they assign to the formulas in $\varGamma$ and all of their subformulas. The relation of expansion between models remains the same.

Fact 3.1 (Upwards Propagation of Consistency). If an SB$\circ$-model $P$ is classical with respect to a set of propositional letters, it is classical with respect to every formula containing only those letters.
Proof. By induction on the complexity of the formula.

Fact 3.2 (Expansion of Consistency). If an SB$\circ$-model $P$ is classical with respect to $\varGamma$, then given a superset $\varPi$ of $\varGamma$, $P$ can be expanded into a model $P{\ast }$ equivalent to it with respect to $\varGamma$, which is classical also with respect to $\varPi$.
Proof. Let $P$ be a space classical with respect to $\varGamma$ and $\varPi$ a superset of $\varGamma$. For any $p\in \{\varPi \}_{p} - \{\varGamma \}_{p}$, if it receives different values at different points in $P$, take a space $P{\ast }$ where, instead of the points where $\nu (p)=1$ (0), there are points where $\nu (p)=0$ (1), but where all $p$ in $\{\varGamma \}_{p}$ remain the same. Because of the Upwards Propagation of Consistency, this space $P{\ast }$ is classical with respect to $\varPi$.

Fact 3.3 (Classical Recapture). $\varGamma \vDash _{CL\circ }\phi$ iff $\circ \{\varGamma \}_{p}$, $\circ \{\phi \}_{p}, \varGamma \vDash _{SB\circ }\phi$.
Proof. Right to left is immediate by the inclusion between models. Left to right, assume there is an SB$\circ$ countermodel of $\circ \{\varGamma \}_{p}$, $\circ \{\phi \}_{p}, \varGamma \Vdash \phi$, i.e. a space $P$ such that $\nu (\gamma )=1$ at some point $\nu$ for every $\gamma \in \varGamma$, $\nu (\phi )=0$ at every point $\nu$, and which is classical with respect to $\{\varGamma \}_{p}\cup \{\phi \}_{p}$. Because of Upwards Propagation of Consistency, $P$ is classical with respect to $\varGamma$ and $\phi$. Hence, because of Expansion of Consistency, it can be expanded to a space which is equivalent with respect to $\varGamma$ and $\phi$ and which is classical with respect to every formula.
Any point in that space is a CL$\circ$-countermodel for $\varGamma \Vdash \phi .$

Given that, according to our definition, in CL$\circ$ $\nu (\Box \phi )=\nu (\phi )$, someone could worry that we are translating truth in CL$\circ$ into supertruth in SB$\circ$, which can seem to undermine the idea behind the definition of logical consequence for SB$\circ$. Nevertheless, this problem is only illusory, given that in CL$\circ$ it is also the case that $\nu (\lozenge \phi )=\nu (\phi )$, where $\lozenge$ is defined in the usual way from $\Box$. What this means is that truth collapses with both supertruth and subtruth in classical contexts, which is perfectly sensible.

3.2 Total recapture

Does this result show that we have succeeded in our project of classical recovery? Not just yet. There are many things that go by the name of 'classical logic', some of them of more conceptual import, like bivalence, and some of them more formal. Among the second group, we have to consider not only inferences, but also metainferences, i.e. inferences from sets of inferences to inferences. For them, we can introduce a notion of metavalidity:

Definition 3.4 (Global Metavalidity). A metainference is globally L-valid (g-L-valid) iff the conclusion is L-valid when the premises are.

As [19] famously pointed out, the addition of modal operators to SG induces the failure of some classically valid schematic forms of metainferences. For instance, take the schematic metainference ($\vee$L), which is valid in SG: \begin{equation*} \frac{\varGamma ,\phi \Vdash \lambda \qquad\varGamma ,\psi \Vdash \lambda} {\varGamma ,\phi \vee \psi \Vdash \lambda}\vee L \end{equation*} When we add $\Box$, it becomes invalid, as this counterexample—where the premises are SG-valid and the conclusion is not—shows: \begin{equation*} \frac{p\Vdash \Box p \vee \Box \neg p \qquad \neg p\Vdash \Box p \vee \Box \neg p}{p\vee \neg p\Vdash \Box p \vee \Box \neg p.}\end{equation*} It then comes as no surprise that the same phenomenon arises in the dual case. Following the previous example, the corresponding failure would be ($\wedge R$): \begin{equation*} \frac{\varGamma \Vdash \phi \qquad\varGamma \Vdash \psi} {\varGamma \Vdash \phi \wedge \psi} \wedge R\end{equation*} The next metainference provides a counterexample to the previous schema: \begin{equation*} \frac{\lozenge p \wedge \lozenge \neg p \Vdash p \qquad \lozenge p \wedge \lozenge \neg p \Vdash \neg p} {\lozenge p \wedge \lozenge \neg p \Vdash p \wedge \neg p}\end{equation*} Given the aforementioned CL$\circ$-equivalence between $\phi$ and $\lozenge \phi$, the premise of the conclusion of the metainference is a contradiction, posing no problem for CL$\circ$.

Defenders of SG usually respond to an objection of this sort along the lines of 'when no D operator [$\Box$ in our signature] is involved, normal classical rules of inference remain intact' [13]. The problem is that, from the point of view of the recapture project—and contrary to what happens when the target is vague predicates—the goal was the opposite of pushing classical logic further away. What should happen is not that classical rules obtain when there is no consistency operator in play, but exactly the contrary. So it seems that we are robbing Peter to pay Paul. Is it possible to fix this new issue using the same operator? The answer is yes, and the strategy is basically the same as before.
For example, to recover ($\wedge$R), we just need to add the following assumptions of consistency: \begin{equation*} \frac{\circ \{\varGamma \}_{p}, \circ \{\phi \}_{p}, \varGamma \Vdash \phi \qquad \circ \{\varGamma \}_{p}, \circ \{\psi \}_{p},\varGamma \Vdash \psi } {\circ \{\varGamma \}_{p}, \circ \{\phi \}_{p},\circ \{\psi \}_{p},\varGamma \Vdash \phi \wedge \psi} \wedge R\end{equation*} This modified schematic metainference is now g-valid.^8 In fact, we can easily prove the corresponding general result:

Remark 3.5 (Classical Recapture 2). \begin{equation*} \frac{\varGamma _{1} \Vdash \phi _{1}\quad \dots\quad \varGamma _{n} \Vdash \phi _{n}}{\varPi \Vdash \psi}\end{equation*} is g-CL$\circ$-valid if and only if \begin{equation*} \frac{\circ \{ \varGamma _{1} \}_{p}, \circ \{ \phi _{1} \}_{p}, \varGamma _{1} \Vdash \phi _{1}\qquad \dots\qquad \circ \{ \varGamma _{n} \}_{p}, \circ \{ \phi _{n} \}_{p}, \varGamma _{n} \Vdash \phi _{n}}{\circ \{ \varPi \}_{p},\ldots , \circ \{ \psi \}_{p}, \varPi \Vdash \psi }\end{equation*} is g-SB$\circ$-valid.
Proof. From right to left: if, for some substitution instance, the premises of the first metainference are CL$\circ$-valid but the conclusion is not, then (i) because of Fact 3.3, the premises of the second metainference must be SB$\circ$-valid, and (ii) because of the inclusion of the models, the conclusion of the second metainference must be invalid. From left to right: if, for some substitution instance, the premises of the second metainference are SB$\circ$-valid but the conclusion is not, then (i) because of the inclusion of the models, the premises of the first metainference must be CL$\circ$-valid, and (ii) because of Fact 3.3, the conclusion of the first metainference must be CL$\circ$-invalid.

So, it seems that we can halt the impending disaster, and even better, we do not need extra resources to do so. Does the problem reappear at every level? A version of the previous example easily shows that the answer is yes. Take the double line to represent a metametainference, i.e. a consequence relation for metainferences (both the premises and the conclusion of the metametainference have no premises in this case): \begin{equation*} \underline{\underline{\overline{\lozenge p \wedge \lozenge \neg p \Vdash p} \qquad\overline{\lozenge p \wedge \lozenge \neg p \Vdash \neg p}}}\end{equation*} \begin{equation*} {\overline{\lozenge p \wedge \lozenge \neg p \Vdash p \wedge \neg p}} \end{equation*} This is just a level-up version of the example we used before. It has g-CL$\circ$-valid premises and conclusion, so it preserves g-CL$\circ$-validity; but even though the premises are also g-SB$\circ$-valid, the conclusion is not, so it does not preserve g-SB$\circ$-validity. But again, we can use the same trick to recover metametainferences too. This means that the technical part will not be an issue. Nevertheless, we still need to answer the philosophical question, i.e. whether it was worth the trouble of introducing a recovery operator, when that meant infecting the higher levels with a problem which was not there. We will argue that we might identify the real culprit somewhere else.

3.3 Kinds of metavalidity

The previous definition of metavalidity is not the only possible one. Following [6], we say that a model confirms an argument when it is not a countermodel for it. Then, an alternative concept is the following.
Definition 3.6 (Local Metavalidity). A metainference is locally L-valid iff every model that L-confirms the premises L-confirms the conclusion.

In classical logic, local validity for metainferences is stricter than its global counterpart, because there are cases which are g-L-valid merely because their premises are not L-valid. For example: \begin{equation*} \frac{\Vdash p}{\Vdash q.}\end{equation*} This divergence disappears when we look at classically valid schemas of metainferences, which is why the distinction is often overlooked. Nevertheless, in the subvaluationist case, the gap reopens. Given that a space confirms an argument if it either makes some premise superfalse or makes its conclusion subtrue, we can see in the following counterexample how the schematic metainference ($\wedge$R) is not l-SB-valid, without any need to resort to modal operators: \begin{equation*} \frac{\Vdash q \qquad\Vdash \neg q}{\Vdash q \wedge \neg q.} \end{equation*} G-SB-validity is here achieved simply because there is no counterexample to ($\wedge$R) where the premises are both SB-valid, which is exactly what the consistency operator provides. What is more, local validity not only invalidates ($\wedge$R), but is also 'paraconsistent', in the sense that it invalidates the following meta-form of Explosion: \begin{equation*} \frac{\Vdash p \qquad \Vdash \neg p} {\Vdash q} \end{equation*} The dual can also be said of SG, if we take confirmation to be either making some premise not supertrue or making the conclusion supertrue. And a similar phenomenon arises in many other logics, like for instance LP without a $\frac{1}{2}$ constant. What all these pieces of vocabulary are doing is precisely allowing us to express all semantic values with formulas. In the case of S-valuationism, without modalities we can express 'supertruth' (any tautology) and 'superfalsity' (any contradiction), but we cannot express their complements.

4 Conclusions

We have seen how, when we try to incorporate a recovery operator into a subvaluationist logic, we can recapture classicality at the base level, but in doing so we lose the global validity of classical metainferences. We have also shown how this new issue can be solved by iterating the process of recapture at every level. There are two possible lessons one can learn from this. On the one hand, we could see this as a reductio of the very idea of classical recovery, if we think that little is gained by validating classical inferences at one level, when that means committing to an endless hierarchy of recapture processes. The other possibility is to deny that those inferences were valid at the base level, by rejecting the concept of global validity for this kind of weak language. In this sense, it was not the operator that was wreaking havoc, but something that was already there, the 'third semantic value', which we just could not see because of the poverty of our vocabulary.

I will finish by commenting on another possible sense of classical recapture. One could expect SB$\circ$ to validate, not every CL$\circ$-valid argument, but those of a modal language with a bona fide modal operator and a local consequence relation, i.e. S5. Given that we are presenting SB with multiple premises, this choice would imply that SB$\circ$ is not weaker than classical logic—as it is in our framework—but incomparable, given that it invalidates Explosion, but validates $\lozenge \phi \Vdash \phi$, which S5 does not.
Thus, to prove an equivalence result we would need to strengthen the premises on the 'classical' side also. However, this translation, as opposed to the one we used, is conceptually more dubious in this context. In SB$\circ$, $\Box$ is supposed to mean consistent and true, whereas in S5, where everything is consistent but not everything is boxed, it has to mean something different (be it necessary truth or something akin to it). Hence, even though we can get a collapse result if we strengthen both logics, it is not clear what the philosophical significance of doing so would be, given that they are dealing with different concepts.

Footnotes
1. The interested reader can find axiomatic presentations of the LFIs referenced in this paper for instance in [2]. With respect to S-valuationisms, there are not many proof-theoretical treatments; a modal tableaux system can be found in [5].
2. The original presentation of LFIs is wider, and considers the case where the consistency of a formula is specified not only by another formula, but possibly by a set of them. We leave aside this generalization—which bears no particular significance to our concerns—to simplify the definition a bit. We also leave aside the weaker and stronger versions of the definition—which vary the placement of the quantifiers over sentences—again, without compromising anything of what will be said hereinafter.
3. See for instance [1].
4. For a discussion of the concept of epistemological contradictions and LFIs, see [15].
5. Although vagueness was indeed one of the applications Jaśkowski had in mind, the name of the theory comes from thinking about the logic that results from the combination of the assertions of different participants, who may not agree about everything.
6. There are actually many more ways to define logical consequence for an S-valuationally interpreted language, and one could make a case for several (if not all) of them being the best philosophically motivated one. See for instance [18], [4].
7. K3 is like LP, but taking only 1 to be a designated value.
8. Notice that—although structurally similar—we are not dealing with proper sequent rules, and hence we do not need to present them as pure rules, as may be desirable in proof theory.

References
[1] W. Carnielli and A. Rodrigues. Towards a philosophical understanding of the logics of formal inconsistency. Manuscrito, 38, 155–184, 2015.
[2] W. Carnielli and M. Coniglio. Paraconsistent Logic: Consistency, Contradiction and Negation. Logic, Epistemology, and the Unity of Science. Springer International Publishing, 2016.
[3] W. Carnielli, M. E. Coniglio and J. Marcos. Logics of formal inconsistency. In Handbook of Philosophical Logic, D. Gabbay and F. Guenthner eds, pp. 1–93. Springer, Netherlands, 2007.
[4] P. Cobreros. Supervaluationism and classical logic. In Vagueness in Communication, R. Nouwen, R. van Rooij, U. Sauerland and H.-C. Schmitz eds, pp. 51–63. Springer, Berlin Heidelberg, 2011.
[5] P. Cobreros. Vagueness: subvaluationism. Philosophy Compass, 8, 472–485, 2013.
[6] B. Dicher and F. Paoli. ST, LP and tolerant metainferences. In Graham Priest on Dialetheism and Paraconsistency, C. Başkent and T. Ferguson eds. Springer, 2018.
[7] K. Fine. Vagueness, truth and logic. Synthese, 30, 265–300, 1975.
[8] B. C. van Fraassen. Singular terms, truth-value gaps, and free logic.
Journal of Philosophy, 63, 481–495, 1966.
[9] D. Hyde. From heaps and gaps to heaps of gluts. Mind, 106, 641–660, 1997.
[10] D. Hyde. Vagueness, Logic and Ontology (Ashgate New Critical Thinking in Philosophy). Ashgate, 2008.
[11] D. Hyde. The prospects of a paraconsistent response to vagueness. In Cuts and Clouds: Vagueness, its Nature, and its Logic, R. Dietz and S. Moruzzi eds, pp. 385–405. Oxford University Press, 2009.
[12] S. Jaśkowski. Propositional calculus for contradictory deductive systems. Studia Logica, 24, 143–160, 1969.
[13] R. Keefe. Theories of Vagueness (Cambridge Studies in Philosophy). Cambridge University Press, 2000.
[14] R. Keefe. Vagueness: supervaluationism. Philosophy Compass, 3, 315–324, 2008.
[15] N. Loguercio and D. Szmuc. Remarks on the epistemic interpretation of paraconsistent logic. Principia: An International Journal of Epistemology, 22, 153–170, 2018.
[16] L. D. Rosenblatt. Two-valued logics for naive truth theory. Australasian Journal of Logic, 12, 2015.
[17] S. Shapiro. Vagueness in Context. Clarendon Press, 2006.
[18] A. C. Varzi. Supervaluationism and its logics. Mind, 116, 633–676, 2007.
[19] T. Williamson. Vagueness (Problems of Philosophy). Routledge, 1994.

© The Author(s) 2018. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permission@oup.com.
TI - Subvaluationism and classical recapture
JF - Logic Journal of the IGPL
DO - 10.1093/jigpal/jzy062
DA - 2018-11-27
ER -