EPISTEMIC MULTILATERAL LOGIC

1. Introduction

Modal expressions such as might and must can be used epistemically, for instance when one says that the Twin Prime Conjecture might be true and it might be false. When used in this way, they are known as epistemic modals and have been widely discussed in the recent literature, both in formal semantics and in philosophy. Challenges have been issued to the classic contextualist approach (Kratzer, 1977, 2012; DeRose, 1991) which aim to motivate relativist (MacFarlane, 2014), dynamic (Veltman, 1996; Willer, 2013), probabilistic (Swanson, 2006; Moss, 2015) and expressivist (Yalcin, 2007; Charlow, 2015) accounts.

Despite the recent flurry of interest, however, no general logical framework is available for reasoning involving epistemic modality. In this paper, we present such a framework. Technically, the framework is obtained by extending bilateral systems—that is, systems in which formulae are decorated with signs for assertion and rejection (Rumfitt, 2000)—with a marker for the speech act of weak assertion (Incurvati & Schlöder, 2019). Philosophically, the framework is developed from an inferentialist perspective. It respects well-known constraints on the acceptability of inference rules and hence provides the basis for an account of epistemic modality according to which the meaning of epistemic modals is given by their inferential role.

Although the logical framework is motivated from an inferentialist standpoint, it has the resources to account for several phenomena surrounding epistemic modality that have featured prominently in the recent literature. We focus here on Yalcin sentences (Yalcin, 2007), i.e. sentences like It is raining and it might not be raining, and related phenomena. We show that, when supplemented with an inferentialist account of supposition, the logical framework predicts that Yalcin sentences are infelicitous and remain so under supposition. By appealing to the notion of supposability, our explanation may be extended to generalizations of Yalcin sentences due to Santorio (2017) and Mandelkern (2019).

We begin by making a case for developing a logical framework for reasoning about epistemic modality using the tools of proof-theoretic semantics (§2). We then present the logical framework, which we dub epistemic multilateral logic (§3). We give a model theory for epistemic multilateral logic and prove that the logic is sound and complete with respect to this model theory (§4). We apply our framework, suitably extended with an account of supposition, to Yalcin sentences and generalized Yalcin sentences (§5). One remarkable feature of epistemic multilateral logic is that it extends classical logic, unlike several current approaches to epistemic modality. Indeed, epistemic multilateral logic extends the modal logic $\mathbf{S5}$. Issues for systems dealing with epistemic modality that extend $\mathbf{S5}$ have been recently discussed by Schulz (2010) and Bledin & Lando (2018). We argue that these issues can be dealt with by distinguishing between two notions of proof-theoretic consequence in epistemic multilateral logic (§6). We conclude by outlining some directions for future work (§7).

2. Inferentialism, bilateralism and multilateralism

Reasoning involving epistemic modality is widespread. For instance, suppose that one believes that Jane must be at the party. Then, it seems, one can conclude that Jane is at the party (von Fintel & Gillies, 2010).
This is an instance of an inference pattern one may call epistemic weakening. But another commonly recognized inference allows us to conclude Jane must be at the party from Jane is at the party. It has been argued (see, e.g., Schulz, 2010) that this latter inference, an instance of epistemic strengthening, is importantly different in its logical status from epistemic weakening. Our proof theory will enable us to trace the difference precisely. In particular, it will allow us to distinguish between derivations that clearly preserve evidence (such as those formalizing epistemic weakening) and those for which this is more controversial but can nonetheless be taken to preserve commitment (such as those formalizing epistemic strengthening). This is an advantage of a proof-theoretic approach: by inspecting the rules featuring in a given derivation, one can determine which style of reasoning it employs and hence the epistemic properties it preserves. Current accounts of epistemic modality are presented in a model-theoretic framework, and a proof theory for epistemic modality is at present not available.

We aim to repair the situation. We will develop a proof theory for epistemic modality from an inferentialist perspective. Inferentialism is the view that the meaning of an expression is given by its role in inferences. In the case of a logical expression, it has often been contended that its inferential role can be captured by its introduction and elimination rules in a natural deduction system. Thus, logical inferentialism is the view that the meaning of logical constants is given by such rules.

Logical inferentialism faces the problem that not every pair of introduction and elimination rules seems to confer a coherent meaning on the logical constant involved. Arthur Prior (1960) first raised the problem by exhibiting the connective $\mathsf{tonk}$. Adding $\mathsf{tonk}$ to a logical system makes it trivial: any sentence follows from any sentence whatsoever. Prior concluded that this sinks logical inferentialism. Inferentialists reacted by formulating criteria for the admissibility of inference rules which would rule out problematic constants such as $\mathsf{tonk}$. One prominent such criterion is harmony, the requirement that, for any given constant, there should be a certain balance between its introduction and elimination rules. We will present a natural deduction system for epistemic modality which complies with the harmony constraint and other standard proof-theoretic constraints.

In many cases, for instance when dealing with modal vocabulary, it is not straightforward to satisfy these constraints, witness the search for suitable natural deduction rules for the modal logics $\mathbf{S4}$ and $\mathbf{S5}$ (see, e.g., Poggiolesi & Restall, 2012; Read, 2015). Rather than being a hindrance, however, proof-theoretic constraints serve to narrow down the range of options available when developing a system for epistemic modality. This represents a further advantage of the proof-theoretic approach over the model-theoretic one, which is instead presented with several competing candidates, all of which appear plausible given the linguistic data.

The situation here should be familiar from the debate over the underdetermination of theory by data in the philosophy of science. Model-theoretic semantics is typically pursued in a bottom-up fashion by surveying a wide range of data and attempting to define truth-conditions as appropriate generalizations that account for these data.
Without any further constraints, it is wildly underdetermined what these truth-conditions should be. Proof-theoretic semantics, by contrast, proceeds in a top-down manner by developing a theory which satisfies a number of theoretical constraints (such as harmony) and testing whether the theory matches the data. We adopt the methodology of proof-theoretic semantics here. In §3, we develop an account of epistemic modality which satisfies the proof-theoretic constraints but is otherwise motivated only by simple data involving epistemic modals. We test it against more involved data in §5 and §6.

Besides seemingly ruling out $\mathsf{tonk}$, however, proof-theoretic constraints appear to sanction an intuitionistic logic, since the rules for classical negation in standard natural deduction systems do not seem to be harmonious. However, this is so because standard natural deduction systems only deal with asserted content: so-called bilateral systems—systems in which rejected content is also countenanced—are classical and satisfy the harmony constraint. Taking the rules of these systems to be meaning-determining leads to bilateralism, the view that the meaning of the logical constants is given by conditions on assertion and rejection. Here, assertion and rejection are speech acts expressing attitudes: assertion expresses assent, rejection expresses dissent. Importantly, rejection is, contra Frege (1919), distinct from, and not reducible to, the assertion of a negation.

Nonetheless, in standard bilateral systems (Smiley, 1996; Rumfitt, 2000), rejection and negative assertion have the same inference potential. That is, one can pass from the rejection of a proposition to its negative assertion, and vice versa. This is clearly encapsulated by the rules for negation of standard bilateral systems (see, e.g., Rumfitt, 2000), using $+$ and $\ominus$ as, respectively, markers for assertion and rejection.

Although these rules are harmonious, they appear not to match certain important linguistic data about rejection. For the presence of these rules means that in standard bilateral systems rejection is strong: from the rejection of p it is always possible to infer the assertion of not p. However, linguistic evidence suggests that this is not always possible: rejections can be weak. Consider, for instance, the following dialogue, based on Grice (1991):

(1) Alice: X or Y will win the election.
    Bob: No, X or Y or Z will win.

Bob is here rejecting what Alice said: he is expressing dissent from X or Y will win the election. But Bob's rejection is weak. It would be mistaken to infer that he is assenting to neither X nor Y will win the election. His utterance leaves open the possibility that X will win the election or that Y will.

In Incurvati & Schlöder (2017) we develop a bilateral logic in which rejection is weak. In this logic, however, the rules for negation are, on the face of it, not harmonious. The issue can be addressed by extending the bilateralist approach to a multilateralist one. In Incurvati & Schlöder (2019) we present evidence for the existence of a speech act of weak assertion, linguistically realized using perhaps in otherwise assertoric contexts. Thus, in uttering (2a) in standard contexts one performs the familiar assertion (henceforth strong assertion) of it is raining. By contrast, an utterance of (2b) serves to realize the weak assertion of it is raining.

(2) a. It is raining.
    b. Perhaps it is raining.
Strong assertion, rejection and weak assertion can be embedded within a Stalnakerian model of conversation. Stalnaker takes the essential effect of strongly asserting p to be that of proposing the addition of p to the common ground. In Incurvati & Schlöder (2017) we argue that the essential effect of rejecting p is to prevent the addition of p to the common ground (which is not the same as proposing to add not p to it). And in Incurvati & Schlöder (2019) we argue that the essential effect of weakly asserting p is to prevent the addition of not p to the common ground.

In the next section, we present a multilateral system involving weak assertion, strong assertion and (weak) rejection. Before doing so, however, we should clarify how consequence is best understood within a multilateral setting. Consider, for instance, the inference from the strong assertion of not A to the rejection of A. The validity of this inference does not mean that anyone strongly asserting not A is also explicitly rejecting A (see Dutilh Novaes, 2015 for a similar point). Nor does it mean that anyone assenting to not A is also in the cognitive state of dissenting from A: since arbitrarily many further attitudes follow from assent to any given proposition (e.g. by disjunction introduction), this notion of consequence would imply that anyone who expresses a single attitude towards a single proposition must have unboundedly many attitudes to unboundedly many propositions. This is implausible (Harman, 1986; Restall, 2005).

Properly understood, the multilateralist notion of consequence is social. The proof rules determine which attitudes one is committed to have (see also Searle & Vanderveken, 1985; Dutilh Novaes, 2015; Incurvati & Schlöder, 2017). Someone who explicitly assents to not A need not also hold the attitude of dissent towards A, since she may fail to draw the inference licensed by ($+\neg$E.). Nevertheless, we may say that she is committed to dissenting from A, since if the inference is pointed out to her, she must dissent from A or admit to a mistake.

3. Epistemic multilateral logic

We are now ready to present epistemic multilateral logic ($\mathsf{EML}$), a multilateral system for reasoning about epistemic modality. As in standard bilateral systems, in $\mathsf{EML}$ formulae are signed. More specifically, the language $\mathcal{L}_{EML}$ of $\mathsf{EML}$ is characterized as follows. We have a countable infinity of propositional atoms $p_1, p_2, \ldots$. We then say that A is a sentence of $\mathcal{L}_{EML}$ if it belongs to the smallest class containing the propositional atoms and closed under applications of the unary connective $\neg$, the binary connective $\wedge$ and the operator $\Diamond$. We define $A \rightarrow B$ in the usual classical way, that is as $\neg(A \wedge \neg B)$. Moreover, we define $\Box A$ as $\neg\Diamond\neg A$. Finally, we say that $\varphi$ is a signed formula if it is obtained by prefixing a sentence of $\mathcal{L}_{EML}$ with one of $+$, $\ominus$ and $\oplus$. These are force-markers and stand, respectively, for strong assertion, (weak) rejection and weak assertion. $\mathcal{L}_{EML}$ also includes a symbol $\perp$, but this will be considered neither a sentence nor a (0-place) connective, as is sometimes the case, but a punctuation mark indicating that a logical dead end has been reached (see Tennant, 1999; Rumfitt, 2000). In general, we use lowercase Greek letters for signed formulae and uppercase Latin letters for unsigned sentences.
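In summary, restating the definitions just given, the syntax of $\mathsf{EML}$ is as follows:

$$ \begin{align*} &\text{sentences:} && A ::= p \mid \neg A \mid (A \land A) \mid \Diamond A\\ &\text{defined connectives:} && A \rightarrow B := \neg(A \land \neg B), \qquad \Box A := \neg\Diamond\neg A\\ &\text{signed formulae:} && \varphi ::= +A \mid \ominus A \mid \oplus A \end{align*} $$

Recall that $\bot$ is a punctuation mark rather than part of the object language.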
3.1. Proof theory

The proof theory of $\mathsf{EML}$ is formulated by means of natural deduction rules. We briefly discuss the rules as described in Incurvati & Schlöder (2019).

For conjunction, we simply take the standard rules and prefix each sentence with the strong assertion sign. The rules say that from two strongly asserted propositions one can infer the strong assertion of their conjunction, and that from a strongly asserted conjunction one can infer the strong assertion of either conjunct.

Linguistic analysis reveals that weakly asserting a proposition has the same consequences as rejecting its negation (see Incurvati & Schlöder, 2019). Hence, the weak assertion of A and the rejection of not A should be interderivable. Linguistic analysis also shows that rejecting a proposition has the same consequences as weakly asserting its negation. Thus, the rejection of A and the weak assertion of not A should be interderivable too. This licenses the following rules for negation.

The rules for the epistemic possibility operator are based on two observations. The first is that uttering perhaps A has the same consequences as uttering it might be that A. It follows that the weak assertion of A and the strong assertion of $\Diamond A$ should be interderivable. The second observation is that uttering perhaps A has the same consequences as uttering perhaps it might be that A. It follows that the weak assertion of A and the weak assertion of $\Diamond A$ should be interderivable too.

The rules for $\neg$ and $\Diamond$ are clearly harmonious, since the elimination rules are the inverses of the corresponding introduction rules. They are also simple and pure in Dummett's (1991, p. 257) sense: only one logical constant features in them and this constant occurs only once.

We have presented the operational rules of $\mathsf{EML}$—that is, rules for the introduction and elimination of its operators. But we are not quite finished yet. For in multilateral systems (just as in bilateral systems) we also need rules that specify how the speech-act markers interact. Such rules are known as coordination principles and are needed to validate mixed inferences—inferences involving propositions that are uttered with different forces and are therefore prefixed by different signs. One example, involving strong assertion and rejection, is the seemingly valid pattern of inference that allows one to conclude the rejection of p from the strong assertion of if p, then q and the rejection of q (see Smiley, 1996). Another example, involving weak and strong assertion, is the inference pattern of weak modus ponens, which allows one to conclude the weak assertion of q from the weak assertion of p and the strong assertion of if p, then q (see Incurvati & Schlöder, 2019). That is, from if p, then q and perhaps p one may infer perhaps q.

As in standard bilateral systems, the rules coordinating strong assertion and rejection characterize these speech acts as contraries. (Rejection) states that strong assertion and rejection are incompatible: it is absurd to both propose and prevent the addition of the same proposition to the common ground. (SR$_1$) says that if, in the presence of one's extant commitments, strongly asserting a proposition leads to absurdity, then one is already committed to preventing the addition of that proposition to the common ground. A similar reading of (SR$_2$) is available. The conjunction of (SR$_1$) and (SR$_2$) is known as the Smileian reductio principle, after Smiley (1996).
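Before turning to the remaining coordination principles, it may help to have the rules described so far in schematic form. The following is a compact restatement of the prose descriptions above (the official natural deduction figures are given in Incurvati & Schlöder, 2019); we write $\dashv\vdash$ for interderivability and label the two directions of each pair I./E. in the manner of the text:

$$ \begin{align*} &(+\land\text{I.})\colon\ +A,\ +B \vdash +(A\land B) \qquad (+\land\text{E.})\colon\ +(A\land B) \vdash +A \quad\text{and}\quad +(A\land B) \vdash +B\\ &(\ominus\neg\text{I./E.})\colon\ \oplus A \dashv\vdash \ominus\neg A \qquad\qquad (\oplus\neg\text{I./E.})\colon\ \ominus A \dashv\vdash \oplus\neg A\\ &(+\Diamond\text{I./E.})\colon\ \oplus A \dashv\vdash +\Diamond A \qquad\qquad (\oplus\Diamond\text{I./E.})\colon\ \oplus A \dashv\vdash \oplus\Diamond A\\ &(\text{Rejection})\colon\ +A,\ \ominus A \vdash \bot\\ &(\text{SR}_1)\colon\ \text{if } \Gamma, +A \vdash \bot \text{ then } \Gamma \vdash \ominus A \qquad (\text{SR}_2)\colon\ \text{if } \Gamma, \ominus A \vdash \bot \text{ then } \Gamma \vdash +A \end{align*} $$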
The remaining coordination principles characterize weak assertion as subaltern to its strong counterpart. We write $+\vdots$ for a derivation in which all premisses and undischarged assumptions are strongly asserted sentences, i.e. formulae of the form $+A$. Since $\bot$ is treated as a punctuation mark, in (Weak Inference) we distinguish between the case in which one infers $+B$ in the subderivation and the case in which one infers $\bot$. In the former case, (Weak Inference) allows one to conclude $\oplus B$; in the latter case, it allows one to conclude $\bot$.

(Assertion) ensures that in performing the strong assertion of a proposition, one is committed to dissenting from its negation. (Weak Inference) ensures that weak assertion is closed under strongly asserted implication and hence that inferences like if p then q; perhaps p; therefore perhaps q are valid. For if one's evidential situation sanctions perhaps p and one knows that any situation where p is the case is also a situation where q is the case, then one is entitled to conclude perhaps q.

The restrictions on the subderivation of (Weak Inference) are due to the specificity problem (Incurvati & Schlöder, 2017). As noted by Imogen Dickie (2010), a correct strong assertion of A requires there being evidence for A. By contrast, the speech act of weak rejection is ‘messy’ in that it makes unspecific demands about evidence: a weak rejection of A may be correct because there is evidence against A, but may also be correct because of the (mere) absence of evidence for A (Incurvati & Schlöder, 2017). This means the following for the justification of (Weak Inference). Suppose that in the base context one's evidential status sanctions perhaps A. To apply (Weak Inference), one then considers the hypothetical situation in which A is the case and attempts to derive B. In the hypothetical situation in which A is the case, there may be evidence that is not available in the base context (e.g. evidence for A). Thus, in the subderivation of (Weak Inference), one may not use rejected premisses from the base context that one rejects for lack of evidence. Due to the unspecificity of weak rejection, this means that one may not use any premisses that are weakly rejected. Since $\oplus$ switches with $\ominus$ and $\oplus$ switches with $+\Diamond$, one may also not use any weakly asserted premisses or apply $\Diamond$-Eliminations to strongly asserted premisses.

3.2. Derived rules

This concludes the exposition of the proof theory of $\mathsf{EML}$, and we use $\vdash$ to denote derivability in $\mathsf{EML}$. To simplify the presentation in the remainder of the paper, however, it will be useful to present some additional derived rules of $\mathsf{EML}$, characterizing the behavior of the primitive connectives under some of the speech acts not covered by the basic rules and the behavior of the defined connectives. This will also serve to give a flavour of how derivations in $\mathsf{EML}$ work.

We begin with a rule which specifies how to introduce the strong assertion of a negation.

Proposition 3.1. The following rule is derivable in $\mathsf{EML}$.

Proof. $\mathsf{EML}$ derives ($+\neg$I.) as follows. □
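Read together with Lemma 4.5 below, ($+\neg$I.) licenses the step from $\Gamma, +A \vdash \bot$ to $\Gamma \vdash +\neg A$, where the subderivation is subject to the same restrictions as (Weak Inference). A sketch of one way to obtain this from the rules above (our reconstruction):

$$ \begin{align*} &1.\ \ominus\neg A &&\text{assumption, to be discharged by (SR}_2\text{)}\\ &2.\ \oplus A &&\text{from 1 by } (\ominus\neg\text{E.})\\ &3.\ \bot &&\text{from 2 and the subderivation } \Gamma, +A \vdash \bot \text{ by (Weak Inference)}\\ &4.\ +\neg A &&\text{from 1–3 by (SR}_2\text{), discharging } \ominus\neg A \end{align*} $$

On this reconstruction, the rule inherits the restrictions of (Weak Inference) from step 3.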
Next, we have rules specifying the behavior of conjunction under weak assertion.

Proposition 3.2. The following rules are derivable in $\mathsf{EML}$.

Proof. $\mathsf{EML}$ derives ($\oplus\land$I.$_1$) as follows. The case of ($\oplus\land$I.$_2$) is analogous. ($\oplus\land$E.$_1$) follows from (Weak Inference): ($\oplus\land$E.$_2$) is analogous. □

Note that these derived rules are applicable in arbitrary proof contexts, even when their premisses are derived using $\Diamond$-Elimination rules. While the derivations make use of ($+\neg$I.) and (Weak Inference)—rules that disallow $\Diamond$-Eliminations—the applications of ($+\neg$I.) and (Weak Inference) occur within self-contained subderivations and hence are correct irrespective of the wider proof context.

3.2.1. The material conditional

As mentioned, we are taking the conditional $A \rightarrow B$ to be defined as $\neg(A \wedge \neg B)$. Given the classicality of our calculus (to be shown below), this means that the conditional will be material. The following rules for strongly asserted conditionals can be derived.

Proof. ($+\rightarrow$I.) is derivable as follows. And ($+\rightarrow$E.) is derivable as follows: □

In addition, one can derive the rule of weak modus ponens (WMP).

Proof. □

By ($+\Diamond$I.) and ($+\Diamond$E.), (WMP) entails the derivability of epistemic modus ponens (EMP).

3.2.2. Necessity modal

$\Box A$ is defined as $\neg\Diamond\neg A$. The following are derived rules for $\Box$:

Proof. ($+\Box$I.) is derivable as follows: ($+\Box$E.) is derivable as follows: □

Note that the derivability of ($+\Box$I.) does not imply that $+(A\rightarrow \Box A)$, since the derivation of $+\neg\Diamond\neg A$ from $+A$ uses a $\Diamond$-Elimination rule, which rules out an application of ($+\rightarrow$I.). In fact, there is no derivation of $+(A\rightarrow \Box A)$ in $\mathsf{EML}$, as shown by the soundness of $\mathsf{EML}$ with respect to $\mathbf{S5}$ modulo an appropriate translation (see §4).
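For ease of reference, the derived rules discussed in §§3.2.1–3.2.2 can be stated schematically as follows. The formulations are ours, read off from the surrounding discussion; in particular, the subderivation in ($+\rightarrow$I.) is subject to the same restrictions as (Weak Inference), which is why ($+\Box$I.) does not yield $+(A \rightarrow \Box A)$.

$$ \begin{align*} &(+\rightarrow\text{I.})\colon\ \text{if } \Gamma, +A \vdash +B \text{ by a subderivation obeying the restrictions on (Weak Inference), then } \Gamma \vdash +(A\rightarrow B)\\ &(+\rightarrow\text{E.})\colon\ +(A\rightarrow B),\ +A \vdash +B \qquad (\text{WMP})\colon\ +(A\rightarrow B),\ \oplus A \vdash \oplus B \qquad (\text{EMP})\colon\ +(A\rightarrow B),\ +\Diamond A \vdash +\Diamond B\\ &(+\Box\text{I.})\colon\ +A \vdash +\Box A \qquad (+\Box\text{E.})\colon\ +\Box A \vdash +A \end{align*} $$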
3.3. Classicality

We now show that, in a defined sense, the logic of strong assertion extends classical logic (in much the same way that normal modal logics extend classical logic).

Let $\sigma: \text{At} \rightarrow \text{wff}_{\text{EML}}$ be any mapping from propositional atoms to formulae in the language $\mathcal{L}_{EML}$ of $\mathsf{EML}$. If A is a formula in the language $\mathcal{L}_{PL}$ of propositional logic, write $\sigma[A]$ for the $\mathcal{L}_{EML}$-formula that results from replacing every atom p in A with $\sigma(p)$. Moreover, let $\models^{\text{CPL}}$ be the consequence relation of classical propositional logic. We have:

Theorem 3.3 (Supra-Classicality). Let $\Gamma$ be a set of $\mathcal{L}_{PL}$-formulae and A an $\mathcal{L}_{PL}$-formula. If $\Gamma \models^{\text{CPL}} A$, then $\{+\sigma[B] \mid B \in \Gamma\} \vdash +\sigma[A]$.

Proof. Since $\mathsf{EML}$ validates modus ponens, it suffices to show that the strongly asserted versions of the axioms of the propositional calculus are theorems of $\mathsf{EML}$. That is, for arbitrary A, B and C in $\mathcal{L}_{EML}$, we have:

$$ \begin{align*} \begin{array}{l} \vdash +(A \rightarrow \neg\neg A),\\ \vdash +(\neg\neg A\rightarrow A),\\ \vdash +((A \rightarrow B) \rightarrow (\neg B \rightarrow \neg A)),\\ \vdash +((A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C))),\\ \vdash +((A \rightarrow (B \rightarrow C)) \rightarrow (B \rightarrow (A \rightarrow C))),\\ \vdash +(A \rightarrow (B \rightarrow A)). \end{array} \end{align*} $$

These are easy to check (see Incurvati & Schlöder, 2019). □

Together with the Soundness result (Theorem 4.1) we will prove in §4, this yields the following corollary.

Corollary 3.4 (Classicality). Let $\Gamma$ be a set of $\mathcal{L}_{PL}$-formulae and A an $\mathcal{L}_{PL}$-formula. $\Gamma \models^{\text{CPL}} A$ iff $\{+B \mid B \in \Gamma\} \vdash +A$.

Proof. The left-to-right direction follows from Theorem 3.3. The right-to-left direction is a corollary of $\mathbf{S5}$-Soundness (Theorem 4.1): if $\{+B \mid B \in \Gamma\} \vdash +A$, then $\{\Box B \mid B \in \Gamma\} \models^{\mathbf{S5}} \Box A$, which for propositional $\Gamma$ and A entails $\Gamma \models^{\text{CPL}} A$, since all worlds in an $\mathbf{S5}$-model are models of classical propositional logic. □

3.4. $\mathsf{EML}$ extends $\mathbf{S5}$

We now show that $\mathsf{EML}$ extends $\mathbf{S5}$, i.e. that the logic of strong assertion validates every $\mathbf{S5}$-valid argument. To this end, we shall prove that if A is an $\mathbf{S5}$-tautology, then its strongly asserted counterpart is a theorem of $\mathsf{EML}$. That is:

Theorem 3.5. If $\models^{\mathbf{S5}} A$, then $\vdash +A$.

This will yield the desired result.

Corollary 3.6. If $\Gamma \models^{\mathbf{S5}} A$, then $\{+B \mid B \in \Gamma\} \vdash +A$.

Proof. If $\Gamma \models^{\mathbf{S5}} A$, then there is a finite $\Gamma'$ such that $\models^{\mathbf{S5}} \bigwedge_{B\in\Gamma'} B \rightarrow A$. By Theorem 3.5, $\vdash +(\bigwedge_{B\in\Gamma'} B \rightarrow A)$. Since the ($+\rightarrow$E.) rule is derivable in $\mathsf{EML}$, it follows that $\{+B \mid B \in \Gamma\} \vdash +A$. □

Towards the proof of Theorem 3.5, we first prove a technical lemma.

Lemma 3.7. The following rule is derivable in $\mathsf{EML}$.

Proof. □

We are now ready to prove the main result of this section.

Proof of Theorem 3.5. $\mathsf{EML}$ proves all substitution-instances of classical tautologies (Theorem 3.3). Moreover, by the ($+\Box$I.) rule, we have that if $\vdash +A$ then also $\vdash +\Box A$. Thus, it suffices to show that the $\mathbf{KT5}$ axioms are derivable in $\mathsf{EML}$.

Axiom $\mathbf{K}$ can be written on the signature $\{\neg,\land,\Diamond\}$ as

$$ \begin{align*} (K)\ \ \neg(\neg\Diamond(A\land\neg B) \land \neg\Diamond\neg A \land \Diamond\neg B). \end{align*} $$

The following derivation witnesses $\vdash +\neg(\neg\Diamond(A\land\neg B) \land \neg\Diamond\neg A \land \Diamond\neg B)$:

Axiom $\mathbf{T}$ follows immediately from ($+\Box$E.)
and ($+\rightarrow$I.), as the derivation of ($+\Box$E.) does not involve $\Diamond$-Elimination rules.

Axiom $\mathbf{5}$ can be written on the signature $\{\neg,\land,\Diamond\}$ as $\neg(\Diamond A \land \Diamond\neg\Diamond A)$. The following derivation witnesses $\vdash +\neg(\Diamond A \land \Diamond\neg\Diamond A)$: □

The derivations of Axioms K and 5 make use of $\Diamond$-Elimination rules, but we can apply these axioms within (Weak Inference) by assuming them and then discharging them. To wit, suppose that from $+A$ we can infer $+B$ using Axiom K. Then the following application of (Weak Inference) is, strictly speaking, incorrect. But we may still derive $\oplus B$ from $\oplus A$ as follows. Here, the application of (Weak Inference) is licit since it uses $+(K)$ as a dischargeable assumption. If this application of (Weak Inference) occurs within another application of (Weak Inference), one can defer the derivation of $+(K)$ to the outermost proof level. Clearly, this method generalizes to uses of Axiom 5 within applications of (Weak Inference). So we will use the $\mathbf{S5}$ axioms freely from now on. In general, if $+A$ is a theorem—if one can show it from no side premisses, even when using $\Diamond$-Eliminations—one may use $+A$ even in proof contexts that disallow $\Diamond$-Eliminations by assuming $+A$, conditionalizing on A and deriving $+A$ at the outermost proof level.

4. Model theory for epistemic multilateral logic

We have motivated epistemic multilateral logic from an inferentialist perspective and have therefore focused on the proof theory so far. We now proceed to provide a model theory for the logic. We do this by providing a translation of epistemic multilateral logic into modal logic and showing that the result is sound and complete with respect to $\mathbf{S5}$.

4.1. Soundness

We begin with the translation. The idea is to translate strong assertion with necessity, rejection with possible falsity, and weak assertion with possibility, but note that this is a translation only in the technical sense: it is not intended to provide the intended interpretation of the force-markers. Formally, we define a mapping $\tau$ from $\mathcal{L}_{EML}$-formulae to formulae in the language $\mathcal{L}_{ML}$ of modal logic.

$$ \begin{align*}\tau(\varphi) = \begin{cases} \Box\psi\text{, if }\varphi = +\psi\\ \Diamond\neg\psi\text{, if }\varphi = \ominus\psi\\ \Diamond\psi\text{, if }\varphi = \oplus\psi \end{cases}\end{align*} $$

If $\Gamma$ is a set of $\mathcal{L}_{EML}$-formulae, we write $\tau[\Gamma]$ for $\{\tau(\varphi): \varphi \in \Gamma\}$.

We now prove that, under this translation, $\mathsf{EML}$ is sound with respect to $\mathbf{S5}$. That is:

Theorem 4.1 (Soundness). Let $\Gamma$ be a set of $\mathcal{L}_{EML}$-formulae. If $\Gamma \vdash \varphi$ then $\tau[\Gamma] \models^{\mathbf{S5}} \tau(\varphi)$.
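Two quick sanity checks may help fix ideas (the examples are ours): the coordination principle (Assertion) and the derived rule ($+\Box$E.) translate, respectively, into the $\mathbf{S5}$-valid entailments

$$ \begin{align*} \tau(+A) = \Box A \models^{\mathbf{S5}} \Diamond A = \tau(\oplus A) \qquad\text{and}\qquad \tau(+\Box A) = \Box\Box A \models^{\mathbf{S5}} \Box A = \tau(+A). \end{align*} $$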
The main challenge is to show that the restrictions on (Weak Inference) are effective in ensuring the soundness of the calculus. To this end, let $\mathsf{EML}^+$ be the calculus of $\mathsf{EML}$ plus the derivable rules for conjunction under weak assertion, which we recall for convenience. Since these are derivable in $\mathsf{EML}$, the soundness of $\mathsf{EML}$ is equivalent to the soundness of $\mathsf{EML}^+$. We use $\vdash^+$ to denote derivability in $\mathsf{EML}^+$ and write $\Gamma \vdash^+_D \varphi$ to indicate that the derivation D witnesses the existence of this derivability relation between $\Gamma$ and $\varphi$.

Now let $\mathsf{EML}^-$ be the calculus of $\mathsf{EML}^+$ without (Weak Inference) and write $\vdash^-$ for the resulting derivability relation. By inspecting the proofs of ($+\rightarrow$E.) and (WMP), one can see that ($+\rightarrow$E.) and (WMP) are derivable in $\mathsf{EML}^-$. (The conditional proof rule ($+\rightarrow$I.) is not derivable in $\mathsf{EML}^-$, but this does not matter for present purposes.)

Now, it is straightforward to show that $\mathsf{EML}^-$ is sound with respect to $\mathbf{S5}$ modulo the translation $\tau$.

Theorem 4.2 (Pre-Soundness). Let $\Gamma$ be a set of $\mathcal{L}_{EML}$-formulae and $\varphi$ an $\mathcal{L}_{EML}$-formula. If $\Gamma \vdash^- \varphi$ then $\tau[\Gamma] \models^{\mathbf{S5}} \tau(\varphi)$.

The proof is a standard induction on the length of derivations and is therefore omitted. Next, we prove the soundness of the full calculus. First, we need an auxiliary definition and a technical lemma. The following definition provides the tools to rewrite a proof D not involving $\Diamond$-Eliminations to a proof where $\Diamond$s only occur in sentences that translate back to $\mathbf{S5}$-tautologies.

Definition 1. Suppose $\Gamma \vdash^+_D \varphi$ where D does not use $\Diamond$-Elimination rules, $\Gamma$ contains only strongly asserted formulae and D uses all premisses in $\Gamma$ (in particular, then, $\Gamma$ is finite).

Construct a mapping $\pi^D$ as follows: for each formula Z that occurs anywhere in D pick an unused (in D) propositional atom $c_Z$ and let $\pi^{D}(+Z) = +c_Z$, $\pi^{D}(\ominus Z) = \ominus c_Z$ and $\pi^{D}(\oplus Z) = \oplus c_Z$ (this is easily possible, since D is finite).

Let $\Sigma^D$ be the set containing exactly the following formulae. For any formulae X and Y occurring anywhere in D:

a. $+(c_{\neg X} \rightarrow \neg c_X)$ and $+(\neg c_X \rightarrow c_{\neg X})$.
b. $+(c_{\neg\neg X} \rightarrow c_X)$ and $+(c_X \rightarrow c_{\neg\neg X})$.
c. $+(c_{X \land Y} \rightarrow (c_X \land c_Y))$ and $+((c_X \land c_Y) \rightarrow c_{X \land Y})$.
d. $+(c_X \rightarrow c_{\Diamond X})$.
e. $+(\Diamond c_{\Diamond X} \rightarrow c_{\Diamond X})$.

Note that all formulae in $\Sigma^D$ substitute to $\mathbf{S5}$-tautologies under the map $c_X \mapsto X$ (i.e. the inverse of $\pi^D$).
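For illustration (the example is ours): if D contains an application of ($\oplus\Diamond$I.) moving from $\oplus p$ to $\oplus\Diamond p$, then $\pi^D$ replaces the formulae involved by fresh atoms $c_p$ and $c_{\Diamond p}$, and clause (d.) supplies the premiss needed to simulate the step by (WMP); under the inverse substitution $c_X \mapsto X$, this premiss becomes the $\mathbf{S5}$-tautology $p \rightarrow \Diamond p$:

$$ \begin{align*} \oplus c_p,\ +(c_p \rightarrow c_{\Diamond p}) \ \vdash^+ \ \oplus c_{\Diamond p} \quad\text{by (WMP)}. \end{align*} $$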
The following lemma shows that these added assumptions suffice to rewrite the proof D under the translation $\pi^D$.

Lemma 4.3. Suppose $\Gamma \vdash^+_D \varphi$ where D does not use $\Diamond$-Elimination rules and $\Gamma$ contains only strongly asserted formulae. Then there is a derivation $D'$ such that $\pi^D[\Gamma] \cup \Sigma^D \vdash^+_{D'} \pi^D(\varphi)$ and $D'$ contains no more applications of (Weak Inference) than D.

Proof. We show by induction on the length n of D that every derivation D can be rewritten to a derivation $D'$ as in the Lemma. The base case $n=1$ is trivial since if D has length $1$, then $\varphi \in \Gamma$ and hence also $\pi^D(\varphi) \in \pi^D[\Gamma]$.

So let D be a derivation of length $n > 1$. By the induction hypothesis, we know that all proper subderivations of D can be rewritten as required for the Lemma. Hence, it suffices to show that the last rule applied in D can be rewritten as well. In the induction steps we will use derivations E that are proper subderivations of D. Without loss of generality, we can assume that $\pi^E \subseteq \pi^D$ for all such cases, i.e. that $\pi^E(\varphi) = \pi^D(\varphi)$ for all $\varphi$ occurring in E. (If $\pi^E$ is different, one only needs to apply an appropriate substitution.) In particular, this means that $\Sigma^E \subseteq \Sigma^D$. Also, we usually write $+c_X$ ($\ominus c_X$, $\oplus c_X$) for $\pi^D(+X)$ ($\pi^D(\ominus X)$, $\pi^D(\oplus X)$).

The coordination principles require no rewriting aside from substituting $c_X$ for X.

• If the last rule applied in D is (Assertion) to conclude $\oplus X$, then there is a shorter derivation E such that $\Gamma \vdash^+_E +X$. By the induction hypothesis, there is an $E'$ such that $\pi^D[\Gamma] \cup \Sigma^D \vdash^+_{E'} +c_X$. We then obtain $D'$ from $E'$ by a final application of (Assertion) to derive $\oplus c_X$ from $+c_X$. Thus, $\pi^D[\Gamma] \cup \Sigma^D \vdash^+ \pi^D(\oplus X)$.

• (Weak Inference), (Rejection), (SR$_1$) and (SR$_2$) can be immediately transformed like (Assertion).

The other rules require some work and use the premisses added to $\Sigma^D$. We only show a selection of cases, since the method is uniform.

• The clauses (a.), (b.) and (c.) of the construction of $\Sigma^D$ can be used to translate applications of ($+\land$I.), ($+\land$E.), ($\oplus\land$I.), ($\oplus\land$E.), ($\ominus\neg$E.), ($\ominus\neg$I.), ($\oplus\neg$I.) and ($\oplus\neg$E.). This is easy to check: the formulae in $\Sigma^D$ in combination with ($+\rightarrow$E.) and (WMP) allow us to make the appropriate inferences. To illustrate the method, we cover the ($\ominus\neg$E.) case: if D concludes with an application of ($\ominus\neg$E.) to move from $\ominus\neg Z$ to $\oplus Z$, there is a derivation E such that $\pi^D[\Gamma]\cup\Sigma^D \vdash^+_E \ominus c_{\neg Z}$. Then obtain $\pi^D[\Gamma]\cup\Sigma^D \vdash^+ \oplus c_Z$ as follows:

• It is left to treat applications of ($\oplus\Diamond$I.) and ($+\Diamond$I.) in D. So suppose first that D concludes with an application of ($\oplus\Diamond$I.)
to move from $\oplus Z$ to $\oplus\Diamond Z$. By the induction hypothesis, there is a derivation E such that $\pi^D[\Gamma]\cup\Sigma^D \vdash^+_E \oplus c_Z$. We then obtain $\pi^D[\Gamma]\cup\Sigma^D \vdash^+ \oplus c_{\Diamond Z}$ as follows:

Next, suppose that D concludes with an application of ($+\Diamond$I.) to move from $\oplus Z$ to $+\Diamond Z$. By the induction hypothesis, there is a derivation E such that $\pi^D[\Gamma]\cup\Sigma^D \vdash^+_E \oplus c_Z$. We then obtain $\pi^D[\Gamma]\cup\Sigma^D \vdash^+ +c_{\Diamond Z}$ as follows:

This concludes the induction. □

Note that no applications of (Weak Inference) were added when translating D to $D'$, since (WMP) and ($+\rightarrow$E.) can be derived in $\mathsf{EML}^+$ from ($\oplus\land$E.) without using (Weak Inference).

Now we are ready to prove the Soundness of the full calculus.

Proof of Theorem 4.1. We prove the statement of the theorem for $\vdash^+$, which immediately entails the theorem. Without loss of generality, we may assume that $\Gamma$ contains only strongly asserted formulae: if it contains $\ominus A$, one can substitute with $+\Diamond\neg A$, and if it contains $\oplus A$, one can substitute with $+\Diamond A$.

The proof proceeds by induction on the number n of times that (Weak Inference) is applied in a derivation. The base case $n=0$ is exactly Theorem 4.2.

Suppose that Soundness holds for all derivations D in which (Weak Inference) is applied fewer than n times. We want to show that derivations with n applications are sound. Let D be a derivation with n applications of (Weak Inference) and consider any subderivation $D'$ that ends in one such application. Note that we do not need to treat applications of (Weak Inference) to conclude $\bot$, since they are equivalent to the case in which B is $p\land\neg p$ for an arbitrary p. Thus, the local proof context is this:

In this situation, $D'$ does not use $\Diamond$-Elimination rules, and there is a finite subset $\Gamma' \subseteq \Gamma$ such that all formulae in $\Gamma'$ are signed by $+$, such that $\Gamma', +A \vdash^+_{D'} +B$ and $\Gamma' \vdash^+ \oplus A$. To conclude the proof, it suffices to show that $\tau[\Gamma'] \models^{\mathbf{S5}} \tau(\oplus B)$.

For readability we will henceforth omit mentioning $\tau$, so that, say, $\Gamma' \models^{\mathbf{S5}} +A$ is understood to stand for $\tau[\Gamma']\models^{\mathbf{S5}} \tau(+A)$. Since $D'$ contains fewer than n applications of (Weak Inference), by the induction hypothesis we have that

$$ \begin{align} \Gamma', +A \models^{\mathbf{S5}} +B \tag{I1} \end{align} $$

and that

$$ \begin{align} \Gamma' \models^{\mathbf{S5}} \oplus A. \tag{I2} \end{align} $$

The proof that $\Gamma' \models^{\mathbf{S5}} \Diamond B$ now proceeds in two steps.
i. We show that $\Gamma' \models^{\mathbf{S5}} \oplus B$ if A and B are $\mathcal{L}_{PL}$-formulae and $\Gamma'$ can be split into $\Gamma' = \Delta \cup \Theta$ such that: for all $+C \in \Delta$, C is an $\mathcal{L}_{PL}$-formula; and for all $+D \in \Theta$, $D = \Diamond X \rightarrow X$ for some $\mathcal{L}_{PL}$-formula X.

ii. By Lemma 4.3, any other application of (Weak Inference) can be reduced to (i.).

So, first assume that A and B are $\mathcal{L}_{PL}$-formulae and $\Gamma' = \Delta \cup \Theta$ as above. We need to show that for any model $V = \langle W^V,R^V,V^V,w^V \rangle$ of $\Gamma'$, we have that $V, w^V \Vdash \Diamond B$, where $\Vdash$ is the usual satisfaction relation for worlds in modal logic. Assume for reductio that there is a counterexample, i.e. a V with $V,w^{V} \Vdash \Box\neg B$. By the induction hypothesis and in particular (I2), we also have that $V,w^V \Vdash \Diamond A$. It follows that $V,w^V \Vdash \Diamond(A\land\neg B)$. Let $v \in W^{V}$ be a witness, i.e. $V,v \Vdash A \land \neg B$. Note that for all $+C \in \Delta$ we have that $V,v \Vdash C$, since $V,w^{V} \Vdash \Box C$.

Now consider the model $V'$ such that: $W^{V'} = \{v\}$, $V^{V'}(v) = V^{V}(v)$, $R^{V'} = \{(v,v)\}$, $w^{V'} = v$. Note that all C with $+C \in \Delta$ are assumed to be $\mathcal{L}_{PL}$-formulae. That is, the fact that $V,v \Vdash C$ is dependent only on the valuation $V^{V}(v)$ and not on any other worlds in $W^V$. Thus it is also the case that $V',w^{V'} \Vdash C$ for all C with $+C \in \Delta$. For the same reason, $V',w^{V'} \Vdash A \land \neg B$. Also note that, since $V'$ has precisely one world, $V',w^{V'} \Vdash \Diamond X$ iff $V',w^{V'} \Vdash X$. So $V',w^{V'} \Vdash \Diamond X \rightarrow X$ for any X. Thus $V' \Vdash \Theta$.

Hence $V',w^{V'} \Vdash \Gamma'$. By construction, $V',w^{V'} \Vdash \Box A$ and $V', w^{V'} \Vdash \neg\Box B$. So $V'$ is a countermodel to $\Gamma'\cup\{+A\} \models^{\mathbf{S5}} +B$, but this is true by induction (I1). Contradiction. Thus there is no such V. This shows (i.).

For (ii.), we relax our assumption so that A, B and the formulae in $\Gamma'$ may be arbitrary. By Lemma 4.3, we have a derivation $D''$ such that $\pi^{D'}[\Gamma'] \cup \Sigma^{D'}, +c_A \vdash^+_{D''} +c_B$ (writing $+c_A$ for $\pi^{D'}(+A)$ and same for B).

Note that since $\pi^{D'}$ maps everything to $\mathcal{L}_{PL}$-formulae, the elements of $\pi^{D'}[\Gamma'] \cup \Sigma^{D'}$ are as described in (i): $\pi^{D'}[\Gamma'] \cup \Sigma^{D'} = \Delta \cup \Theta$ with $\Theta$ being exactly all formulae added in clause (e.) in the construction of $\Sigma^{D'}$ (Definition 1). Thus we obtain $\pi^{D'}[\Gamma'], \Sigma^{D'} \models^{\mathbf{S5}} \Diamond c_B$ by the argument of (i). Note that the argument of (i) rests on the induction hypothesis, but this is still licit here since $D''$ does not contain more applications of (Weak Inference) than $D'$ (by Lemma 4.3).

Now let $\Sigma = (\pi^{D'})^{-1}[\Sigma^{D'}]$. Since $\mathbf{S5}$ is closed under uniform substitution, $\Gamma'\cup\Sigma \models^{\mathbf{S5}} \Diamond B$.
But $\Sigma$ contains only $\mathbf{S5}$-tautologies (by inspection of Definition 1). Hence $\Gamma' \models^{\mathbf{S5}} \Diamond B$. □

It is worth noting why the argument above does not work for the rules excluded from (Weak Inference), i.e. ($+\Diamond$E.) and ($\oplus\Diamond$E.). The reason is that translating these rules in the proof of Lemma 4.3 would require us to add $+(c_{\Diamond Z} \rightarrow c_Z)$ to $\Sigma^D$ in Definition 1. This, however, does not substitute to an $\mathbf{S5}$-tautology, so the final step in the soundness proof would fail.

4.2. Completeness

We now show that, modulo the translation $\tau$ defined above, $\mathsf{EML}$ is also complete with respect to $\mathbf{S5}$.

Theorem 4.4 (Completeness). Let $\Gamma$ be a set of $\mathcal{L}_{EML}$-formulae and $\varphi$ an $\mathcal{L}_{EML}$-formula. If $\tau[\Gamma] \models^{\mathbf{S5}} \tau(\varphi)$ then $\Gamma \vdash \varphi$.

This is shown by a model existence theorem. The construction of a canonical term model has to respect the difference between derivations that use $\Diamond$-Eliminations and those that do not. To this end, we need some additional definitions. We write $\vdash^*$ to denote provability in $\mathsf{EML}$ without the rules for $\Diamond$-Elimination. We say that a set $\Gamma$ is S-consistent if $\Gamma$ contains only strongly asserted formulae and $\Gamma \not\vdash^* \bot$. And we say that $\Gamma$ is S-inconsistent if it contains only strongly asserted formulae and $\Gamma \vdash^* \bot$. Note that there are inconsistent S-consistent sets, e.g. $\{+p, +\Diamond\neg p\}$ (deriving $\bot$ from this set requires a $\Diamond$-Elimination).

In the typical canonical construction, one takes maximally consistent sets of formulae to be the worlds. We will instead take maximally S-consistent sets of formulae. In the construction, we shall use the following technical lemmas.

Lemma 4.5. If $\Gamma \cup \{+A\}$ is S-inconsistent, then $\Gamma \vdash^* +\neg A$.

Proof. This is just another way to write ($+\neg$I.). □

Lemma 4.6. If $\Gamma$ contains only strongly asserted formulae and $\Gamma \vdash +((\bigwedge_{i < n} \neg B_i) \rightarrow \neg A)$, then $\Gamma \vdash +(\Diamond A\rightarrow \bigvee_{i < n}\Diamond B_i)$.

Proof. This follows from the fact that $\mathsf{EML}$ extends classical logic (Theorem 3.3) and proves all $\mathbf{S5}$ axioms (Theorem 3.5). □

Now we are ready to demonstrate a model existence result.

Theorem 4.7 (Model Existence). Let $\Gamma$ be a set of $\mathcal{L}_{EML}$-formulae. If $\Gamma$ is consistent, then there is an $\mathbf{S5}$-model M such that $M \models \tau[\Gamma]$.

Proof. Let $Cl^+(\Gamma)$ consist of all strongly asserted formulae in the closure of $\Gamma$ under derivability $\vdash$ in $\mathsf{EML}$. Concisely, $Cl^+(\Gamma) = \{+A \mid \Gamma \vdash +A\}$. Moreover, let $\mathcal{E} = \{\Delta \mid \Delta$ is a maximal S-consistent extension of $Cl^+(\Gamma)\}$ and define a model $M = \langle W,R,V\rangle$ as follows.
• $W = \mathcal{E}$.
• $wRv$ iff (for all $+A \in v$: $+\Diamond A \in w$).
• $V(w) = \{p \mid +p \in w\}$.

Now we show by induction on the complexity of sentences A that: $+A \in w$ iff $M,w \Vdash A$. The cases for atomic A and $A = B \land C$ are straightforward, so we only cover negation and the modal.

• If $+\neg A \in w$, then $+A \notin w$. By the induction hypothesis, $M, w \not\Vdash A$. Thus $M,w \Vdash \neg A$. Conversely, if $M,w \Vdash \neg A$, then $M,w \not\Vdash A$, so $+A\notin w$ by the induction hypothesis. Since w is a maximally S-consistent set, this means that $+A$ is S-inconsistent with w. By Lemma 4.5, $+\neg A \in w$.

• Suppose $+\Diamond A \in w$. We first show that $+A$ is S-consistent with $Cl^+(\Gamma)$. Towards a contradiction, assume $+A$ is S-inconsistent with $Cl^+(\Gamma)$. That is, $Cl^+(\Gamma) \vdash +\neg A$ by Lemma 4.5. Because $Cl^+(\Gamma)$ is closed under $\vdash$, this means that $+\neg\Diamond A \in Cl^+(\Gamma)$ by ($+\Box$I.). But then $+\Diamond A$ is S-inconsistent with $Cl^+(\Gamma)$, hence $+\Diamond A \notin w$. Contradiction.

This establishes that there is a world (i.e. a maximally S-consistent extension of $Cl^+(\Gamma)$) that contains $+A$. We now show that one such world v is accessible from w, i.e. that $wRv$.

Let $\{B_i \mid i \in \omega\}$ be the sentences such that $+\Diamond B_i \notin w$. We need a v such that $+A \in v$ and for all i, $+B_i \notin v$. Note that if $+\Diamond B_i \notin w$, then $+\Diamond B_i$ is S-inconsistent with w, so $+\neg\Diamond B_i \in w$. In particular also $+\neg B_i \in w$ since $Cl^+(\Gamma)$ contains Axiom T. Thus, $Cl^+(\Gamma) \cup \{+\neg B_i \mid i \in \omega\}$ is S-consistent, since it is a subset of w.

Now, if $Cl^+(\Gamma) \cup \{+\neg B_i \mid i \in \omega\} \cup \{+A\}$ is S-consistent, there is a v as needed. Towards a contradiction, assume this set is S-inconsistent. By Lemma 4.5, this means that $Cl^+(\Gamma) \cup \{+\neg B_i \mid i \in \omega\} \vdash^* +\neg A$. It follows by ($+\rightarrow$I.) that there is a finite set of $B_i$s (with all $i < n$, without loss of generality) such that $Cl^+(\Gamma) \vdash +((\bigwedge_{i<n}\neg B_i) \rightarrow \neg A)$. By Lemma 4.6, $\Gamma \vdash +(\Diamond A\rightarrow \bigvee_{i < n}\Diamond B_i)$. Since $+\Diamond A \in w$ this means that $+(\bigvee_{i < n}\Diamond B_i)\in w$. But we saw that for any $i<n$, $+\neg\Diamond B_i \in w$. Hence w is S-inconsistent. Contradiction.

Thus, there is a $v \in W$ with $+A \in v$ and $wRv$. By the induction hypothesis, $M,v \Vdash A$. Thus, $M,w \Vdash \Diamond A$.

Conversely, suppose $M,w \Vdash \Diamond A$.
Then there is a v, $wRv$, such that $M,v \Vdash A$. By the induction hypothesis, $+A \in v$. By definition of R, $+\Diamond A \in w$.

Let $w \in W$ be arbitrary. Without loss of generality, we can write $\Gamma$ with all formulae signed by $+$ (see the proof of Theorem 4.1). Since $\Gamma \subseteq w$, it follows that $M,w \Vdash \varphi$ for each $\varphi \in \tau[\Gamma]$.

It remains to show that $\langle W,R,V\rangle$ is an $\mathbf{S5}$ model. This follows from the fact that the $\mathbf{KT5}$ axioms are contained in $Cl^+(\Gamma)$. □

Intuitively, the reason why the worlds of the term model may denote inconsistent sets of formulae is as follows. The inference $+A \vdash +\Box A$ must be excluded when computing maximal consistent sets. That is, if we were taking consistent sets (instead of S-consistent sets) as the worlds of the term model, then whenever $+A \in w$ for some consistent w, it would also be the case that $+\Box A \in w$. But such sets are not useful as worlds in a canonical model, since they can only see themselves according to the definition of R in the canonical model construction. The notion of S-consistency takes care of this issue.

One may also wonder why the same argument does not work when we weaken the demand that $\Gamma$ be consistent in the statement of Theorem 4.7 to $\Gamma$ being merely S-consistent. The stronger assumption of consistency is used in the step for $+\Diamond A$ in the induction on the complexity of sentences in the proof of the theorem. For this step of the proof relies on the fact that $Cl^+(\Gamma)$ is closed under the derivability relation in the full $\mathsf{EML}$ calculus and in particular under ($+\Box$I.). But if we were to allow inconsistent S-consistent $\Gamma$, then closure under the full $\mathsf{EML}$ calculus would result in the trivial theory. Thus, only consistent $\Gamma$ have a canonical model (but the worlds in this canonical model may be inconsistent sets).

4.3. Some corollaries

The following is a noteworthy corollary of the completeness theorem.

Proposition 4.8. For any $\mathcal{L}_{ML}$-formula A and set of $\mathcal{L}_{ML}$-formulae $\Gamma$, $\{\Box B \mid B\in\Gamma\} \models^{\mathbf{S5}} \Box A$ iff $\{+B \mid B \in \Gamma\} \vdash +A$.

Proof. Immediate from Soundness and Completeness. □

That is, the $\mathsf{EML}$ logic of strong assertion is the logic that preserves $\mathbf{S5}$-validity. Schulz (2010, p. 389) noted the same result about Yalcin's (2007) informational consequence (IC). That is, $\Gamma \models^{\text{IC}} A$ iff $\{\Box B \mid B\in\Gamma\} \models^{\mathbf{S5}} \Box A$. Thus, we may conclude, the logic of strong assertion coincides with informational consequence.

There is a caveat, however. Yalcin's semantics includes a conditional $\rightarrow$ that is not a material conditional, but a version of a restricted strict conditional (Yalcin, 2007, p. 998). Clearly, then, Yalcin's informational consequence only preserves $\mathbf{S5}$-validity on the signature $\{\neg,\land,\Diamond\}$. Hence the $\mathsf{EML}$ logic of strong assertion only corresponds to the nonimplicative fragment of informational consequence. In either logic, however, the material conditional is definable from $\neg$ and $\land$.
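For illustration (the example is ours): since $\Box p \models^{\mathbf{S5}} \Box\Box p$, Proposition 4.8 yields $+p \vdash +\Box p$, i.e. epistemic strengthening, even though $p \not\models^{\mathbf{S5}} \Box p$:

$$ \begin{align*} \Box p \models^{\mathbf{S5}} \Box\Box p \quad\Longleftrightarrow\quad +p \vdash +\Box p, \qquad\text{whereas}\quad p \not\models^{\mathbf{S5}} \Box p. \end{align*} $$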
Now, since in $\mathbf{S5}$ tautological truths are validities, Proposition 4.8 entails that the strongly asserted theorems of $\mathsf{EML}$ are exactly the $\mathbf{S5}$-tautologies. This gives the converse direction of Theorem 3.5, establishing Theorem 4.9.

Theorem 4.9. $\models^{\mathbf{S5}} A$ iff $\vdash +A$.

From these results, we also obtain two corollaries about the logic of rejection in $\mathsf{EML}$.

Proposition 4.10. For any $\mathcal{L}_{ML}$-formula A and finite set of $\mathcal{L}_{ML}$-formulae $\Gamma$, $\{\Box B \mid B\in\Gamma\} \models^{\mathbf{S5}} \Box A$ iff $\ominus A\vdash \ominus\bigwedge_{B \in \Gamma} B$.

Proposition 4.11. For any $\mathcal{L}_{ML}$-formula A, $\Box A\models^{\mathbf{S5}} \bot$ iff $\vdash \ominus A$.

Proposition 4.11 mentions $\Box A$, whereas Theorem 4.9 does not. This difference is due to the fact that $\models^{\mathbf{S5}} A$ iff $\models^{\mathbf{S5}} \Box A$, whereas, in general, it is not the case that $A \models^{\mathbf{S5}} \bot$ iff $\Box A\models^{\mathbf{S5}} \bot$. A counterexample to the latter is obtained by letting A be $p \land \Diamond\neg p$.
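To verify the counterexample (the computation is ours): $p \land \Diamond\neg p$ is $\mathbf{S5}$-satisfiable, for instance at a world where p holds while some accessible world makes p false, but its necessitation is not:

$$ \begin{align*} p \land \Diamond\neg p \not\models^{\mathbf{S5}} \bot, \qquad\text{whereas}\qquad \Box(p \land \Diamond\neg p) \models^{\mathbf{S5}} \bot, \end{align*} $$

since $\Box(p \land \Diamond\neg p)$ entails both $\Box p$ and, via $\Box\Diamond\neg p$ and reflexivity, $\Diamond\neg p$, and these are jointly unsatisfiable.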
if $\emptyset \vdash^{**} +A$, then $\emptyset \vdash^{**} +\Box A$.

The application of ($\oplus\Diamond$E.*) is legitimate here, since the (SR$_2$) derivation uses no premisses or assumptions other than the one being discharged and $+A$ (which is a theorem by assumption).

Now, since $\Gamma \models^{\mathbf{S5}} A$ and $\mathbf{S5}$ is complete with respect to its model theory, there is a natural deduction proof of A that requires only the premisses $\Gamma$, the $\mathbf{S5}$ axioms, the Necessitation rule and modus ponens. By the above, this proof can be carried out in $\vdash^{**}$ to derive $+A$ from $\{+B \mid B \in \Gamma\}$. This concludes the right-to-left direction.

For the left-to-right direction, the proof of Proposition 4.12 works, but the following step is nontrivial:

(*) Assume that $\{+B \mid B \in \Gamma\} \vdash^{**} +A$. By ($+\rightarrow$I.), this means that $\vdash^{**} +\left(\left(\bigwedge_{B\in\Gamma} B\right) \rightarrow A\right)$.

It is not clear that ($+\rightarrow$I.) can be applied here, since under $\vdash^{**}$ one may apply the rule ($\oplus\Diamond$E.*), and we have not established that this rule is permitted in ($+\rightarrow$I.). To see that the step (*) is nonetheless correct, note that any occurrence of ($\oplus\Diamond$E.*) can only be in an (SR$_2$) subderivation that establishes $+X$ for some sentence X. Let R be the set of all $+X$ that are established in this way anywhere in the proof. Then the following version of (*) is clearly correct, as all subderivations using ($\oplus\Diamond$E.*) can be replaced by a premiss from R.

Assume that $\{+B \mid B \in \Gamma\} \vdash^{**} +A$. By ($+\rightarrow$I.), this means that $R \vdash^{**} +\left(\left(\bigwedge_{B\in\Gamma} B\right) \rightarrow A\right)$.

But all members of R are theorems under $\vdash^{**}$, since they can be established by a Smileian reductio that does not require any side premisses. Thus the step (*) is correct. □

5 Yalcin sentences and their generalizations

5.1 Yalcin sentences

Yalcin (2007) famously observed that sentences of the form p and it might not be that p sound bad even when occurring in certain embedded environments, such as suppositions, in which Moore-paradoxical sentences (p and I don't believe that p) sound fine. The following proof in $\mathsf{EML}$ shows $+(p \land \Diamond\neg p)$ to be absurd.
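In outline, such a derivation runs as follows (a sketch using the rules described in §3):
$$ \begin{align*} &+(p \land \Diamond\neg p) && \text{assumption}\\ &+p, \ +\Diamond\neg p && \text{conjunction elimination}\\ &\oplus\neg p && \text{from } +\Diamond\neg p \text{ by } \Diamond\text{-Elimination}\\ &\ominus p && \text{from } \oplus\neg p \text{ by the negation rules}\\ &\bot && \text{from } +p \text{ and } \ominus p \text{ by (Rejection)} \end{align*} $$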
This proof shows that uttering the sentence p and it might not be that p immediately commits one to having incompatible attitudes (namely, assent to p and dissent from p), which is absurd. Thus it also explains why suppose that p and might not p sounds odd, as to suppose p and it might not be that p is to suppose something manifestly absurd. This is the argument we give in Incurvati & Schlöder (2019), but we can clarify why precisely it is absurd to suppose p and it might not be that p.

To formalize supposition, we add a new primitive force-marker $\mathcal{S}$ to our language. In English, supposition refers both to an attitude and to its expression (Green, 2000, pp. 377–378). That is, the speech act of supposing that A expresses the attitude of supposing that A. Accordingly, $\mathcal{S}A$ stands for the speech act of supposing A, performed through locutions such as suppose that A, and expresses the attitude of supposing A.

Following Stalnaker (2014, pp. 150–151), we take the supposition of A to consist in a proposal to add A to the common ground, but to do so temporarily. In supposing that A, one is not committing to A, but is probing what happens if one were to commit to A. That is, one is checking what the consequences would be of adding A to the common ground. For this process to work as desired, the internal logic of supposition must be the same as the logic of strong assertion. This sanctions the coordination principle ($\mathcal{S}$-Inference), which states that the suppositional consequences of a suppositional context mirror the strongly assertoric consequences of the corresponding strongly assertoric context.
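Schematically, and as a gloss on this description rather than an official formulation of the rule, ($\mathcal{S}$-Inference) licenses transitions of the following form:
$$ \begin{align*} \text{if } +A \vdash +B, \text{ then } \mathcal{S}A \vdash \mathcal{S}B; \qquad \text{if } +A \vdash \bot, \text{ then } \mathcal{S}A \vdash \bot. \end{align*} $$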
This coordination principle immediately implies that suppose p and might not p is absurd. For, as shown above, $+(p\land\Diamond\neg p)$ entails incompatible attitudes, which is absurd. By ($\mathcal{S}$-Inference), this means that supposing p and might not p is absurd as well, i.e. one derives $\bot$ from $\mathcal{S}(p\land\Diamond\neg p)$.

The absurdity of suppose p and might not p does not quite explain its infelicity, since not all absurd suppositions sound bad. One may felicitously suppose certain logical contradictions. For instance, someone not familiar with the derivability of Peirce's Law in classical logic may felicitously suppose its negation. However, seeing that the negation of Peirce's Law is absurd requires a complex argument. By contrast, anyone grasping the meaning of and and not will immediately recognize the absurdity of, say, p and not p. This explains why p and not p sounds bad and continues to do so in embedded contexts. The same holds for p and might not p: its absurdity can be immediately inferred by applying the meaning-conferring rules of and, not and might. This absurdity is therefore manifest to anyone who grasps the meaning of these expressions, which explains why suppose p and might not p is infelicitous.

In addition to explaining the infelicity of Yalcin sentences under suppose, our account has the resources to explain why Moore sentences sound bad in ordinary contexts but cease to do so in suppositional ones. For while strongly asserting p and I do not believe that p is improper, it is not absurd in the sense of $+(p \land \neg Bp)$ entailing $\bot$. In our view, uttering a Moore sentence is infelicitous because it violates one of the preparatory conditions of strong assertion, e.g. that one should know or believe what one strongly asserts. But such preparatory conditions do not factor into the commitments undertaken by a strong assertion. Thus, the violation of these preparatory conditions does not proof-theoretically reduce to the speaker having both strongly asserted and rejected the same proposition. Hence, suppose that p and I do not believe that p is not predicted to be absurd. Since supposition and strong assertion have different preparatory conditions—in particular, one need not believe what one supposes—suppose that p and I do not believe that p is not predicted to be improper either.

The rule ($\mathcal{S}$-Inference) is sufficient to explain the relevant data about Yalcin sentences, but to give a complete account of supposition further rules may need to be added. For our present project, there are more immediate concerns.

5.2 Generalized Yalcin sentences

Paolo Santorio (2017) observed that sentences like (3) seem to sound as bad as the original Yalcin sentences, and continue to do so when embedded under suppose.

(3) (If p, then it might be that q) and (if p, then $\neg$q).

We have seen that there are good reasons to treat Yalcin sentences as semantically contradictory. But then, Santorio argues, one should also treat sentences like (3) as semantically contradictory. However, it is difficult to find a conditional $>$ and a consequence relation $\models$ such that $(p>\Diamond q) \land (p>\neg q) \models \bot$. This is because if $>$ can be vacuous, then the incompatibility of $\Diamond q$ and $\neg q$ does not force us to conclude $\bot$, but only that p satisfies the vacuity condition for $>$.

Santorio (2017) succeeds in defining such a $>$ and $\models$, but we want to offer a different diagnosis of the problem posed by (3), one which does not require adopting a revised notion of consequence. Observe that an utterance of (4), which does not contain an epistemic modal, already sounds odd.

(4) (If p, then q) and (if p, then $\neg$q).

The reason why (3) and (4) both sound odd seems to be similar. To wit, both utterances appear to prompt one to suppose that p, i.e. to consider what follows should p be the case—but to do so is absurd, given the information the utterances contain. That is, we will argue that (3) and (4) sound bad for the same reason that (5a) and (5b) sound bad—their antecedents cannot be supposed.

(5) a. If (p and not p), then q.
b. If (p and it might not be p), then q.

We propose to explain the infelicitousness of (3), (4) and (5) by using a notion of supposability based on our characterization of supposition $\mathcal{S}$. We have followed Stalnaker in taking the supposition of A to be a proposal to temporarily update the common ground with A. The supposability of A in a given context is the possibility of supposing A in that context.

Definition 2. Let A and C be $\mathcal{L}_{ML}$-formulae. We say that A is supposable in (context) C if $\mathcal{S}(C\land A) \not\vdash \bot$.

As we explain below, a strong assertion of an indicative conditional pragmatically presupposes the supposability of the conditional's antecedent in a context C that corresponds to the current common ground updated with the strong assertion's content. So, in the case of (3), the supposability of p is to be checked with respect to a context C that contains at least $(p>\neg q) \land (p>\Diamond q)$. But we have that $\mathcal{S}(p \land (p>\neg q) \land (p>\Diamond q)) \vdash \bot$ (assuming that $>$ satisfies modus ponens), so the presupposition fails.
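In sketch form, and assuming only that $>$ satisfies modus ponens, the reasoning behind this claim runs as follows:
$$ \begin{align*} &+(p \land (p>\neg q) \land (p>\Diamond q)) && \text{assumption}\\ &+p, \ +(p>\neg q), \ +(p>\Diamond q) && \text{conjunction elimination}\\ &+\neg q, \ +\Diamond q && \text{modus ponens for } >\\ &\bot && \text{as in the Yalcin derivation above} \end{align*} $$
By the mirroring of suppositional and strongly assertoric consequence, it follows that $\mathcal{S}(p \land (p>\neg q) \land (p>\Diamond q)) \vdash \bot$, so p is not supposable in C in the sense of Definition 2.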
Why should the indicative conditional have such a presupposition? Following Stalnaker (1978), the pragmatic presuppositions of a strong assertion include all the information that can be inferred from the performance of the strong assertion itself. In particular, a strong assertion presupposes that the context is such that its essential effect (i) changes the context in a nontrivial and well-defined way and (ii) everyone in the conversation can compute this change. Now, when one proposes to update the common ground with a conditional, one proposes to change it in such a way that if the antecedent is added to the common ground, its consequent should be added too. Everyone must be able to compute what this change amounts to, as this is what they base their decision to accept or reject the update proposal on. Thus, everyone must be able to consider the common ground updated with the conditional and then temporarily (for the purpose of deliberation) add the conditional's antecedent and arrive at a well-defined result. This may be seen as a discursive analogue of the Ramsey test. Hence, strong assertions of conditional content pragmatically presuppose that the conditional's antecedent is supposable in the context of the current common ground updated with the assertion's content. But this presupposition cannot be met for the conditionals (3), (4) and (5). Many have claimed that conditionals presuppose, in some sense, that their antecedents are possible (Gillies, 2010; Mandelkern & Romoli, 2017; Crespo, Karawani, & Veltman, 2018). Our argument shows that supposability is the right way to specify what kind of possibility should be meant here, at least within the Stalnakerian framework.

One might reply that our pragmatic explanation is not general enough. For Santorio's generalized Yalcin sentence (3) also sounds bad when the conditionals it contains are read as subjunctives. And, the reply goes, the assertion of a subjunctive conditional does not have the supposability presupposition associated with the assertion of its indicative counterpart. While indicative conditionals change the common ground in a way that can be evaluated by provisionally updating the common ground with their antecedents, subjunctive conditionals change the common ground in a way that can be evaluated by provisionally revising the common ground with their antecedents (Stalnaker, 1968). Thus, the strong assertion of a subjunctive conditional does not presuppose that its antecedent be supposable in the common ground updated with the strong assertion's content, since some of this content might be revised in order to suppose the antecedent.

The reply is unsuccessful. Although the assertion of a subjunctive does not have the same supposability presupposition as the assertion of the corresponding indicative, it does have a supposability presupposition. And this presupposition cannot be met when Santorio's (3) is read as a subjunctive conditional. In particular, since not all information in the common ground is up for revision when considering a subjunctive antecedent, the assertion of a subjunctive conditional presupposes that its antecedent be supposable in C, where C is the nonrevisable part of the common ground updated with the conditional. Now consider the strong assertion (if it were p, then q) and (if it were p, then not q). Clearly, p is not supposable in the context $C'$ that results from updating C with the strong assertion, since $\mathcal{S}(C' \land p) \vdash \mathcal{S}(p\land\neg p)$, which entails $\bot$. The same goes for (if it were p, then q) and (if it were p, then it might be that not q): p is not supposable in the context $C'$ that results from updating C with this strong assertion, since $\mathcal{S}(p\land C') \vdash \mathcal{S}(p\land\Diamond\neg p)$, which entails $\bot$.

One may wonder how this pragmatic explanation of the infelicity of generalized Yalcin sentences can account for their infelicity when embedded under suppose. After all, the special problem raised by Yalcin sentences is that, unlike Moore sentences, they continue to sound bad under suppose and similar environments.
This, the usual story goes, precludes a pragmatic explanation of their infelicity similar to the one given for Moore sentences, since pragmatic inferences are suspended under suppose.

This story overplays the power of suppose to suspend pragmatic inferences: although the pragmatic inferences used to explain the infelicity of Moore sentences are suspended under suppose, it does not follow that all such inferences are. The pragmatic inference used to explain the infelicity of Moore sentences is suspended under supposition because, while it seems a preparatory condition for strong assertion that one ought to believe what one strongly asserts, there is no analogous preparatory condition for supposition. By contrast, the pragmatic inference we outlined in the previous paragraphs clearly goes through under supposition. In particular, just as the actual update of the common ground with a conditional is only well-defined if its antecedent is supposable in the right context C, so the temporary update with the same conditional is only well-defined if its antecedent is supposable in C. That is, this supposability requirement—unlike the requirement that one ought to believe the content of the speech act—is shared by strong assertions and suppositions of conditionals alike.

There are further generalized Yalcin sentences that we can explain using this suppositional strategy. Matthew Mandelkern (2019) observed that sentences like (6a) and (6b) seem to sound as bad as the original Yalcin sentences.

(6) #a. (p and it might not be that p) or (q and it might not be that q)
#b. might (p and it might not be that p)

However, on our account such sentences are not absurd: there are models of $\mathsf{EML}$ in which (7a) and (7b) hold, and hence neither sentence entails $\bot$ (on the assumption that neither A nor B is a classical contradiction).

(7) a. $+((A \land \Diamond\neg A) \lor (B \land \Diamond\neg B))$.
b. $+\Diamond(A \land \Diamond\neg A)$.

Similarly to Santorio's cases, Mandelkern's cases can only be accounted for semantically by making substantial revisions to classical logic. Any attempt to add further rules to $\mathsf{EML}$ so that (7a) and (7b) entail $\bot$ would trivialize the epistemic possibility modal.

Theorem 5.1. Suppose that one of the following is the case.

(a) $+((A\land\Diamond\neg A)\lor(B\land\Diamond\neg B)) \vdash \bot$ for all $A, B$.
(b) $+\Diamond(A\land\Diamond\neg A) \vdash \bot$ for all A.

Then, for any A, $+\Diamond A \vdash +A$.

Proof. Suppose that (a) is the case, letting A be p and B be $\neg p$. That is, we have that $+((p \land \Diamond\neg p) \lor (\neg p \land \Diamond p)) \vdash \bot$. By Smileian reductio, this means that $\vdash \ominus((p \land \Diamond\neg p) \lor (\neg p \land \Diamond p))$, which by ($\oplus\neg$I.) means that $\vdash \oplus\neg((p \land \Diamond\neg p) \lor (\neg p \land \Diamond p))$.

By De Morgan, $\neg((p \land \Diamond\neg p) \lor (\neg p \land \Diamond p))$ can be written as $(\neg p \lor \neg\Diamond\neg p) \land (p \lor \neg\Diamond p)$, which classically entails (and, given Axiom T, is equivalent to) $\neg\Diamond\neg p \lor \neg\Diamond p$. This sentence can be rewritten as $\Diamond p \rightarrow \Box p$. Hence, (a) implies that $\vdash \oplus(\Diamond p \rightarrow \Box p)$.
But then we can derive $+p$ from $+\Diamond p$ as follows.

For the second part, suppose that (b) is the case, letting A be $\neg p$. That is, we have that $+\Diamond(\neg p \land \Diamond p) \vdash \bot$. By Smileian reductio, this means that $\vdash \ominus\Diamond(\neg p \land \Diamond p)$. By ($\oplus\neg$I.), this is equivalent to $\vdash \oplus\neg\Diamond(\neg p \land \Diamond p)$. By Lemma 3.7, it follows that $\vdash +\neg(\neg p \land \Diamond p)$, which by Classicality is equivalent to $\vdash +(\Diamond p \rightarrow p)$. □

One might take issue with Lemma 3.7 here, but inspection of the proof of the lemma shows that it rests on well-motivated assumptions. Challenging the application of (Weak Inference) in the proof of (a) would not help, since the proof of (b) does not require this principle. Thus, we cannot semantically account for these generalized cases without giving up on classical logic. Mandelkern's (2019) account avoids these results by denying the universal validity of the relevant classically valid transformations (e.g. the application of De Morgan in the above proof).

Instead of making deep revisions to our semantics, we again provide a pragmatic explanation of the relevant data. We explain the infelicitousness of (6a) by taking the strong assertion of A or B to presuppose the supposability of A and the supposability of B. And we explain the infelicitousness of (6b) by taking the strong assertion of might A to pragmatically presuppose the supposability of A. The pragmatic inferences from A or B and might A to supposability are evinced by the felicitousness of sequences like the following.

(8) a. It is either p or q. So suppose that it is in fact the case that p.
b. It might be that p. So suppose that it is in fact the case that p.

At this point, one might suggest taking the inferences from A or B and might A to supposability to be not pragmatic but semantic. In particular, one might identify the meaning of might with supposability and the meaning of A or B with might A and might B. However, supposition can be counterfactual, so having asserted not A one may go on to suppose that A, possibly just for the sake of argument. By contrast, having asserted not A, it is a mistake to also assert might A. Thus might cannot be identified with supposability.

Pragmatically explaining the data in (6) also has an empirical advantage over Mandelkern's (2019) own semantic approach. He claims that the infelicitousness of (6a) is explained by the fact that Yalcin sentences are classical contradictions and that disjunctions of classical contradictions are themselves classical contradictions. But now consider (9).

(9) (p and it might not be that p) or q.

If q is true, then according to the usual truth-functional meaning of disjunction (which Mandelkern does not dispute), (9) is true, since it has a true disjunct. However, (9) sounds odd. Thus Mandelkern would seem to need some further mechanism to explain the oddity of (9), e.g. our pragmatic presupposition or another principle entailing that disjunctions with a classically contradictory disjunct generally sound bad. But then, any such mechanism would also account for the infelicitousness of (6).
Hence, the more parsimonious approach is to stick with classical consequence and explain both (6) and (9) pragmatically, as we have done.

6 Consequence, credence and commitment

Moritz Schulz (2010) presented an objection to informational consequence (and hence to $\mathsf{EML}$'s notion of consequence) as an account of logical consequence. Schulz argues that, in situations of uncertainty, it may be rational to assign a high credence to A but a low credence to it must be the case that A. This is because one's evidence for A need not rule out not A and hence need not be evidence for must A. Consider, for instance, a situation in which one sees that the lights are on. On the basis of this evidence, one might rationally assign a high credence to (10a). However, says Schulz, one cannot rule out that they forgot to switch off the lights. Hence one must assign a low credence to (10b).

(10) a. They are at home.
b. They must be at home.

However, in $\mathsf{EML}$, $+\Box A$ is derivable from $+A$ and hence (10b) follows from (10a), at least assuming the obvious formalization of these sentences. Thus, Schulz continues, $\mathsf{EML}$-consequence, like informational consequence, clearly violates a reasonable constraint on logical consequence:

If a rational and logically omniscient subject's credence function P is such that $P(\phi) = t$, and $\phi \models \psi$, then $P(\psi) \geq t$, i.e. if we assign to a statement $\phi$ subjective probability t, and we are certain that $\psi$ follows logically from $\phi$, then we should assign to $\psi$ a subjective probability at least as high as t. After all, we know that the former cannot hold without the latter. (Schulz, 2010, p. 388)

Schulz concludes that epistemic strengthening (the inference from A to must A) is invalid.

More recently, Justin Bledin and Tamar Lando (2018) have considered cases similar to (10). One goes as follows. It is the run-up to the 1980 US elections and, according to the polls, Reagan will win by a landslide, Carter will come second and Anderson will be third by a wide margin. On the basis of this evidence, you come to believe (11a). But given that you cannot rule out that Carter will win, it would seem wrong for you to believe (11b) on the basis of the same evidence.

(11) a. Carter will not win the election.
b. It is not the case that Carter might win the election.

However, in $\mathsf{EML}$, $+\neg\Diamond A$ is derivable from $+\neg A$—an inference step also known as Łukasiewicz's principle. Unlike Schulz, however, Bledin and Lando do not conclude that the relevant inference should be regarded as invalid. Rather, they say that philosophers face a choice between rejecting Łukasiewicz's principle, rejecting Justification with Risk—i.e. the claim that there are cases in which one can justifiably believe $\neg A$ but not $\neg\Diamond A$—and rejecting Single-Premiss Closure—i.e. the principle that if one is justified in believing $\phi$ and one comes to believe $\psi$ by competently deducing $\psi$ from $\phi$, then one is justified in believing $\psi$ (a nonprobabilistic variant of Schulz's constraint).

Bledin and Lando present some arguments to the effect that Łukasiewicz's principle and Justification with Risk hold. A philosopher persuaded by those arguments, they conclude, needs to give up Single-Premiss Closure.
However, Bledin and Lando do not offer any positive reason for thinking that Single-Premiss Closure does not hold.

There is a familiar strategy for dealing with this sort of case. According to this strategy, we should give up Justification with Risk (or its equivalent for epistemic strengthening) and explain away the troublesome cases by appealing to the idea that, when the possibility of error is made salient, this brings about a change of context (DeRose, 1991) or of the standards of precision in play (Moss, 2019). Thus, in Schulz's case (10), one should assign a high credence to they are at home because one sees that the lights are on. When the possibility that they forgot to switch off the lights is raised, one should not assign a high credence to they must be at home. But then, in those circumstances, one should not assign a high credence to they are at home either.

However, under Schulz's assumptions that epistemic modals can be assigned probabilities and that these probabilities obey the classical probability calculus, one can show that this response is inadequate. In particular, it follows from these assumptions that Schulz's constraint on logical consequence is incompatible with any logic of epistemic modality which treats not p and it might be that p as a contradiction. For if $\neg p \land \Diamond p \models \bot$, then according to Schulz's constraint $P(\neg p \land \Diamond p) \leq P(\bot) = 0$. Since probabilities are nonnegative, $P(\neg p \land \Diamond p) = 0$. Thus, if one believes that it might be that p, i.e. $P(\Diamond p) = 1$, then $P(\neg\Diamond p) = 0$ and so, by the law of total probability, $P(\neg p) = P(\neg p \land \Diamond p) + P(\neg p \land \neg\Diamond p) = 0$; hence $P(p) = 1$.

Schulz himself (2010, §3) favors a pragmatic explanation of Yalcin sentences. According to this explanation, when you strongly assert A, you rule out all not-A worlds. Thus, in it is raining and it might not be raining, the domain of the epistemic modal in the second conjunct is restricted to rain-worlds. So there is no way, once you have strongly asserted that it is raining, that you can also strongly assert that it might not be. However, this explanation predicts that it might not be raining and it is raining should be felicitous, which does not appear to be the case.

So what are the options for the defender of informational consequence, who wants to insist that Yalcin sentences are, indeed, contradictions? She could try to develop a nonclassical probability calculus (see Williams, 2016 for a survey). Or she could reject Schulz's constraint on logical consequence and hence, arguably, also Single-Premiss Closure. We explore the former option in ongoing work. Here we exploit features of $\mathsf{EML}$ and our proof-theoretic approach to show that the prospects for the latter option are brighter than they might appear at first sight.

Bledin & Lando (2018, p. 19) argue that, even if they reject the principle of Single-Premiss Closure, defenders of informational consequence can still accept the principle restricted to nonmodal formulae. However, there are inferences involving epistemic modals that are compatible with Schulz's constraint and hence satisfy Single-Premiss Closure. A case in point is epistemic weakening (that is, the inference from must A to A). Epistemic weakening satisfies Schulz's constraint: the credence rationally assigned to A can never be lower than the credence assigned to must A. For, if Schulz is correct, one's evidence for must A is evidence that rules out not A, and evidence that rules out not A is also evidence for A.
Thus, the portion of informational consequence that respects Schulz's constraint is not exhausted by its nonmodal fragment.

Our proof-theoretic account allows us to isolate an evidence-preserving fragment of informational consequence that validates more inferences than merely the nonmodal ones. This is the fragment $\vdash^*$ of $\mathsf{EML}$-consequence that one obtains by removing the $\Diamond$-Elimination rules from $\mathsf{EML}$. By inspecting the proofs of epistemic strengthening and epistemic weakening, one finds that the former is not derivable in $\vdash^*$, whereas the latter is. The rules for $\Diamond$-Elimination are also required to derive that $+(p\land\Diamond\neg p) \vdash \bot$, which we demonstrated to be incompatible with Schulz's constraint as well (again, assuming the classical probability calculus).

As we note in Incurvati & Schlöder (2019, pp. 760–762), the use of $\Diamond$-Elimination means that there may be a loss of specificity when going from the premisses of an inference to the conclusion. That is to say, we can go from specific premisses to an unspecific conclusion, which may result in a loss of evidence. This is the reason why we restricted the subderivation in (Weak Inference) to cases in which the $\Diamond$-Elimination rules are not applied. This suggests that while $\mathsf{EML}$ does not preserve evidence, the fragment $\vdash^*$ does.

But if inference in $\mathsf{EML}$ does not preserve evidence, why think that it is a suitable notion of inference at all? We contend that even if inference in $\mathsf{EML}$ does not preserve evidence, it does preserve commitment. As mentioned in §2, a derivation in $\mathsf{EML}$ computes which attitudes someone is committed to in virtue of their having the attitudes in the premisses. In speech act terms, a derivation in $\mathsf{EML}$ computes what someone is committed to accepting into the common ground, given the public stances they have taken on whether certain propositions are to be admitted into the common ground. This is compatible with consequence in $\mathsf{EML}$ not preserving evidence and hence with the failure of Single-Premiss Closure.

7 Conclusion and further work

We have described a general framework for the proof-theoretic study of epistemic modality. We have restricted our presentation to the interaction of epistemic modality with the classical Boolean connectives. In ongoing work, we apply the strategy of placing principled proof-theoretic constraints on the rules governing epistemic modal operators to problems going beyond these connectives.

Several authors have noted that principles like classical reductio fail in the presence of epistemic vocabulary (e.g. Bledin, 2014): it might not be raining and it is raining sounds contradictory, but it is mistaken to apply classical reductio to derive it is not raining from it might not be raining. In $\mathsf{EML}$, reductio is only applicable when certain proof-theoretic restrictions are met, which gives one the tools to account for these problems. But the problems of epistemic modality are not confined to propositional logic. In §5, we outlined some possible extensions of $\mathsf{EML}$ to the study of indicative conditionals and the concept of supposition. In addition, there are well-known puzzles regarding the interaction of quantifiers and epistemic modals (Aloni, 2001, 2005), recently brought to renewed attention by Ninan (2018).
Naïve applications of first-order logic to epistemic modality license defective inferences such as every card might be a losing card; therefore, the winning card might be a losing card. $\mathsf{EML}$ opens up the strategy of blocking such inferences via proof-theoretic restrictions on the use of epistemic modals under quantification.

s1sIntroductionsModal expressions such as might and must can be used epistemically, for instance when one says that the Twin Prime Conjecture might be true and it might be false. When used in this way, they are known as epistemic modals and have been widely discussed in the recent literature, both in formal semantics and in philosophy. Challenges have been issued to the classic contextualist approach (Kratzer, 1977, 2012; DeRose, 1991) which aim to motivate relativist (MacFarlane, 2014), dynamic (Veltman, 1996; Willer, 2013), probabilistic (Swanson, 2006; Moss, 2015) and expressivist (Yalcin, 2007; Charlow, 2015) accounts.sDespite the recent flurry of interest, however, no general logical framework is available for reasoning involving epistemic modality. In this paper, we present such a framework. Technically, the framework is obtained by extending bilateral systems—that is, systems in which formulae are decorated with signs for assertion and rejection (Rumfitt, 2000)—with a marker for the speech act of weak assertion (Incurvati & Schlöder, 2019). Philosophically, the framework is developed from an inferentialist perspective. It respects well-known constraints on the acceptability of inference rules and hence provides the basis for an account of epistemic modality according to which the meanings of epistemic modals is given by their inferential role.sAlthough the logical framework is motivated from an inferentialist standpoint, it has the resources to account for several phenomena surrounding epistemic modality that have featured prominently in the recent literature. We focus here on Yalcin sentences (Yalcin, 2007), i.e. sentences like It is raining and it might not be raining, and related phenomena. We show that, when supplemented with an inferentialist account of supposition, the logical framework predicts that Yalcin sentences are infelicitous and remain so under supposition. By appealing to the notion of supposability, our explanation may be extended to generalizations of Yalcin sentences due to Santorio (2017) and Mandelkern (2019).s1sWe begin by making a case for developing a logical framework for reasoning about epistemic modality using the tools of proof-theoretic semantics (§2). We then present the logical framework, which we dub epistemic multilateral logic (§3). We give a model theory for epistemic multilateral logic and prove that the logic is sound and complete with respect to this model theory (§4). We apply our framework, suitably extended with an account of supposition, to Yalcin sentences and generalized Yalcin sentences (§5). One remarkable feature of epistemic multilateral logic is that it extends classical logic, unlike several current approaches to epistemic modality. Indeed, epistemic multilateral logic extends the modal logic s$\mathbf {S5}$s. Issues for systems dealing with epistemic modality that extend s$\mathbf {S5}$shave been recently discussed by Schulz (2010) and Bledin & Lando (2018). We argue that these issues can be dealt with by distinguishing between two notions of proof-theoretic consequence in epistemic multilateral logic (§6). We conclude by outlining some directions for future work (§7).s2sInferentialism, bilateralism and multilateralismsReasoning involving epistemic modality is widespread. For instance, suppose that one believes that Jane must be at the party. Then, it seems, one can conclude that Jane is at the party (von Fintel & Gillies, 2010). This is an instance of an inference pattern one may call epistemic weakening. 
But another commonly recognized inference allows us to conclude Jane must be at the party from Jane is at the party. It has been argued (see, e.g., Schulz, 2010) that this latter inference, an instance of epistemic strengthening, is importantly different in its logical status from epistemic weakening. Our proof theory will enable us to trace the difference precisely. In particular, it will allow us to distinguish between derivations that clearly preserve evidence (such as those formalizing epistemic weakening) and those for which this is more controversial but can nonetheless be taken to preserve commitment (such as those formalizing epistemic strengthening). This is an advantage of a proof-theoretic approach: by inspecting the rules featuring in a given derivation, one can determine which style of reasoning it employs and hence the epistemic properties it preserves. Current accounts of epistemic modality are presented in a model-theoretic framework, and a proof theory for epistemic modality is at present not available.sWe aim to repair the situation. We will develop a proof theory for epistemic modality from an inferentialist perspective. Inferentialism is the view that the meaning of an expression is given by its role in inferences. In the case of a logical expression, it has often been contended that its inferential role can be captured by its introduction and elimination rules in a natural deduction system. Thus, logical inferentialism is the view that the meaning of logical constants is given by such rules.sLogical inferentialism faces the problem that not every pair of introduction and elimination rules seems to confer a coherent meaning on the logical constant involved. Arthur Prior (1960) first raised the problem by exhibiting the connective s$\mathsf {tonk}$s.sAdding s$\mathsf {tonk}$sto a logical system makes it trivial: any sentence follows from any sentence whatsoever. Prior concluded that this sinks logical inferentialism. Inferentialists reacted by formulating criteria for the admissibility of inference rules which would rule out problematic constants such as s$\mathsf {tonk}$s. One prominent such criterion is harmony, the requirement that, for any given constant, there should be a certain balance between its introduction and elimination rules. We will present a natural deduction system for epistemic modality which complies with the harmony constraint and other standard proof-theoretic constraints.sIn many cases, for instance when dealing with modal vocabulary, it is not straightforward to satisfy these constraints, witness the search for suitable natural deduction rules for the modal logics s$\mathbf {S4}$sand s$\mathbf {S5}$s(see, e.g., Poggiolesi & Restall, 2012; Read, 2015). Rather than being a hindrance, however, proof-theoretic constraints serve to narrow down the range of options available when developing a system for epistemic modality. This represents a further advantage of the proof-theoretic approach over the model-theoretic one, which is instead presented with several competing candidates, all of which appear plausible given the linguistic data.sThe situation here should be familiar from the debate over the underdetermination of theory by data in the philosophy of science. Model-theoretic semantics is typically pursued in a bottom-up fashion by surveying a wide range of data and attempting to define truth-conditions as appropriate generalizations that account for these data. 
Without any further constraints, it is wildly underdetermined what these truth-conditions should be. Proof-theoretic semantics, by contrast, proceeds in a top-down manner by developing a theory which satisfies a number of theoretical constraints (such as harmony) and testing whether the theory matches the data. We adopt the methodology of proof-theoretic semantics here. In §3, we develop an account of epistemic modality which satisfies the proof-theoretic constraints but is otherwise motivated only by simple data involving epistemic modals. We test it against more involved data in §5 and §6.sBesides seemingly ruling out s$\mathsf {tonk}$s, however, proof-theoretic constraints appear to sanction an intuitionistic logic, since the rules for classical negation in standard natural deduction systems do not seem to be harmonious.s2sHowever, this is so because standard natural deduction systems only deal with asserted content: so-called bilateral systems—systems in which rejected content is also countenanced—are classical and satisfy the harmony constraint. Taking the rules of these systems to be meaning-determining leads to bilateralism, the view that the meaning of the logical constants is given by conditions on assertion and rejection. Here, assertion and rejection are speech acts expressing attitudes: assertion expresses assent, rejection expresses dissent. Importantly, rejection is, contra Frege (1919), distinct from, and not reducible to, the assertion of a negation.sNonetheless, in standard bilateral systems (Smiley, 1996; Rumfitt, 2000), rejection and negative assertion have the same inference potential. That is, one can pass from the rejection of a proposition to its negative assertion, and vice versa. This is clearly encapsulated by the rules for negation of standard bilateral systems (see, e.g., Rumfitt, 2000), using s$+\hspace {-0.1em}$sand s$\ominus \hspace {-0.08em}$sas, respectively, markers for assertion and rejection.sAlthough these rules are harmonious, they appear not to match certain important linguistic data about rejection. For the presence of these rules means that in standard bilateral systems rejection is strong: from the rejection of p it is always possible to infer the assertion of not p. However, linguistic evidence suggests that this is not always possible: rejections can be weak. Consider, for instance, the following dialogue, based on Grice (1991): s(1)sAlice: X or Y will win the election.sBob: No, X or Y or Z will win.sBob is here rejecting what Alice said: he is expressing dissent from X or Y will win the election. But Bob’s rejection is weak. It would be mistaken to infer that he is assenting to neither X nor Y will win the election. His utterance leaves open the possibility that X will win the election or that Y will.sIn Incurvati & Schlöder (2017) we develop a bilateral logic in which rejection is weak. In this logic, however, the rules for negation are, on the face of it, not harmonious. The issue can be addressed by extending the bilateralist approach to a multilateralist one. In Incurvati & Schlöder (2019) we present evidence for the existence of a speech act of weak assertion, linguistically realized using perhaps in otherwise assertoric contexts. Thus, in uttering (2a) in standard contexts one performs the familiar assertion (henceforth strong assertion) of it is raining. By contrast, an utterance of (2b) serves to realize the weak assertion of it is raining. s(2)sa. It is raining.sb. 
Perhaps it is raining.sStrong assertion, rejection and weak assertion can be embedded within a Stalnakerian model of conversation. Stalnaker takes the essential effect of strongly asserting p to be that of proposing the addition of p to the common ground. In Incurvati & Schlöder (2017) we argue that the essential effect of rejecting p is to prevent the addition of p to the common ground (which is not the same as proposing to add not p to it). And in Incurvati & Schlöder (2019) we argue that the essential effect of weakly asserting p is to prevent the addition of not p to the common ground.sIn the next section, we present a multilateral system involving weak assertion, strong assertion and (weak) rejection. Before doing so, however, we should clarify how consequence is best understood within a multilateral setting. Consider, for instance, the inference from the strong assertion of not A to the rejection of A. The validity of this inference does not mean that anyone strongly asserting not A is also explicitly rejecting A (see Dutilh Novaes, 2015 for a similar point). Nor does it mean that anyone assenting to not A is also in the cognitive state of dissenting from A: since arbitrarily many further attitudes follow from assent to any given proposition (e.g. by disjunction introduction), this notion of consequence would imply that anyone who expresses a single attitude towards a single proposition must have unboundedly many attitudes to unboundedly many propositions. This is implausible (Harman, 1986; Restall, 2005).sProperly understood, the multilateralist notion of consequence is social. The proof rules determine which attitudes one is committed to have (see also Searle & Vanderveken, 1985; Dutilh Novaes, 2015; Incurvati & Schlöder, 2017). Someone who explicitly assents to not A need not also hold the attitude of dissent towards A, since she may fail to draw the inference licensed by (s$+\hspace {-0.1em}\neg $sE.). Nevertheless, we may say that she is committed to dissenting from A, since if the inference is pointed out to her, she must dissent from A or admit to a mistake.s3sEpistemic multilateral logicsWe are now ready to present epistemic multilateral logic (s$\mathsf {EML}$s), a multilateral system for reasoning about epistemic modality. As in standard bilateral systems, in s$\mathsf {EML}$sformulae are signed. More specifically, the language s$\mathcal {L}_{EML}$sof s$\mathsf {EML}$sis characterized as follows. We have a countable infinity of propositional atoms s$p_1, \ldots , p_n$s. We then say that A is a sentence of s$\mathcal {L}_{EML}$sif it belongs to the smallest class containing the propositional atoms and closed under applications of the unary connective s$\neg $s, the binary connective s$\wedge $sand the operator s$\Diamond $s. We define s$A \rightarrow B$sin the usual classical way, that is as s$\neg (A \wedge \neg B)$s. Moreover, we define s$\Box A$sas s$\neg \Diamond \neg A$s. Finally, we say that s$\varphi $sis a signed formula if it is obtained by prefixing a sentence of s$\mathcal {L}_{EML}$swith one of s$+\hspace {-0.1em}$s, s$\ominus \hspace {-0.08em}$sand s$\oplus \hspace {-0.08em}$s. These are force-markers and stand, respectively, for strong assertion, (weak) rejection and weak assertion. s$\mathcal {L}_{EML}$salso includes a symbol s$\perp $s, but this will be considered neither a sentence nor a (0-place) connective, as is sometimes the case, but a punctuation mark indicating that a logical dead end has been reached (see Tennant, 1999; Rumfitt, 2000). 
In general, we use lowercase Greek letters for signed formulae and uppercase Latin letters for unsigned sentences.s3.1sProof theorysThe proof theory of s$\mathsf {EML}$sis formulated by means of natural deduction rules. We briefly discuss the rules as described in Incurvati & Schlöder (2019).sFor conjunction, we simply take the standard rules and prefix each sentence with the strong assertion sign.sThe rules say that from two strongly asserted propositions one can infer the strong assertion of their conjunction, and that from a strongly asserted conjunction one can infer the strong assertion of either conjunct.sLinguistic analysis reveals that weakly asserting a proposition has the same consequences as rejecting its negation (see Incurvati & Schlöder, 2019). Hence, the weak assertion of A and the rejection of not A should be interderivable. Linguistic analysis also shows that rejecting a proposition has the same consequences as weakly asserting its negation. Thus, the rejection of A and the weak assertion of not A should be interderivable too. This licenses the following rules for negation.sThe rules for the epistemic possibility operator are based on two observations. The first is that uttering perhaps A has the same consequences as uttering it might be that A. It follows that the weak assertion of A and the strong assertion of s$\Diamond A$sshould be interderivable.sThe second observation is that uttering perhaps A has the same consequences as uttering perhaps it might be that A. It follows that the weak assertion of A and the weak assertion of s$\Diamond A$sshould be interderivable too.sThe rules for s$\neg $sand s$\Diamond $sare clearly harmonious, since the elimination rules are the inverses of the corresponding introduction rules. They are also simple and pure in Dummett’s (1991, p. 257) sense: only one logical constant features in them and this constant occurs only once.sWe have presented the operational rules of s$\mathsf {EML}$s—that is, rules for the introduction and elimination of its operators. But we are not quite finished yet. For in multilateral systems (just as in bilateral systems) we also need rules that specify how the speech-act markers interact. Such rules are known as coordination principles and are needed to validate mixed inferences—inferences involving propositions that are uttered with different forces and are therefore prefixed by different signs. One example, involving strong assertion and rejection, is the seemingly valid pattern of inference that allows one to conclude the rejection of p from the strong assertion of if p, then q and the rejection of q (see Smiley, 1996). Another example, involving weak and strong assertion, is the inference pattern of weak modus ponens, which allows one to conclude the weak assertion of q from the weak assertion of p and the strong assertion of if p, then q (see Incurvati & Schlöder, 2019). That is, from if p, then q and perhaps p one may infer perhaps q.sAs in standard bilateral systems, the rules coordinating strong assertion and rejection characterize these speech acts as contraries.s(Rejection) states that strong assertion and rejection are incompatible: it is absurd to both propose and prevent the addition of the same proposition to the common ground. (SRs$_1$s) says that if, in the presence of one’s extant commitments, strongly asserting a proposition leads to absurdity, then one is already committed to preventing the addition of that proposition to the common ground. A similar reading of (SRs$_2$s) is available. 
The conjunction of (SRs$_1$s) and (SRs$_2$s) is known as the Smileian reductio principle, after Smiley (1996).sThe remaining coordination principles characterize weak assertion as subaltern to its strong counterpart. We write s$+\vdots$sfor a derivation in which all premisses and undischarged assumptions are strongly asserted sentences, i.e. formulae of the form s$+\hspace {-0.1em} A$s. Since s$\bot $sis treated as a punctuation mark, in (Weak Inference) we distinguish between the case in which one infers s$+\hspace {-0.1em} B$sin the subderivation and the case in which one infers s$\bot $s. In the former case, (Weak Inference) allows one to conclude s$+\hspace {-0.1em} B$s; in the latter case, it allows one to conclude s$\bot $s.s3s(Assertion) ensures that in performing the strong assertion of a proposition, one is committed to dissenting from its negation. (Weak Inference) ensures that weak assertion is closed under strongly asserted implication and hence that inferences like if p then q; perhaps p; therefore perhaps q are valid. For if one’s evidential situation sanctions perhaps p and one knows that any situation where p is the case is also a situation where q is the case, then one is entitled to conclude perhaps q.sThe restrictions on the subderivation of (Weak Inference) are due to the specificity problem (Incurvati & Schlöder, 2017). As noted by Imogen Dickie (2010), a correct strong assertion of A requires there being evidence for A. By contrast, the speech act of weak rejection is ‘messy’ in that it makes unspecific demands about evidence: a weak rejection of A may be correct because there is evidence against A, but may also be correct because of the (mere) absence of evidence for A (Incurvati & Schlöder, 2017). This means the following for the justification of (Weak Inference). Suppose that in the base context one’s evidential status sanctions perhaps A. To apply (Weak Inference), one then considers the hypothetical situation in which A is the case and attempts to derive B. In the hypothetical situation in which A is the case, there may be evidence that is not available in the base context (e.g. evidence for A). Thus, in the subderivation of (Weak Inference), one may not use rejected premisses from the base context that one rejects for lack of evidence. Due to the unspecificity of weak rejection, this means that one may not use any premisses that are weakly rejected. Since s$\oplus $sswitches with s$\ominus $sand s$\oplus $sswitches with s$+\hspace {-0.1em}\Diamond $s, one may also not use any weakly asserted premisses or apply s$\Diamond $s-Eliminations to strongly asserted premisses.s4s3.2sDerived rulessThis concludes the exposition of the proof theory of s$\mathsf {EML}$s, and we use s$\vdash $sto denote derivability in s$\mathsf {EML}$s. To simplify the presentation in the remainder of the paper, however, it will be useful to present some additional derived rules of s$\mathsf {EML}$s, characterizing the behavior of the primitive connectives under some of the speech acts not covered by the basic rules and the behavior of the defined connectives. This will also serve to give a flavour of how derivations in s$\mathsf {EML}$swork.sWe begin with a rule which specifies how to introduce the strong assertion of a negation.sProposition 3.1.sThe following rule is derivable in s$\mathsf {EML}$s.sProof.s$\mathsf {EML}$sderives (s$+\hspace {-0.1em}\neg $sI.) 
as follows.s□sNext, we have rules specifying the behavior of conjunction under weak assertion.s5sProposition 3.2.sThe following rules are derivable in s$\mathsf {EML}$s.sProof.s$\mathsf {EML}$sderives (s$\oplus \hspace {-0.08em}\land $sI.s$_1$s) as follows.sThe case of (s$\oplus \hspace {-0.08em}\land $sI.s$_2$s) is analogous. (s$\oplus \hspace {-0.08em}\land $sE.s$_1$s) follows from (Weak Inference):s(s$\oplus \hspace {-0.08em}\land $sE.s$_2$s) is analogous. □sNote that these derived rules are applicable in arbitrary proof contexts, even when their premisses are derived using s$\Diamond $s-Elimination rules. While the derivations make use of (s$+\hspace {-0.1em}\neg $sI.) and (Weak Inference)—rules that disallow s$\Diamond $s-Eliminations—the applications of (s$+\hspace {-0.1em}\neg $sI.) and (Weak Inference) occur within self-contained subderivations and hence are correct irrespective of the wider proof context.s3.2.1sThe material conditionalsAs mentioned, we are taking the conditional s$A \rightarrow B$sto be defined as s$\neg (A \wedge \neg B)$s. Given the classicality of our calculus (to be shown below), this means that the conditional will be material. The following rules for strongly asserted conditionals can be derived.sProof.s(s$+\hspace {-0.1em}\rightarrow $sI.) is derivable as follows.sAnd (s$+\hspace {-0.1em}\rightarrow $sE.) is derivable as follows:s□sIn addition, one can derive the rule of weak modus ponens (WMP).sProof.s□sBy (s$+\hspace {-0.1em}\Diamond $sI.) and (s$+\hspace {-0.1em}\Diamond $sE.), (WMP) entails the derivability of epistemic modus ponens (EMP).s3.2.2sNecessity modals$\Box A$sis defined as s$\neg \Diamond \neg A$s. The following are derived rules for s$\Box $s:sProof.s(s$+\hspace {-0.1em}\Box $sI.) is derivable as follows:s(s$+\hspace {-0.1em}\Box $sE.) is derivable as follows:s□sNote that the derivability of (s$+\hspace {-0.1em}\Box $sI.) does not imply that s$+\hspace {-0.1em} (A\rightarrow \Box A)$s, since the derivation of s$+\hspace {-0.1em} \neg \Diamond \neg A$sfrom s$+\hspace {-0.1em} A$suses a s$\Diamond $s-Elimination rule, which rules out an application of (s$+\hspace {-0.1em}\rightarrow $sI.). In fact, there is no derivation of s$+\hspace {-0.1em} (A\rightarrow \Box A)$sin s$\mathsf {EML}$s, as shown by the soundness of s$\mathsf {EML}$swith respect to s$\mathbf {S5}$smodulo an appropriate translation (see §4).s3.3sClassicalitysWe now show that, in a defined sense, the logic of strong assertion extends classical logic (in much the same way that normal modal logics extend classical logic).sLet s$\sigma : \text {At} \rightarrow \text {wff}_{\text {EML}}$sbe any mapping from propositional atoms to formulae in the language s$\mathcal {L}_{EML}$sof s$\mathsf {EML}$s. If A is a formula in the language s$\mathcal {L}_{PL}$sof propositional logic, write s$\sigma [A]$sfor the s$\mathcal {L}_{EML}$s-formula that results from replacing every atom p in A with s$\sigma (p)$s. Moreover, let s$\models ^{\text {CPL}}$sbe the consequence relation of classical propositional logic. We have:sTheorem 3.3s(Supra-Classicality).sLet s$\Gamma $sbe a set of s$\mathcal {L}_{PL}$s-formulae and A an s$\mathcal {L}_{PL}$s-formula. If s$\Gamma \models ^{\text {CPL}} A$s, then s$\{+\hspace {-0.1em} \sigma [B] \mid B \in \Gamma \} \vdash +\hspace {-0.1em}\sigma [A]$s.sProof.sSince s$\mathsf {EML}$svalidates modus ponens, it suffices to show that the strongly asserted versions of the axioms of the propositional calculus are theorems of s$\mathsf {EML}$s. 
That is, for arbitrary A, B and C in s$\mathcal {L}_{EML}$s, we have: s$$ \begin{align*} \begin{array}{l} \vdash +\hspace{-0.1em}(A \rightarrow \neg\neg A),\\ \vdash +\hspace{-0.1em}(\neg\neg A\rightarrow A),\\ \vdash +\hspace{-0.1em}((A \rightarrow B) \rightarrow (\neg B \rightarrow \neg A)),\\ \vdash +\hspace{-0.1em}((A \rightarrow (B \rightarrow C)) \rightarrow ((A \rightarrow B) \rightarrow (A \rightarrow C))),\\ \vdash +\hspace{-0.1em}((A \rightarrow (B \rightarrow C)) \rightarrow (B \rightarrow (A \rightarrow C))),\\ \vdash +\hspace{-0.1em}(A \rightarrow (B \rightarrow A)). \end{array} \end{align*} $$sThese are easy to check (see Incurvati & Schlöder, 2019). □sTogether with the Soundness result (Theorem 4.1) we will prove in §4, this yields the following corollary.sCorollary 3.4s(Classicality).sLet s$\Gamma $sbe a set of s$\mathcal {L}_{PL}$s-formulae and A an s$\mathcal {L}_{PL}$s-formula. s$\Gamma \models ^{\text {CPL}} A$siff s$\{+\hspace {-0.1em} B \mid B \in \Gamma \} \vdash +\hspace {-0.1em} A$s.sProof.sThe left-to-right direction follows from Theorem 3.3. The right-to-left direction is a corollary of s$\mathbf {S5}$s-Soundness (Theorem 4.1): if s$\{+\hspace {-0.1em} B \mid B \in \Gamma \} \vdash +\hspace {-0.1em} A$s, then s$\{\Box B \mid B \in \Gamma \} \models ^{\mathbf {S5}} \Box A$s, which for propositional s$\Gamma $sand A entails s$\Gamma \models ^{\text {CPL}} A$s, since all worlds in an s$\mathbf {S5}$s-model are models of classical propositional logic. □s3.4s$\mathsf {EML}$sextends s$\mathbf {S5}$sWe now show that s$\mathsf {EML}$sextends s$\mathbf {S5}$s, i.e. that the logic of strong assertion validates every s$\mathbf {S5}$s-valid argument. To this end, we shall prove that if A is an s$\mathbf {S5}$s-tautology, then its strongly asserted counterpart is a theorem of s$\mathsf {EML}$s. That is:sTheorem 3.5.sIf s$\models ^{\mathbf {S5}} A$s, then s$\vdash +\hspace {-0.1em} A$s.sThis will yield the desired result.sCorollary 3.6.sIf s$\Gamma \models ^{\mathbf {S5}} A$s, then s$\{+\hspace {-0.1em} B \mid B \in \Gamma \} \vdash +\hspace {-0.1em} A$s.sProof.sIf s$\Gamma \models ^{\mathbf {S5}} A$s, then there is a finite s$\Gamma '$ssuch that s$\models ^{\mathbf {S5}} \bigwedge _{B\in \Gamma '} B \rightarrow A$s. By Theorem 3.5, s$\vdash +\hspace {-0.1em} (\bigwedge _{B\in \Gamma '} B \rightarrow A)$s. Since the (s$+\hspace {-0.1em}\rightarrow $sE.) rule is derivable in s$\mathsf {EML}$s, it follows that s$\{+\hspace {-0.1em} B \mid B \in \Gamma \} \vdash +\hspace {-0.1em} A$s. □sTowards the proof of Theorem 3.5, we first prove a technical lemma.sLemma 3.7.sThe following rule is derivable in s$\mathsf {EML}$s.sProof.s□sWe are now ready to prove the main result of this section.sProofsof Theorem 3.5s$\mathsf {EML}$sproves all substitution-instances of classical tautologies (Theorem 3.3). Moreover, by the (s$+\hspace {-0.1em}\Box $sI.) rule, we have that if s$\vdash +\hspace {-0.1em} A$sthen also s$\vdash +\hspace {-0.1em}\Box A$s. Thus, it suffices to show that the s$\mathbf {KT5}$saxioms are derivable in s$\mathsf {EML}$s.sAxiom s$\mathbf {K}$scan be written on the signature s$\{\neg ,\land ,\Diamond \}$sas s$$ \begin{align*} (K)\ \neg (\neg \Diamond (A\land \neg B) \land \neg \Diamond \neg A \land \Diamond \neg B). \end{align*} $$sThe following derivation witnesses s$\vdash +\hspace {-0.1em} \neg (\neg \Diamond (A\land \neg B) \land \neg \Diamond \neg A \land \Diamond \neg B)$s:sAxiom s$\mathbf {T}$sfollows immediately from (s$+\hspace {-0.1em}\Box $sE.) 
and ($+\rightarrow$I.), as the derivation of ($+\Box$E.) does not involve $\Diamond$-Elimination rules.

Axiom $\mathbf{5}$ can be written on the signature $\{\neg,\land,\Diamond\}$ as $\neg(\Diamond A \land \Diamond\neg\Diamond A)$. The following derivation witnesses $\vdash +\neg(\Diamond A \land \Diamond\neg\Diamond A)$: □

The derivations of Axioms K and 5 make use of $\Diamond$-Elimination rules, but we can apply these axioms within (Weak Inference) by assuming them and then discharging them. To wit, suppose that from $+A$ we can infer $+B$ using Axiom K. Then the following application of (Weak Inference) is, strictly speaking, incorrect. But we may still derive $\oplus B$ from $\oplus A$ as follows. Here, the application of (Weak Inference) is licit since it uses $+(K)$ as a dischargeable assumption. If this application of (Weak Inference) occurs within another application of (Weak Inference), one can defer the derivation of $+(K)$ to the outermost proof level. Clearly, this method generalizes to uses of Axiom 5 within applications of (Weak Inference). So we will use the $\mathbf{S5}$ axioms freely from now on. In general, if $+A$ is a theorem—if one can show it from no side premisses, even when using $\Diamond$-Eliminations—one may use $+A$ even in proof contexts that disallow $\Diamond$-Eliminations by assuming $+A$, conditionalizing on A and deriving $+A$ at the outermost proof level.

4 Model theory for epistemic multilateral logic

We have motivated epistemic multilateral logic from an inferentialist perspective and have therefore focused on the proof theory so far. We now proceed to provide a model theory for the logic. We do this by providing a translation of epistemic multilateral logic into modal logic and showing that the result is sound and complete with respect to $\mathbf{S5}$.

4.1 Soundness

We begin with the translation. The idea is to translate strong assertion with necessity, rejection with possible falsity, and weak assertion with possibility, but note that this is a translation only in the technical sense: it is not intended to provide the intended interpretation of the force-markers. Formally, we define a mapping $\tau$ from $\mathcal{L}_{EML}$-formulae to formulae in the language $\mathcal{L}_{ML}$ of modal logic.
$$ \begin{align*}\tau(\varphi) = \begin{cases} \Box\psi\text{, if }\varphi = +\psi\\ \Diamond\neg\psi\text{, if }\varphi = \ominus\psi\\ \Diamond\psi\text{, if }\varphi = \oplus\psi \end{cases}\end{align*} $$
If $\Gamma$ is a set of $\mathcal{L}_{EML}$-formulae, we write $\tau[\Gamma]$ for $\{\tau(\varphi): \varphi \in \Gamma\}$.
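As an aside, the translation $\tau$ and the $\mathbf{S5}$ model theory are simple enough to mechanize for small examples. The following Python sketch is a minimal illustration and not part of the official calculus or its metatheory: the formula representation and the helper names (`tau`, `holds`, `satisfiable`) are our own choices, and, for satisfiability purposes, an $\mathbf{S5}$ model is taken in the usual way to be a nonempty set of valuations with universal accessibility.

```python
from itertools import product

# Formulae as nested tuples over the signature {not, and, Dia};
# Box is defined, as in the text, by Box A := not Dia not A.
def Not(a): return ('not', a)
def And(a, b): return ('and', a, b)
def Dia(a): return ('dia', a)
def Box(a): return Not(Dia(Not(a)))

def tau(sign, body):
    """Translate a signed EML formula: '+' ~> Box, 'ominus' ~> Dia not, 'oplus' ~> Dia."""
    return {'+': Box(body), 'ominus': Dia(Not(body)), 'oplus': Dia(body)}[sign]

def holds(phi, w, worlds):
    """Truth of phi at world w, where an S5 model is a list of valuations (sets of true atoms)."""
    if phi[0] == 'atom':
        return phi[1] in worlds[w]
    if phi[0] == 'not':
        return not holds(phi[1], w, worlds)
    if phi[0] == 'and':
        return holds(phi[1], w, worlds) and holds(phi[2], w, worlds)
    if phi[0] == 'dia':  # universal accessibility: some world verifies the body
        return any(holds(phi[1], v, worlds) for v in range(len(worlds)))

def satisfiable(phi, atoms, max_worlds=4):
    """Brute-force search for an S5 model of phi with at most max_worlds worlds."""
    for n in range(1, max_worlds + 1):
        for vals in product(product([False, True], repeat=len(atoms)), repeat=n):
            worlds = [{a for a, t in zip(atoms, v) if t} for v in vals]
            if any(holds(phi, w, worlds) for w in range(n)):
                return worlds
    return None

p, q = ('atom', 'p'), ('atom', 'q')

# Axiom 5 on the signature {not, and, Dia} is ~(Dia A & Dia ~Dia A) (see §3.4);
# its negation should have no S5 model.
print(satisfiable(And(Dia(p), Dia(Not(Dia(p)))), ['p']))                       # None

# Likewise for the negation of the {not, and, Dia} form of Axiom K from §3.4.
neg_K = And(And(Not(Dia(And(p, Not(q)))), Not(Dia(Not(p)))), Dia(Not(q)))
print(satisfiable(neg_K, ['p', 'q']))                                          # None

# A force-marked example: tau(oplus p) = Dia p is satisfiable in a one-world model.
print(satisfiable(tau('oplus', p), ['p']))                                     # [{'p'}]
```

Since an $\mathbf{S5}$-satisfiable formula has a model with no more worlds than it has $\Diamond$-subformulae (plus one), a suitable choice of `max_worlds` makes the bounded search complete for the small formulae used here.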
We now prove that, under this translation, $\mathsf{EML}$ is sound with respect to $\mathbf{S5}$. That is:

Theorem 4.1 (Soundness). Let $\Gamma$ be a set of $\mathcal{L}_{EML}$-formulae. If $\Gamma \vdash \varphi$ then $\tau[\Gamma] \models^{\mathbf{S5}} \tau(\varphi)$.

The main challenge is to show that the restrictions on (Weak Inference) are effective in ensuring the soundness of the calculus. To this end, let $\mathsf{EML}^+$ be the calculus of $\mathsf{EML}$ plus the derivable rules for conjunction under weak assertion, which we recall for convenience. Since these are derivable in $\mathsf{EML}$, the soundness of $\mathsf{EML}$ is equivalent to the soundness of $\mathsf{EML}^+$. We use $\vdash^+$ to denote derivability in $\mathsf{EML}^+$ and write $\Gamma \vdash^+_D \varphi$ to indicate that the derivation D witnesses the existence of this derivability relation between $\Gamma$ and $\varphi$.

Now let $\mathsf{EML}^-$ be the calculus of $\mathsf{EML}^+$ without (Weak Inference) and write $\vdash^-$ for the resulting derivability relation. By inspecting the proofs of ($+\rightarrow$E.) and (WMP), one can see that ($+\rightarrow$E.) and (WMP) are derivable in $\mathsf{EML}^-$. (The conditional proof rule ($+\rightarrow$I.) is not derivable in $\mathsf{EML}^-$, but this does not matter for present purposes.)

Now, it is straightforward to show that $\mathsf{EML}^-$ is sound with respect to $\mathbf{S5}$ modulo the translation $\tau$.

Theorem 4.2 (Pre-Soundness). Let $\Gamma$ be a set of $\mathcal{L}_{EML}$-formulae and $\varphi$ an $\mathcal{L}_{EML}$-formula. If $\Gamma \vdash^- \varphi$ then $\tau[\Gamma] \models^{\mathbf{S5}} \tau(\varphi)$.

The proof is a standard induction on the length of derivations and is therefore omitted. Next, we prove the soundness of the full calculus. First, we need an auxiliary definition and a technical lemma. The following definition provides the tools to rewrite a proof D not involving $\Diamond$-Eliminations into a proof in which $\Diamond$s occur only in sentences that translate back to $\mathbf{S5}$-tautologies.

Definition 1. Suppose $\Gamma \vdash^+_D \varphi$ where D does not use $\Diamond$-Elimination rules, $\Gamma$ contains only strongly asserted formulae and D uses all premisses in $\Gamma$ (in particular, then, $\Gamma$ is finite).

Construct a mapping $\pi^D$ as follows: for each formula Z that occurs anywhere in D pick an unused (in D) propositional atom $c_Z$ and let $\pi^{D}(+Z) = +c_Z$, $\pi^{D}(\ominus Z) = \ominus c_Z$ and $\pi^{D}(\oplus Z) = \oplus c_Z$ (this is easily possible, since D is finite).

Let $\Sigma^D$ be the set containing exactly the following formulae. For any formulae X and Y occurring anywhere in D:

a. $+(c_{\neg X} \rightarrow \neg c_X)$ and $+(\neg c_X \rightarrow c_{\neg X})$.
b. $+(c_{\neg\neg X} \rightarrow c_X)$ and $+(c_X \rightarrow c_{\neg\neg X})$.
c. $+(c_{X \land Y} \rightarrow (c_X \land c_Y))$ and $+((c_X \land c_Y) \rightarrow c_{X \land Y})$.
d. $+(c_X \rightarrow c_{\Diamond X})$.
e. $+(\Diamond c_{\Diamond X} \rightarrow c_{\Diamond X})$.

Note that all formulae in $\Sigma^D$ substitute to $\mathbf{S5}$-tautologies under the map $c_X \mapsto X$ (i.e. the inverse of $\pi^D$). The following lemma shows that these added assumptions suffice to rewrite the proof D under the translation $\pi^D$.

Lemma 4.3. Suppose $\Gamma \vdash^+_D \varphi$ where D does not use $\Diamond$-Elimination rules and $\Gamma$ contains only strongly asserted formulae.
Then there is a derivation $D'$ such that $\pi^D[\Gamma] \cup \Sigma^D \vdash^+_{D'} \pi^D(\varphi)$ and $D'$ contains no more applications of (Weak Inference) than D.

Proof. We show by induction on the length n of D that every derivation D can be rewritten to a derivation $D'$ as in the Lemma. The base case $n=1$ is trivial since if D has length $1$, then $\varphi \in \Gamma$ and hence also $\pi^D(\varphi) \in \pi^D[\Gamma]$.

So let D be a derivation of length $n > 1$. By the induction hypothesis, we know that all proper subderivations of D can be rewritten as required for the Lemma. Hence, it suffices to show that the last rule applied in D can be rewritten as well. In the induction steps we will use derivations E that are proper subderivations of D. Without loss of generality, we can assume that $\pi^E \subseteq \pi^D$ for all such cases, i.e. that $\pi^E(\varphi) = \pi^D(\varphi)$ for all $\varphi$ occurring in E. (If $\pi^E$ is different, one only needs to apply an appropriate substitution.) In particular, this means that $\Sigma^E \subseteq \Sigma^D$. Also, we usually write $+c_X$ ($\ominus c_X$, $\oplus c_X$) for $\pi^D(+X)$ ($\pi^D(\ominus X)$, $\pi^D(\oplus X)$).

The coordination principles require no rewriting aside from substituting $c_X$ for X.

• If the last rule applied in D is (Assertion) to conclude $\oplus X$, then there is a shorter derivation E such that $\Gamma \vdash^+_E +X$. By the induction hypothesis, there is an $E'$ such that $\pi^D[\Gamma] \cup \Sigma^D \vdash^+_{E'} +c_X$. We then obtain $D'$ from $E'$ by a final application of (Assertion) to derive $\oplus c_X$ from $+c_X$. Thus, $\pi^D[\Gamma] \cup \Sigma^D \vdash^+ \pi^D(\oplus X)$.

• (Weak Inference), (Rejection), (SR$_1$) and (SR$_2$) can be immediately transformed like (Assertion).

The other rules require some work and use the premisses added to $\Sigma^D$. We only show a selection of cases, since the method is uniform.

• The clauses (a.), (b.) and (c.) of the construction of $\Sigma^D$ can be used to translate applications of ($+\land$I.), ($+\land$E.), ($\oplus\land$I.), ($\oplus\land$E.), ($\ominus\neg$E.), ($\ominus\neg$I.), ($\oplus\neg$I.) and ($\oplus\neg$E.). This is easy to check: the formulae in $\Sigma^D$ in combination with ($+\rightarrow$E.) and (WMP) allow us to make the appropriate inferences. To illustrate the method, we cover the ($\ominus\neg$E.) case: if D concludes with an application of ($\ominus\neg$E.) to move from $\ominus\neg Z$ to $\oplus Z$, there is a derivation E such that $\pi^D[\Gamma]\cup \Sigma^D \vdash^+_E \ominus c_{\neg Z}$. Then obtain $\pi^D[\Gamma]\cup \Sigma^D \vdash^+ \oplus c_Z$ as follows:

• It is left to treat applications of ($\oplus\Diamond$I.) and ($+\Diamond$I.) in D. So suppose first that D concludes with an application of ($\oplus\Diamond$I.)
to move from $\oplus Z$ to $\oplus\Diamond Z$. By the induction hypothesis, there is a derivation E such that $\pi^D[\Gamma]\cup \Sigma^D \vdash^+_E \oplus c_Z$. We then obtain $\pi^D[\Gamma]\cup \Sigma^D \vdash^+ \oplus c_{\Diamond Z}$ as follows:

Next, suppose that D concludes with an application of ($+\Diamond$I.) to move from $\oplus Z$ to $+\Diamond Z$. By the induction hypothesis, there is a derivation E such that $\pi^D[\Gamma]\cup \Sigma^D \vdash^+_E \oplus c_Z$. We then obtain $\pi^D[\Gamma]\cup \Sigma^D \vdash^+ +c_{\Diamond Z}$ as follows:

This concludes the induction. □

Note that no applications of (Weak Inference) were added when translating D to $D'$, since (WMP) and ($+\rightarrow$E.) can be derived in $\mathsf{EML}^+$ from ($\oplus\land$E.) without using (Weak Inference).

Now we are ready to prove the Soundness of the full calculus.

Proof of Theorem 4.1. We prove the statement of the theorem for $\vdash^+$, which immediately entails the theorem. Without loss of generality, we may assume that $\Gamma$ contains only strongly asserted formulae: if it contains $\ominus A$, one can substitute with $+\Diamond\neg A$, and if it contains $\oplus A$, one can substitute with $+\Diamond A$.

The proof proceeds by induction on the number n of times that (Weak Inference) is applied in a derivation. The base case $n=0$ is exactly Theorem 4.2.

Suppose that Soundness holds for all derivations D in which (Weak Inference) is applied fewer than n times. We want to show that derivations with n applications are sound. Let D be a derivation with n applications of (Weak Inference) and consider any subderivation $D'$ that ends in one such application. Note that we do not need to treat applications of (Weak Inference) to conclude $\bot$, since they are equivalent to the case in which B is $p\land\neg p$ for an arbitrary p. Thus, the local proof context is this:

In this situation, $D'$ does not use $\Diamond$-Elimination rules, and there is a finite subset $\Gamma' \subseteq \Gamma$ such that all formulae in $\Gamma'$ are signed by $+$, such that $\Gamma', +A \vdash^+_{D'} +B$ and $\Gamma' \vdash^+ \oplus A$. To conclude the proof, it suffices to show that $\tau[\Gamma'] \models^{\mathbf{S5}} \tau(\oplus B)$.

For readability we will henceforth omit mentioning $\tau$, so that, say, $\Gamma' \models^{\mathbf{S5}} +A$ is understood to stand for $\tau[\Gamma']\models^{\mathbf{S5}} \tau(+A)$. Since $D'$ contains fewer than n applications of (Weak Inference), by the induction hypothesis we have that

(I1)
$$ \begin{align} \Gamma', +A \models^{\mathbf{S5}} +B \end{align} $$

and that

(I2)
$$ \begin{align} \Gamma' \models^{\mathbf{S5}} \oplus A. \end{align} $$

The proof that $\Gamma' \models^{\mathbf{S5}} \Diamond B$ now proceeds in two steps.
i. We show that $\Gamma' \models^{\mathbf{S5}} \oplus B$ if A and B are $\mathcal{L}_{PL}$-formulae and $\Gamma'$ can be split as $\Gamma' = \Delta \cup \Theta$ such that: for all $+C \in \Delta$, C is an $\mathcal{L}_{PL}$-formula; and for all $+D \in \Theta$, $D = \Diamond X \rightarrow X$ for some $\mathcal{L}_{PL}$-formula X.

ii. By Lemma 4.3, any other application of (Weak Inference) can be reduced to (i.).

So, first assume that A and B are $\mathcal{L}_{PL}$-formulae and $\Gamma' = \Delta \cup \Theta$ as above. We need to show that for any model $V = \langle W^V,R^V,V^V,w^V \rangle$ of $\Gamma'$, we have that $V, w^V \Vdash \Diamond B$, where $\Vdash$ is the usual satisfaction relation for worlds in modal logic. Assume for reductio that there is a counterexample, i.e. a V with $V,w^{V} \Vdash \Box\neg B$. By the induction hypothesis and in particular (I2), we also have that $V,w^V \Vdash \Diamond A$. It follows that $V,w^V \Vdash \Diamond(A\land\neg B)$. Let $v \in W^{V}$ be a witness, i.e. $V,v \Vdash A \land \neg B$. Note that for all $+C \in \Delta$ we have that $V,v \Vdash C$, since $V,w^{V} \Vdash \Box C$.

Now consider the model $V'$ such that: $W^{V'} = \{v\}$, $V^{V'}(v) = V^{V}(v)$, $R^{V'} = \{(v,v)\}$, $w^{V'} = v$. Note that all C with $+C \in \Delta$ are assumed to be $\mathcal{L}_{PL}$-formulae. That is, the fact that $V,v \Vdash C$ depends only on the valuation $V^{V}(v)$ and not on any other worlds in $W^V$. Thus it is also the case that $V',w^{V'} \Vdash C$ for all C with $+C \in \Delta$. For the same reason, $V',w^{V'} \Vdash A \land \neg B$. Also note that, since $V'$ has precisely one world, $V',w^{V'} \Vdash \Diamond X$ iff $V',w^{V'} \Vdash X$. So $V',w^{V'} \Vdash \Diamond X \rightarrow X$ for any X. Thus $V' \Vdash \Theta$.

Hence $V',w^{V'} \Vdash \Gamma'$. By construction, $V',w^{V'} \Vdash \Box A$ and $V', w^{V'} \Vdash \neg\Box B$. So $V'$ is a countermodel to $\Gamma'\cup\{+A\} \models^{\mathbf{S5}} +B$, which holds by the induction hypothesis (I1). Contradiction. Thus there is no such V. This shows (i.).

For (ii.), we relax our assumption so that A, B and the formulae in $\Gamma'$ may be arbitrary. By Lemma 4.3, we have a derivation $D''$ such that $\pi^{D'}[\Gamma'] \cup \Sigma^{D'}, +c_A \vdash^+_{D''} +c_B$ (writing $+c_A$ for $\pi^{D'}(+A)$, and similarly for B).

Note that since $\pi^{D'}$ maps everything to $\mathcal{L}_{PL}$-formulae, the elements of $\pi^{D'}[\Gamma'] \cup \Sigma^{D'}$ are as described in (i): $\pi^{D'}[\Gamma'] \cup \Sigma^{D'} = \Delta \cup \Theta$ with $\Theta$ being exactly the formulae added in clause (e.) of the construction of $\Sigma^{D'}$ (Definition 1). Thus we obtain $\pi^{D'}[\Gamma'], \Sigma^{D'} \models^{\mathbf{S5}} \Diamond c_B$ by the argument of (i). Note that the argument of (i) rests on the induction hypothesis, but this is still licit here since $D''$ does not contain more applications of (Weak Inference) than $D'$ (by Lemma 4.3).

Now let $\Sigma = (\pi^{D'})^{-1}[\Sigma^{D'}]$. Since $\mathbf{S5}$ is closed under uniform substitution, $\Gamma'\cup\Sigma \models^{\mathbf{S5}} \Diamond B$. But $\Sigma$ contains only $\mathbf{S5}$-tautologies (by inspection of Definition 1). Hence $\Gamma' \models^{\mathbf{S5}} \Diamond B$. □
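The proof just given turns on the fact that every formula in $\Sigma^{D}$ substitutes to an $\mathbf{S5}$-tautology; clause (e.) of Definition 1, for instance, substitutes to $\Diamond\Diamond X \rightarrow \Diamond X$. For small instances this can be confirmed with the toy search sketched above (the helpers `And`, `Not`, `Dia` and `satisfiable` are assumed to be in scope; the snippet itself is only an illustration): a formula is $\mathbf{S5}$-valid just in case its negation is unsatisfiable.

```python
p = ('atom', 'p')

def Implies(a, b): return Not(And(a, Not(b)))   # material conditional, defined as in §3.2.1

# Clause (e.) substitutes to Dia Dia X -> Dia X: no countermodel, hence S5-valid.
print(satisfiable(Not(Implies(Dia(Dia(p)), Dia(p))), ['p']))   # None

# By contrast, Dia X -> X is not S5-valid: a countermodel exists
# (cf. the remark below on the rules excluded from (Weak Inference)).
print(satisfiable(Not(Implies(Dia(p), p)), ['p']))             # a two-world countermodel
```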
It is worth noting why the argument above does not work for the rules excluded from (Weak Inference), i.e. ($+\Diamond$E.) and ($\oplus\Diamond$E.). The reason is that translating these rules in the proof of Lemma 4.3 would require us to add $+(c_{\Diamond Z} \rightarrow c_Z)$ to $\Sigma^D$ in Definition 1. This, however, does not substitute to an $\mathbf{S5}$-tautology, so the final step in the soundness proof would fail.

4.2 Completeness

We now show that, modulo the translation $\tau$ defined above, $\mathsf{EML}$ is also complete with respect to $\mathbf{S5}$.

Theorem 4.4 (Completeness). Let $\Gamma$ be a set of $\mathcal{L}_{EML}$-formulae and $\varphi$ an $\mathcal{L}_{EML}$-formula. If $\tau[\Gamma] \models^{\mathbf{S5}} \tau(\varphi)$ then $\Gamma \vdash \varphi$.

This is shown by a model existence theorem. The construction of a canonical term model has to respect the difference between derivations that use $\Diamond$-Eliminations and those that do not. To this end, we need some additional definitions. We write $\vdash^*$ to denote provability in $\mathsf{EML}$ without the rules for $\Diamond$-Elimination. We say that a set $\Gamma$ is S-consistent if $\Gamma$ contains only strongly asserted formulae and $\Gamma \not\vdash^* \bot$. And we say that $\Gamma$ is S-inconsistent if it contains only strongly asserted formulae and $\Gamma \vdash^* \bot$. Note that there are inconsistent S-consistent sets, e.g. $\{+p, +\Diamond\neg p\}$.

In the typical canonical construction, one takes maximally consistent sets of formulae to be the worlds. We will instead take maximally S-consistent sets of formulae. In the construction, we shall use the following technical lemmas.

Lemma 4.5. If $\Gamma \cup \{+A\}$ is S-inconsistent, then $\Gamma \vdash^* +\neg A$.

Proof. This is just another way to write ($+\neg$I.). □

Lemma 4.6. If $\Gamma$ contains only strongly asserted formulae and $\Gamma \vdash +((\bigwedge_{i < n} \neg B_i) \rightarrow \neg A)$, then $\Gamma \vdash +(\Diamond A\rightarrow \bigvee_{i < n}\Diamond B_i)$.

Proof. This follows from the fact that $\mathsf{EML}$ extends classical logic (Theorem 3.3) and proves all $\mathbf{S5}$ axioms (Theorem 3.5). □

Now we are ready to demonstrate a model existence result.

Theorem 4.7 (Model Existence). Let $\Gamma$ be a set of $\mathcal{L}_{EML}$-formulae. If $\Gamma$ is consistent, then there is an $\mathbf{S5}$-model M such that $M \models \tau[\Gamma]$.

Proof. Let $Cl^+(\Gamma)$ consist of all strongly asserted formulae in the closure of $\Gamma$ under derivability $\vdash$ in $\mathsf{EML}$. Concisely, $Cl^+(\Gamma) = \{+A \mid \Gamma \vdash +A\}$. Moreover, let $\mathcal{E} = \{\Delta \mid \Delta$ is a maximal S-consistent extension of $Cl^+(\Gamma)\}$ and define a model $M = \langle W,R,V\rangle$ as follows.
• $W = \mathcal{E}$.
• $wRv$ iff (for all $+A \in v$: $+\Diamond A \in w$).
• $V(w) = \{p \mid +p \in w\}$.

Now we show by induction on the complexity of sentences A that: $+A \in w$ iff $M,w \Vdash A$. The cases for atomic A and $A = B \land C$ are straightforward, so we only cover negation and the modal.

• If $+\neg A \in w$, then $+A \notin w$. By the induction hypothesis, $M, w \not\Vdash A$. Thus $M,w \Vdash \neg A$. Conversely, if $M,w \Vdash \neg A$, then $M,w \not\Vdash A$, so $+A\notin w$ by the induction hypothesis. Since w is a maximally S-consistent set, this means that $+A$ is S-inconsistent with w. By Lemma 4.5, $+\neg A \in w$.

• Suppose $+\Diamond A \in w$. We first show that $+A$ is S-consistent with $Cl^+(\Gamma)$. Towards a contradiction, assume $+A$ is S-inconsistent with $Cl^+(\Gamma)$. That is, $Cl^+(\Gamma) \vdash +\neg A$ by Lemma 4.5. Because $Cl^+(\Gamma)$ is closed under $\vdash$, this means that $+\neg\Diamond A \in Cl^+(\Gamma)$ by ($+\Box$I.). But then $+\Diamond A$ is S-inconsistent with $Cl^+(\Gamma)$, hence $+\Diamond A \notin w$. Contradiction.

This establishes that there is a world (i.e. a maximally S-consistent extension of $Cl^+(\Gamma)$) that contains $+A$. We now show that one such world v is accessible from w, i.e. that $wRv$.

Let $\{B_i \mid i \in \omega\}$ be the sentences such that $+\Diamond B_i \notin w$. We need a v such that $+A \in v$ and, for all i, $+B_i \notin v$. Note that if $+\Diamond B_i \notin w$, then $+\Diamond B_i$ is S-inconsistent with w, so $+\neg\Diamond B_i \in w$. In particular also $+\neg B_i \in w$, since $Cl^+(\Gamma)$ contains Axiom T. Thus, $Cl^+(\Gamma) \cup \{+\neg B_i \mid i \in \omega\}$ is S-consistent, since it is a subset of w.

Now, if $Cl^+(\Gamma) \cup \{+\neg B_i \mid i \in \omega\} \cup \{+A\}$ is S-consistent, there is a v as needed. Towards a contradiction, assume this set is S-inconsistent. By Lemma 4.5, this means that $Cl^+(\Gamma) \cup \{+\neg B_i \mid i \in \omega\} \vdash^* +\neg A$. It follows by ($+\rightarrow$I.) that there is a finite set of $B_i$s (with all $i < n$, without loss of generality) such that $Cl^+(\Gamma) \vdash +((\bigwedge_{i<n}\neg B_i) \rightarrow \neg A)$. By Lemma 4.6, $\Gamma \vdash +(\Diamond A\rightarrow \bigvee_{i < n}\Diamond B_i)$. Since $+\Diamond A \in w$, this means that $+(\bigvee_{i < n}\Diamond B_i)\in w$. But we saw that for any $i<n$, $+\neg\Diamond B_i \in w$. Hence w is S-inconsistent. Contradiction.

Thus, there is a $v \in W$ with $+A \in v$ and $wRv$. By the induction hypothesis, $M,v \Vdash A$. Thus, $M,w \Vdash \Diamond A$.

Conversely, suppose $M,w \Vdash \Diamond A$.
Then there is a v, $wRv$, such that $M,v \Vdash A$. By the induction hypothesis, $+A \in v$. By definition of R, $+\Diamond A \in w$.

Let $w \in W$ be arbitrary. Without loss of generality, we can write $\Gamma$ with all formulae signed by $+$ (see the proof of Theorem 4.1). Since $\Gamma \subseteq w$, it follows that $M,w \Vdash \varphi$ for each $\varphi \in \tau[\Gamma]$.

It remains to show that $\langle W,R,V\rangle$ is an $\mathbf{S5}$ model. This follows from the fact that the $\mathbf{KT5}$ axioms are contained in $Cl^+(\Gamma)$. □

Intuitively, the reason why the worlds of the term model may denote inconsistent sets of formulae is as follows. The inference $+A \vdash +\Box A$ must be excluded when computing maximal consistent sets. That is, if we were taking consistent sets (instead of S-consistent sets) as the worlds of the term model, then whenever $+A \in w$ for some consistent w, it would also be the case that $+\Box A \in w$. But such sets are not useful as worlds in a canonical model, since they can only see themselves according to the definition of R in the canonical model construction. The notion of S-consistency takes care of this issue.

One may also wonder why the same argument does not work when we weaken the demand that $\Gamma$ be consistent in the statement of Theorem 4.7 to $\Gamma$ being merely S-consistent. The stronger assumption of consistency is used in the step for $+\Diamond A$ in the induction on the complexity of sentences in the proof of the theorem. For this step of the proof relies on the fact that $Cl^+(\Gamma)$ is closed under the derivability relation in the full $\mathsf{EML}$ calculus and in particular under ($+\Box$I.). But if we were to allow inconsistent S-consistent $\Gamma$, then closure under the full $\mathsf{EML}$ calculus would result in the trivial theory. Thus, only consistent $\Gamma$ have a canonical model (but the worlds in this canonical model may be inconsistent sets).

4.3 Some corollaries

The following is a noteworthy corollary of the completeness theorem.

Proposition 4.8. For any $\mathcal{L}_{ML}$-formula A and set of $\mathcal{L}_{ML}$-formulae $\Gamma$, $\{\Box B \mid B\in \Gamma\} \models^{\mathbf{S5}} \Box A$ iff $\{+B \mid B \in \Gamma\} \vdash +A$.

Proof. Immediate from Soundness and Completeness. □

That is, the $\mathsf{EML}$ logic of strong assertion is the logic that preserves $\mathbf{S5}$-validity. Schulz (2010, p. 389) noted the same result about Yalcin's (2007) informational consequence (IC). That is, $\Gamma \models^{\text{IC}} A$ iff $\{\Box B \mid B\in \Gamma\} \models^{\mathbf{S5}} \Box A$. Thus, we may conclude, the logic of strong assertion coincides with informational consequence.

There is a caveat, however. Yalcin's semantics includes a conditional $\rightarrow$ that is not a material conditional, but a version of a restricted strict conditional (Yalcin, 2007, p. 998). Clearly, then, Yalcin's informational consequence only preserves $\mathbf{S5}$-validity on the signature $\{\neg,\land,\Diamond\}$. Hence the $\mathsf{EML}$ logic of strong assertion only corresponds to the nonimplicative fragment of informational consequence. In either logic, however, the material conditional is definable from $\neg$ and $\land$.
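To see what this consequence relation validates, one can again use the toy $\mathbf{S5}$ search sketched in §4.1 (the helpers `And`, `Not`, `Box` and `satisfiable` are assumed to be in scope; the example is only an illustration). By Proposition 4.8, p entails must p in the logic of strong assertion just in case $\Box p \models^{\mathbf{S5}} \Box\Box p$, i.e. just in case $\Box p \land \neg\Box\Box p$ has no $\mathbf{S5}$ model; by contrast, $p \models^{\mathbf{S5}} \Box p$ fails outright.

```python
p = ('atom', 'p')

# Strong-assertion/informational consequence p |= must p, via Proposition 4.8:
# look for an S5 countermodel to Box p |=S5 Box Box p.
print(satisfiable(And(Box(p), Not(Box(Box(p)))), ['p']))   # None: no countermodel

# Plain S5 consequence p |=S5 Box p fails: p & not Box p has a two-world model.
print(satisfiable(And(p, Not(Box(p))), ['p']))             # a two-world model
```

The first check is just the derivable inference of epistemic strengthening, discussed in §6 below, seen model-theoretically; the bounded search is complete here because neither formula has more than two $\Diamond$-subformulae.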
Now, since in $\mathbf{S5}$ tautological truths are validities, Proposition 4.8 entails that the strongly asserted theorems of $\mathsf{EML}$ are exactly the $\mathbf{S5}$-tautologies. This establishes the converse direction of Theorem 3.5, yielding the following.

Theorem 4.9. $\models^{\mathbf{S5}} A$ iff $\vdash +A$.

From these results, we also obtain two corollaries about the logic of rejection in $\mathsf{EML}$.

Proposition 4.10. For any $\mathcal{L}_{ML}$-formula A and finite set of $\mathcal{L}_{ML}$-formulae $\Gamma$, $\{\Box B \mid B\in \Gamma\} \models^{\mathbf{S5}} \Box A$ iff $\ominus A\vdash \ominus\bigwedge_{B \in \Gamma} B$.

Proposition 4.11. For any $\mathcal{L}_{ML}$-formula A, $\Box A\models^{\mathbf{S5}} \bot$ iff $\vdash \ominus A$.

Proposition 4.11 mentions $\Box A$, whereas Theorem 4.9 does not. This difference is due to the fact that $\models^{\mathbf{S5}} A$ iff $\models^{\mathbf{S5}} \Box A$, whereas, in general, it is not the case that $A \models^{\mathbf{S5}} \bot$ iff $\Box A\models^{\mathbf{S5}} \bot$. A counterexample to the latter is obtained by letting A be $p \land \Diamond\neg p$.

But what is the relation of $\mathsf{EML}$ to the consequence relation $\models^{\mathbf{S5}}$ with premisses? We can approximate this relation from below by considering $\vdash^*$, the derivability relation of $\mathsf{EML}$ without rules for $\Diamond$-Elimination.

Proposition 4.12. Let $\Gamma$ be a set of $\mathcal{L}_{ML}$-formulae and A an $\mathcal{L}_{ML}$-formula. If $\{+B \mid B \in \Gamma\} \vdash^* +A$, then $\Gamma \models^{\mathbf{S5}} A$.

Proof. Since derivations in $\mathsf{EML}$ are finite, we may suppose that $\Gamma$ is finite. Assume that $\{+B \mid B \in \Gamma\} \vdash^* +A$. By ($+\rightarrow$I.), this means that $\vdash^* +\left(\left(\bigwedge_{B\in\Gamma} B\right) \rightarrow A\right)$. By Theorem 4.9, $\models^{\mathbf{S5}} \Box\left(\left(\bigwedge_{B\in\Gamma} B\right) \rightarrow A\right)$, which entails $\models^{\mathbf{S5}} (\bigwedge_{B\in\Gamma} B) \rightarrow A$ by Axiom T and modus ponens. □

However, we used $\Diamond$-Elimination rules to derive Axioms K and 5. Nonetheless, one can recapture $\mathbf{S5}$ using the derivability relation $\vdash^{**}$ that results from adding to $\vdash^*$ the following restricted rule of $\Diamond$-Elimination. The restriction serves to recover the Necessitation rule in the proof of the following theorem.

Theorem 4.13. Let $\Gamma$ be a set of $\mathcal{L}_{ML}$-formulae and A an $\mathcal{L}_{ML}$-formula. It is the case that $\{+B \mid B \in \Gamma\} \vdash^{**} +A$ iff $\Gamma \models^{\mathbf{S5}} A$.

Proof. We first prove the right-to-left direction. Inspection of the proofs in §3.4 reveals that $\vdash^{**}$ derives all $\mathbf{S5}$ axioms. Furthermore, the following derivation shows that $\vdash^{**}$ satisfies a version of the Necessitation rule, i.e.
that if $\emptyset \vdash^{**} +A$, then $\emptyset \vdash^{**} +\Box A$.

The application of ($\oplus\Diamond$E.*) is legitimate here, since the (SR$_2$) derivation uses no premisses or assumptions other than the one being discharged and $+A$ (which is a theorem by assumption).

Now, since $\Gamma \models^{\mathbf{S5}} A$ and $\mathbf{S5}$ is complete with respect to its model theory, there is a natural deduction proof of A that requires only the premisses $\Gamma$, the $\mathbf{S5}$ axioms, the Necessitation rule and modus ponens. By the above, this proof can be performed in $\vdash^{**}$ to derive $+A$ from $\{+B \mid B \in \Gamma\}$. This concludes the right-to-left direction.

For the left-to-right direction, the proof of Proposition 4.12 works, but the following step is nontrivial:

(*) Assume that $\{+B \mid B \in \Gamma\} \vdash^{**} +A$. By ($+\rightarrow$I.), this means that $\vdash^{**} +\left(\left(\bigwedge_{B\in\Gamma} B\right) \rightarrow A\right)$.

It is not clear that ($+\rightarrow$I.) can be applied here, since under $\vdash^{**}$ one may apply the rule ($\oplus\Diamond$E.*), and we have not established that this rule is permitted in ($+\rightarrow$I.). To see that the step (*) is nonetheless correct here, note that any occurrence of ($\oplus\Diamond$E.*) can only be in an (SR$_2$) subderivation that establishes $+X$ for some sentence X. Let R be the set of all $+X$ that are established this way anywhere in the proof. Then the following version of (*) is obviously correct, as all subderivations using ($\oplus\Diamond$E.*) can be replaced by a premiss from R.

Assume that $\{+B \mid B \in \Gamma\} \vdash^{**} +A$. By ($+\rightarrow$I.), this means that $R \vdash^{**} +\left(\left(\bigwedge_{B\in\Gamma} B\right) \rightarrow A\right)$.

But all members of R are theorems under $\vdash^{**}$, since they can be established by a Smileian reductio that does not require any side premisses. Thus the step (*) is correct. □

5 Yalcin sentences and their generalizations

5.1 Yalcin sentences

Yalcin (2007) famously observed that sentences of the form p and it might not be that p sound bad even when occurring in certain embedded environments, such as suppositions, in which Moore-paradoxical sentences (p and I don't believe that p) sound fine. The following proof in $\mathsf{EML}$ shows $+(p \land \Diamond\neg p)$ to be absurd.

This proof shows that uttering the sentence p and it might not be that p immediately commits one to having incompatible attitudes (namely, assent to p and dissent from p), which is absurd. Thus it also explains why suppose that p and might not p sounds odd, as to suppose p and it might not be that p is to suppose something manifestly absurd. This is the argument we give in Incurvati & Schlöder (2019), but we can clarify why precisely it is absurd to suppose p and it might not be that p.

To formalize supposition, we add a new primitive force-marker $\mathcal{S}$ to our language. In English, supposition refers both to an attitude and to its expression (Green, 2000, pp. 377–378). That is, the speech act of supposing that A expresses the attitude of supposing that A.
Accordingly, $\mathcal{S}A$ stands for the speech act of supposing A, performed through locutions such as suppose that A, and expresses the attitude of supposing A.

Following Stalnaker (2014), pp. 150–151, we take the supposition of A to consist in a proposal to add A to the common ground, but to do so temporarily. In supposing that A, one is not committing to A, but is probing what happens if one were to commit to A. That is, one is checking what the consequences would be of adding A to the common ground. For this process to work as desired, the internal logic of supposition must be the same as the logic of strong assertion. This sanctions the coordination principle ($\mathcal{S}$-Inference), which states that the suppositional consequences of a suppositional context mirror the strongly assertoric consequences of the corresponding strongly assertoric context.

This coordination principle immediately implies that suppose p and might not p is absurd. For as shown above, $+(p\land\Diamond\neg p)$ entails incompatible attitudes, which is absurd. By ($\mathcal{S}$-Inference), this means that supposing p and might not p is absurd as well, i.e. one derives $\bot$ from $\mathcal{S}(p\land\Diamond\neg p)$.

The absurdity of suppose p and might not p does not quite explain its infelicity, since not all absurd suppositions sound bad. One may felicitously suppose certain logical contradictions. For instance, someone not familiar with the derivability of Peirce's Law in classical logic may felicitously suppose its negation. However, to see that the negation of Peirce's Law is absurd requires a complex argument. By contrast, anyone grasping the meaning of and and not will immediately recognize the absurdity of, say, p and not p. This explains why p and not p sounds bad and continues to do so in embedded contexts. The same holds for p and might not p: its absurdity can be immediately inferred by applying the meaning-conferring rules of and, not and might. This absurdity is therefore manifest to anyone who grasps the meaning of these expressions, which explains why suppose p and might not p is infelicitous.

In addition to explaining the infelicity of Yalcin sentences under suppose, our account has the resources to explain why Moore sentences sound bad in ordinary contexts but cease to do so in suppositional ones. For while strongly asserting p and I do not believe that p is improper, it is not absurd in the sense of $+(p \land \neg Bp)$ entailing $\bot$. In our view, uttering a Moore sentence is infelicitous because it violates one of the preparatory conditions of strong assertion, e.g. that one should know or believe what one strongly asserts. But such preparatory conditions do not factor into the commitments undertaken by a strong assertion. Thus, the violation of these preparatory conditions does not proof-theoretically reduce to the speaker having both strongly asserted and rejected the same proposition. Hence, suppose that p and I do not believe that p is not predicted to be absurd. Since supposition and strong assertion have different preparatory conditions—in particular, one need not believe what one supposes—suppose that p and I do not believe that p is not predicted to be improper either.

The rule ($\mathcal{S}$-Inference) is sufficient to explain the relevant data about Yalcin sentences, but to give a complete account of supposition further rules may need to be added.
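As a semantic cross-check on the proof-theoretic argument, the toy $\mathbf{S5}$ search from §4.1 can confirm that the Yalcin sentence is absurd when strongly asserted: under the translation $\tau$, $+(p \land \Diamond\neg p)$ becomes $\Box(p \land \Diamond\neg p)$, which has no $\mathbf{S5}$ model, although $p \land \Diamond\neg p$ on its own is satisfiable. (The snippet below is only an illustration and assumes the helpers `And`, `Not`, `Dia`, `tau` and `satisfiable` from the earlier sketch.)

```python
p = ('atom', 'p')
yalcin = And(p, Dia(Not(p)))                 # p and it might not be that p

print(satisfiable(tau('+', yalcin), ['p']))  # None: Box(p & Dia ~p) has no S5 model
print(satisfiable(yalcin, ['p']))            # a two-world model: the unsigned formula is satisfiable
```

The contrast between the two checks matters later: §4.2 already noted that $\{+p, +\Diamond\neg p\}$ is S-consistent though inconsistent, and §6 returns to the related point that deriving $\bot$ from $+(p\land\Diamond\neg p)$ requires the $\Diamond$-Elimination rules.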
For our present project, there are more immediate concerns.

5.2 Generalized Yalcin sentences

Paolo Santorio (2017) observed that sentences like (3) seem to sound as bad as the original Yalcin sentences, and continue to do so when embedded under suppose.

(3) (If p, then it might be that q) and (if p, then $\neg$q).

We have seen that there are good reasons to treat Yalcin sentences as semantically contradictory. But then, Santorio argues, one should also treat sentences like (3) as semantically contradictory. However, it is difficult to find a conditional $>$ and a consequence relation $\models$ such that $p>\Diamond q \land p>\neg q \models \bot$. This is because if $>$ can be vacuous, then the incompatibility of $\Diamond q$ and $\neg q$ does not force us to conclude $\bot$, but only that p satisfies the vacuity-condition for $>$.

Santorio (2017) succeeds in defining such a $>$ and $\models$, but we want to offer a different diagnosis of the problem posed by (3), one which does not require adopting a revised notion of consequence. Observe that an utterance of (4), which does not contain an epistemic modal, already sounds odd.

(4) (If p, then q) and (if p, then $\neg$q).

The reason why (3) and (4) both sound odd seems to be similar. To wit, both utterances appear to prompt one to suppose that p, i.e. to consider what follows should p be the case—but to do so is absurd, given the information the utterances contain. That is, we will argue that (3) and (4) sound bad for the same reason that (5a) and (5b) sound bad—their antecedents cannot be supposed.

(5) a. If (p and not p), then q.
b. If (p and it might not be p), then q.

We propose to explain the infelicitousness of (3), (4) and (5) by using a notion of supposability based on our characterization of supposition $\mathcal{S}$. We have followed Stalnaker in taking the supposition of A to be a proposal to temporarily update the common ground with A. The supposability of A in a given context is the possibility of supposing A in that context.

Definition 2. Let A and C be $\mathcal{L}_{ML}$-formulae. We say that A is supposable in (context) C if $\mathcal{S}(C\land A) \not\vdash \bot$.

As we explain below, a strong assertion of an indicative conditional pragmatically presupposes the supposability of the conditional's antecedent in a context C that corresponds to the current common ground updated with the strong assertion's content. So, in the case of (3), the supposability of p is to be checked with respect to a context C that contains at least $(p> \neg q) \land (p> \Diamond q)$. But we have that $\mathcal{S}(p \land (p> \neg q) \land (p> \Diamond q)) \vdash \bot$ (assuming that $>$ satisfies modus ponens), so the presupposition fails.

Why should the indicative conditional have such a presupposition? Following Stalnaker (1978), the pragmatic presuppositions of a strong assertion include all the information that can be inferred from the performance of the strong assertion itself. In particular, a strong assertion presupposes that the context is such that its essential effect (i) changes the context in a nontrivial and well-defined way and (ii) everyone in the conversation can compute this change. Now, when one proposes to update the common ground with a conditional, one proposes to change it in such a way that if the antecedent is added to the common ground, its consequent should be added too.
Everyone must be able to compute what this change amounts to, as this is what they base their decision to accept or reject the update proposal on. Thus, everyone must be able to consider the common ground updated with the conditional and then temporarily (for the purpose of deliberation) add the conditional's antecedent and arrive at a well-defined result. This may be seen as a discursive analogue of the Ramsey test. Hence, strong assertions of conditional content pragmatically presuppose that the conditional's antecedent is supposable in the context of the current common ground updated with the assertion's content. But this presupposition cannot be met for the conditionals (3), (4) and (5). Many have claimed that conditionals presuppose, in some sense, that their antecedents are possible (Gillies, 2010; Mandelkern & Romoli, 2017; Crespo, Karawani, & Veltman, 2018). Our argument shows that supposability is the right way to specify what kind of possibility should be meant here, at least within the Stalnakerian framework.

One might reply that our pragmatic explanation is not general enough. For Santorio's generalized Yalcin sentence (3) also sounds bad when the conditionals it contains are read as subjunctives. And, the reply goes, the assertion of a subjunctive conditional does not have the supposability presupposition associated with the assertion of its indicative counterpart. While indicative conditionals change the common ground in a way that can be evaluated by provisionally updating the common ground with their antecedents, subjunctive conditionals change the common ground in a way that can be evaluated by provisionally revising the common ground with their antecedent (Stalnaker, 1968). Thus, the strong assertion of a subjunctive conditional does not presuppose that its antecedent be supposable in the common ground updated with the strong assertion's content, since some of this content might be revised in order to suppose the antecedent.

The reply is unsuccessful. Although the assertion of a subjunctive does not have the same supposability presupposition as the assertion of the corresponding indicative, it does have a supposability presupposition. And this presupposition cannot be met when Santorio's (3) is read as a subjunctive conditional. In particular, since not all information in the common ground is up for revision when considering a subjunctive antecedent, the assertion of a subjunctive conditional presupposes that its antecedent be supposable in C, where C is the nonrevisable part of the common ground updated with the conditional. Now consider the strong assertion (if it were p, then q) and (if it were p, then not q). Clearly, p is not supposable in the context $C'$ that results from updating C with the strong assertion, since $\mathcal{S}(C' \land p) \vdash \mathcal{S}(p\land\neg p)$, which entails $\bot$. The same goes for (if it were p, then q) and (if it were p, then it might be that not q): p is not supposable in the context $C'$ that results from updating C with this strong assertion, since $\mathcal{S}(p\land C') \vdash \mathcal{S}(p\land\Diamond\neg p)$, which entails $\bot$.

One may wonder how this pragmatic explanation of the infelicity of generalized Yalcin sentences can account for their infelicity when embedded under suppose. After all, the special problem raised by Yalcin sentences is that, unlike Moore sentences, they continue to sound bad under suppose and similar environments.
This, the usual story goes, precludes a pragmatic explanation of their infelicity similar to the one given for Moore sentences, since pragmatic inferences are suspended under suppose.

This story overplays the power of suppose to suspend pragmatic inferences: although the pragmatic inferences used to explain the infelicity of Moore sentences are suspended under suppose, it does not follow that all such inferences are. The pragmatic inference used to explain the infelicity of Moore sentences is suspended under supposition because while it seems a preparatory condition for strong assertion that one ought to believe what one strongly asserts, there is no analogous preparatory condition for supposition. By contrast, the pragmatic inference we outlined in the previous paragraphs clearly goes through under supposition. In particular, just as the actual update of the common ground with a conditional is only well-defined if its antecedent is supposable in the right context C, so is the temporary update with the same conditional only well-defined if its antecedent is supposable in C. That is, this supposability requirement—unlike the requirement that one ought to believe the content of the speech act—is shared by strong assertions and suppositions of conditionals alike.

There are some further generalized Yalcin sentences that we can explain using this suppositional strategy. Matthew Mandelkern (2019) observed that sentences like (6a) and (6b) seem to sound as bad as the original Yalcin sentences.

(6) #a. (p and it might not be that p) or (q and it might not be that q)
#b. might (p and it might not be that p)

However, on our account such sentences are not absurd: there are models of $\mathsf{EML}$ in which (7a) and (7b) hold and hence neither sentence entails $\bot$ (on the assumption that neither A nor B is a classical contradiction).

(7) a. $+((A \land \Diamond\neg A) \lor (B \land \Diamond\neg B))$.
b. $+\Diamond(A \land \Diamond\neg A)$.

Similarly to Santorio's cases, Mandelkern's cases can only be accounted for semantically by making substantial revisions to classical logic. Any attempt to add further rules to $\mathsf{EML}$ so that (7a) and (7b) entail $\bot$ would trivialize the epistemic possibility modal.

Theorem 5.1. Suppose that one of the following is the case.

(a) $+((A\land\Diamond\neg A)\lor(B\land\Diamond\neg B)) \vdash \bot$ for all $A, B$.
(b) $+\Diamond(A\land\Diamond\neg A) \vdash \bot$ for all A.

Then for any A, $+\Diamond A \vdash +A$.

Proof. Suppose that (a) is the case, letting A be p and B be $\neg p$. That is, we have that $+((p \land \Diamond\neg p) \lor (\neg p \land \Diamond p)) \vdash \bot$. By Smileian reductio, this means that $\vdash \ominus((p \land \Diamond\neg p) \lor (\neg p \land \Diamond p))$, which by ($\oplus\neg$I.) means that $\vdash \oplus\neg((p \land \Diamond\neg p) \lor (\neg p \land \Diamond p))$.

By De Morgan, $\neg((p \land \Diamond\neg p) \lor (\neg p \land \Diamond p))$ can be written as $(\neg p \lor \neg\Diamond\neg p) \land (p \lor \neg\Diamond p)$, which classically entails $\neg\Diamond\neg p \lor \neg\Diamond p$. This sentence can be rewritten as $\Diamond p \rightarrow \Box p$. Hence, (a) implies that $\vdash \oplus(\Diamond p \rightarrow \Box p)$.
But then, we can derive $+p$ from $+\Diamond p$ as follows.

For the second part, suppose that (b) is the case, letting A be $\neg p$. That is, we have that $+\Diamond(\neg p \land \Diamond p) \vdash \bot$. By Smileian reductio, this means that $\vdash \ominus\Diamond(\neg p \land \Diamond p)$. By ($\oplus\neg$I.), this is equivalent to $\vdash \oplus\neg\Diamond(\neg p \land \Diamond p)$. By Lemma 3.7, it follows that $\vdash +\neg(\neg p \land \Diamond p)$, which by Classicality is equivalent to $\vdash +(\Diamond p\rightarrow p)$. □

One might take issue with Lemma 3.7 here, but inspection of the proof of the lemma shows that it rests on well-motivated assumptions. Challenging the application of (Weak Inference) in the proof of (a) would not help, since the proof of (b) does not require this principle. Thus, we cannot semantically account for these generalized cases without giving up on classical logic. Mandelkern's (2019) account avoids these results by denying the universal validity of the relevant classically valid transformations (e.g. the application of De Morgan in the above proof).

Instead of making deep revisions to our semantics, we again provide a pragmatic explanation of the relevant data. We explain the infelicitousness of (6a) by taking the strong assertion of A or B to presuppose the supposability of A and the supposability of B. And we explain the infelicitousness of (6b) by taking the strong assertion of might A to pragmatically presuppose the supposability of A. The pragmatic inferences from A or B and might A to supposability are evinced by the felicitousness of sequences like the following.

(8) a. It is either p or q. So suppose that it is in fact the case that p.
b. It might be that p. So suppose that it is in fact the case that p.

At this point, one might suggest taking the inferences from A or B and might A to supposability to be not pragmatic, but semantic. In particular, one might identify the meaning of might with supposability and the meaning of A or B with might A and might B. However, supposition can be counterfactual, so having asserted not A one may go on to suppose that A, possibly just for the sake of argument. By contrast, having asserted not A, it is a mistake to also assert might A. Thus might cannot be identified with supposability.

Pragmatically explaining the data in (6) also has an empirical advantage over Mandelkern's (2019) own semantic approach. He claims that the infelicitousness of (6a) is explained by the fact that Yalcin sentences are classical contradictions and that disjunctions of classical contradictions are themselves classical contradictions. But now consider (9).

(9) (p and it might not be that p) or q.

If q is true, then according to the usual truth-functional meaning of disjunction (which Mandelkern does not dispute), (9) is true, since it has a true disjunct. However, (9) sounds odd. Thus Mandelkern would seem to need some further mechanism to explain the oddity of (9), e.g. our pragmatic presupposition or another principle entailing that disjunctions one of whose disjuncts is a classical contradiction sound generally bad. But then, any such mechanism would also account for the infelicitousness of (6). Hence, the more parsimonious approach is to stick with classical consequence and explain both (6) and (9) pragmatically, as we have done.
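The claim above that there are models of $\mathsf{EML}$ in which (7a) and (7b) hold can also be checked concretely. Under the translation $\tau$ of §4, and taking A and B to be the atoms p and q, (7a) and (7b) become $\Box((p\land\Diamond\neg p)\lor(q\land\Diamond\neg q))$ and $\Box\Diamond(p\land\Diamond\neg p)$, and a two-world model verifies both. The snippet is only an illustration and assumes the helpers `And`, `Not`, `Dia`, `Box` and `holds` from the sketch in §4.1, plus a disjunction defined from them.

```python
def Or(a, b): return Not(And(Not(a), Not(b)))    # classical disjunction from not/and

p, q = ('atom', 'p'), ('atom', 'q')
model = [{'p'}, {'q'}]                            # two worlds: only p holds at one, only q at the other

seven_a = Box(Or(And(p, Dia(Not(p))), And(q, Dia(Not(q)))))   # tau of (7a) with A = p, B = q
seven_b = Box(Dia(And(p, Dia(Not(p)))))                       # tau of (7b) with A = p

print(all(holds(seven_a, w, model) for w in range(len(model))))   # True
print(all(holds(seven_b, w, model) for w in range(len(model))))   # True
```

This is the model-theoretic side of the situation described by Theorem 5.1: ruling (7a) or (7b) out proof-theoretically would trivialize the possibility modal.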
6 Consequence, credence and commitment

Moritz Schulz (2010) presented an objection to informational consequence (and hence $\mathsf{EML}$'s notion of consequence) as an account of logical consequence. Schulz argues that, in situations of uncertainty, it may be rational to assign a high credence to A but a low credence to it must be the case that A. This is because one's evidence for A need not rule out not A and hence need not be evidence for must A. Consider, for instance, a situation in which one sees that the lights are on. On the basis of this evidence, one might (rationally) assign a high credence to (10a). However, says Schulz, one cannot rule out that they forgot to switch off the lights. Hence one must assign a low credence to (10b).

(10) a. They are at home.
b. They must be at home.

However, in $\mathsf{EML}$, $+\Box A$ is derivable from $+A$ and hence (10b) follows from (10a), at least assuming the obvious formalization of these sentences. Thus, Schulz continues, $\mathsf{EML}$-consequence, like informational consequence,

clearly violates a reasonable constraint on logical consequence: If a rational and logically omniscient subject's credence function P is such that $P(\phi) = t$, and $\phi \models \psi$, then $P(\psi) \geq t$, i.e. if we assign to a statement $\phi$ subjective probability t, and we are certain that $\psi$ follows logically from $\phi$, then we should assign to $\psi$ a subjective probability at least as high as t. After all, we know that the former cannot hold without the latter. (Schulz, 2010, p. 388)

Schulz concludes that epistemic strengthening (the inference from A to must A) is invalid.

More recently, Justin Bledin and Tamar Lando (2018) have considered cases similar to (10). One goes as follows. It is the run-up to the 1980 US elections and, according to the polls, Reagan will win by a landslide, Carter will come second and Anderson will be third by a wide margin. On the basis of this evidence, you come to believe (11a). But given that you cannot rule out that Carter will win, it would seem wrong for you to believe (11b) on the basis of the same evidence.

(11) a. Carter will not win the election.
b. It is not the case that Carter might win the election.

However, in $\mathsf{EML}$, $+\neg\Diamond A$ is derivable from $+\neg A$—an inference step also known as Łukasiewicz's principle. Unlike Schulz, however, Bledin and Lando do not conclude that the relevant inference should be regarded as invalid. Rather, they say that philosophers face a choice between rejecting Łukasiewicz's principle, rejecting Justification with Risk—i.e. the claim that there are cases in which one can justifiably believe $\neg A$ but not $\neg\Diamond A$—and rejecting Single-Premiss Closure—i.e. the principle that if one is justified in believing $\phi$ and one comes to believe that $\psi$ by competently deducing $\psi$ from $\phi$, then one is justified in believing $\psi$ (a nonprobabilistic variant of Schulz's constraint).

Bledin and Lando present some arguments to the effect that Łukasiewicz's principle and Justification with Risk hold. A philosopher persuaded by those arguments, they conclude, needs to give up Single-Premiss Closure.
However, Bledin and Lando do not offer any positive reason for thinking that Single-Premiss Closure does not hold.

There is a familiar strategy for dealing with this sort of case. According to this strategy, we should give up Justification with Risk (or its equivalent for epistemic strengthening) and explain away the troublesome cases by appealing to the idea that, when the possibility of error is made salient, this brings about a change of context (DeRose, 1991) or of the standards of precision in play (Moss, 2019). Thus, in the Schulz case (10), one should assign a high credence to they are at home because one sees that the lights are on. When the possibility that they forgot to switch off the lights is raised, one should not assign a high credence to they must be at home. But then, in those circumstances, one should not assign a high credence to they are at home either.

However, under Schulz's assumptions that epistemic modals can be assigned probabilities and that probabilities obey the classical probability calculus, one can show that this response is inadequate. In particular, it follows from these assumptions that Schulz's constraint on logical consequence is incompatible with any logic of epistemic modality which treats not p and it might be that p as a contradiction. For if $\neg p \land \Diamond p \models \bot$, then according to Schulz's constraint, $P(\neg p \land \Diamond p) \leq P(\bot) = 0$. Since probabilities are nonnegative, $P(\neg p \land \Diamond p) = 0$. Thus, if one believes that it might be that p, i.e. $P(\Diamond p) = 1$, it follows that $P(\neg p) = 0$, i.e. $P(p) = 1$, by the law of total probability.

Schulz himself (2010, §3) favors a pragmatic explanation of Yalcin sentences. According to this explanation, when you strongly assert A, you rule out all not-A worlds. Thus, in it is raining and it might not be raining, the domain of the epistemic modal in the second conjunct is restricted to rain-worlds. So there is no way, once you have strongly asserted that it is raining, that you can also strongly assert that it might not be. However, this explanation predicts that it might not be raining and it is raining should be felicitous, which does not appear to be the case.

So what are the options for the defender of informational consequence, who wants to insist that Yalcin sentences are, indeed, contradictions? She could try to develop a nonclassical probability calculus (see Williams, 2016 for a survey). Or she could reject Schulz's constraint on logical consequence and hence, arguably, also Single-Premiss Closure. We explore the former option in ongoing work. Here we exploit features of $\mathsf{EML}$ and our proof-theoretic approach to show that the prospects for the latter option are brighter than they might appear at first sight.

Bledin & Lando (2018, p. 19) argue that, even if they reject the principle of Single-Premiss Closure, defenders of informational consequence can still accept the principle restricted to nonmodal formulae. However, there are inferences involving epistemic modals that are compatible with Schulz's constraint and hence satisfy Single-Premiss Closure. A case in point is epistemic weakening (that is, the inference from must A to A). Epistemic weakening satisfies Schulz's constraint: the credence rationally assigned to A can never be lower than the credence assigned to must A. For, if Schulz is correct, one's evidence for must A is evidence that rules out not A, and evidence that rules out not A is also evidence for A.
Schulz himself (2010, §3) favors a pragmatic explanation of Yalcin sentences. According to this explanation, when you strongly assert A, you rule out all not-A worlds. Thus, in it is raining and it might not be raining, the domain of the epistemic modal in the second conjunct is restricted to rain-worlds. So there is no way, once you have strongly asserted that it is raining, that you can also strongly assert that it might not be. However, this explanation predicts that it might not be raining and it is raining should be felicitous, which does not appear to be the case.

So what are the options for the defender of informational consequence, who wants to insist that Yalcin sentences are, indeed, contradictions? She could try to develop a nonclassical probability calculus (see Williams, 2016, for a survey). Or she could reject Schulz's constraint on logical consequence and hence, arguably, also Single-Premiss Closure. We explore the former option in ongoing work. Here we exploit features of $\mathsf{EML}$ and our proof-theoretic approach to show that the prospects for the latter option are brighter than they might appear at first sight.

Bledin & Lando (2018, p. 19) argue that, even if they reject the principle of Single-Premiss Closure, defenders of informational consequence can still accept the principle restricted to nonmodal formulae. However, there are inferences involving epistemic modals that are compatible with Schulz's constraint and hence satisfy Single-Premiss Closure. A case in point is epistemic weakening (that is, the inference from must A to A). Epistemic weakening satisfies Schulz's constraint: the credence rationally assigned to A can never be lower than the credence assigned to must A. For, if Schulz is correct, one's evidence for must A is evidence that rules out not A, and evidence that rules out not A is also evidence for A. Thus, the portion of informational consequence that respects Schulz's constraint is not exhausted by its nonmodal fragment.

Our proof-theoretic account allows us to isolate an evidence-preserving fragment of informational consequence that validates more inferences than merely the nonmodal ones. This is the fragment $\vdash^*$ of $\mathsf{EML}$-consequence that one obtains by removing the $\Diamond$-Elimination rules from $\mathsf{EML}$. By inspecting the proofs of epistemic strengthening and epistemic weakening, one finds that the former is not derivable in $\vdash^*$, whereas the latter is. The rules for $\Diamond$-Elimination are also required to derive that $+(p \land \Diamond\neg p) \vdash \bot$, which we demonstrated to be incompatible with Schulz's constraint as well (again, assuming the classical probability calculus).

As we note in Incurvati & Schlöder (2019, pp. 760–762), the use of $\Diamond$-Elimination means that there may be a loss of specificity when going from the premisses of an inference to the conclusion. That is to say, we can go from specific premisses to an unspecific conclusion, which may result in a loss of evidence. This is the reason why we restricted the subderivation in (Weak Inference) to cases in which the $\Diamond$-Elimination rules are not applied. This suggests that while $\mathsf{EML}$ does not preserve evidence, the fragment $\vdash^*$ does.

But if inference in $\mathsf{EML}$ does not preserve evidence, why think that this is a suitable notion of inference at all? We contend that even if inference in $\mathsf{EML}$ does not preserve evidence, it does preserve commitment. As mentioned in §2, a derivation in $\mathsf{EML}$ computes which attitudes someone is committed to in virtue of their having the attitudes in the premisses. In speech act terms, a derivation in $\mathsf{EML}$ computes what someone is committed to accepting in the common ground given the public stances that they have taken on the acceptability of certain propositions into the common ground. This is compatible with consequence in $\mathsf{EML}$ not preserving evidence and hence with the failure of Single-Premiss Closure.

7 Conclusion and further work

We have described a general framework for the proof-theoretic study of epistemic modality. We have restricted our presentation to the interaction of epistemic modality with the classical Boolean connectives. In ongoing work, we apply the strategy of placing principled proof-theoretic constraints on the rules governing epistemic modal operators to problems going beyond these connectives.

Several authors have noted that principles like classical reductio fail in the presence of epistemic vocabulary (e.g. Bledin, 2014): it might not be raining and it is raining sound contradictory, but it is mistaken to apply classical reductio to derive it is not raining from it might not be raining. In $\mathsf{EML}$, reductio is only applicable when certain proof-theoretic restrictions are met, which gives one the tools to account for these problems.
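On the illustrative information-state sketch given in §6 (again, our gloss rather than the official model theory of $\mathsf{EML}$), the failure of unrestricted reductio is easy to exhibit. Where $w_p$ is a world at which it is raining and $w_{\neg p}$ one at which it is not,

$$
p,\ \Diamond\neg p \models \bot \qquad\text{but}\qquad \Diamond\neg p \not\models \neg p,
$$

since the state $\{w_p, w_{\neg p}\}$ supports $\Diamond\neg p$ (some world compatible with the state is a not-raining world) without supporting $\neg p$. Unrestricted reductio would take the first fact to license discharging the assumption $p$ and concluding $\neg p$ from $\Diamond\neg p$, which is exactly the defective inference just described.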
But the problems of epistemic modality are not confined to propositional logic. In §5, we outlined some possible extensions of $\mathsf{EML}$ to the study of indicative conditionals and the concept of supposition. In addition, there are well-known puzzles regarding the interaction of quantifiers and epistemic modals (Aloni, 2001, 2005), recently brought to renewed attention by Ninan (2018). Naïve applications of first-order logic to epistemic modality license defective inferences like every card might be a losing card; therefore, the winning card might be a losing card. $\mathsf{EML}$ opens up the strategy of blocking such inferences via proof-theoretic restrictions on the use of epistemic modals under quantification.
