
LOGICS OF LOGICS

Published online by Cambridge University Press:  22 August 2025

MICHAEL BEVAN*
Affiliation: Philosophy Department, University of Colorado Boulder, Boulder, CO 80309, USA (https://ror.org/02ttsq026)

Abstract

We investigate a system of modal semantics in which $\Box \phi$ is true if and only if $\phi$ is entailed by a designated set of formulas according to a designated logic. We prove some strong completeness results and establish a natural connection to normal modal logics via an application of some lattice-theoretic fixpoint theorems. We raise a difficult problem that arises naturally in this setting about logics which are identical with their own ‘meta-logic’, and draw a surprising connection to recent work by Andrew Bacon and Kit Fine on McKinsey’s substitutional modal semantics.

Information

Type
Research Article
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of The Association for Symbolic Logic

We give a semantics for modal logic in which $\Box \phi$ is true when $\phi$ is entailed by a designated set of formulas according to a designated logic. This is a logical interpretation of the box in the vein of Carnap [3] and McKinsey [5]. On Carnap’s semantics, $\Box \phi$ is true when $\phi$ is true on all admissible interpretations. In McKinsey’s, $\Box \phi$ is true when every admissible substitution instance of $\phi$ is true. Such systems of semantics are interesting for two reasons. First, they shed light on logically important properties of formulas like semantic validity and substitutional validity. Second, they shed light on theories of necessity which would reduce modality to such properties. The second reason seems to have motivated Carnap and McKinsey. Carnap aimed to substantiate a view on which necessity is semantic validity. McKinsey aimed to substantiate a view on which necessity is substitutional validity. The first reason is motivation enough, however. Even if we do not share the ambitions of a modal reductionist, the modal behavior of logical truth is an interesting topic which it falls to logicians to investigate. Thus, even if we reject equating necessity and semantic validity, we can accept Carnap’s argument that the logic of semantic validity is $\mathsf{S5}$. Even if we reject equating necessity and substitutional validity, we can accept McKinsey’s argument that the logic of substitutional validity is a normal extension of $\mathsf{S4M}$.

Likewise, the present semantics is interesting for two reasons. It sheds general light on the characteristics of logics of logics, and it sheds light on theories of necessity which would reduce modality to derivability from given assumptions within a given logic—Sider’s Humeanism [6] being an example. Even if we dismiss such a reduction, logics of logics are of interest in their own right. In §1, I discuss the semantics in a pure way, establishing results relating to completeness, and pointing out a connection to recent work on McKinsey’s semantics by Bacon and Fine [2]. In §2, I illustrate an application of the semantics by discussing Sider’s views, raising some objections and suggesting fixes, using some ideas developed in the first part.Footnote 1

1 The framework and connections.

1.1 Basic notions.

Formulas are constructed from a denumerable set of sentence letters At $=\{p, q, r, \ldots\}$ and the constant $\bot$ by the connectives $\to$ and $\Box$; $\neg\phi$ abbreviates $(\phi\to\bot)$ and $\top$ abbreviates $\neg\bot$. A logic is a set $\lambda$ of formulas that contains all classical tautologies, and which is closed under modus ponens and uniform substitution. The theorems of a logic are its elements. The smallest logic is called $PC$, and just consists of all uniform substitution instances of all classical tautologies. Every logic defines a consequence relation: $\Gamma\vdash_\lambda\phi$ if and only if $\phi\in\lambda$ or $\exists\psi_1, \ldots, \psi_n\in\Gamma$ such that $(\psi_1\to(\psi_2\to\ldots(\psi_n\to\phi)\ldots))\in\lambda$. We abbreviate $\varnothing\vdash_\lambda\phi$ to $\vdash_\lambda\phi$. A set of formulas $\Gamma$ is $\lambda$-consistent when $\Gamma\not\vdash_\lambda\bot$, and maximal $\lambda$-consistent when it additionally has no proper $\lambda$-consistent extension. Where $\Gamma$ is a set of formulas, we write $\Box\Gamma=\{\Box\phi:\phi\in\Gamma\}$ and $\Box^{-}\Gamma=\{\phi:\Box\phi\in\Gamma\}$. Note that for all $\Gamma$ we have $\Box\Box^{-}\Gamma\subseteq\Gamma$. Where $X_1, \ldots, X_n$ are formulas, sets of formulas, or rules of inference, we write $X_1+\cdots+X_n$ to denote the smallest logic that contains or includes or admits every $X_i$. I often use serif font for named formulas and rules and sans serif font for named logics. An interpretation is a triple $I = (M, \Gamma, \lambda)$, where M is a set of sentence letters, $\Gamma$ is a set of formulas, and $\lambda$ is a logic. Satisfaction is defined inductively:

$$ \begin{align*} \begin{array}{l} \displaystyle M, \Gamma, \lambda\not\vDash \bot\\ \displaystyle M, \Gamma, \lambda\vDash p \text{ if and only if } p\in M \\ \displaystyle M, \Gamma, \lambda\vDash \phi\to\psi \text{ if and only if } M, \Gamma, \lambda\not\vDash \phi \text{ or } M, \Gamma, \lambda\vDash\psi\\ \displaystyle M, \Gamma, \lambda\vDash \Box\phi \text{ if and only if } \Gamma\vdash_\lambda\phi. \end{array} \end{align*} $$
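To make the semantic clauses concrete, here is a minimal computational sketch of satisfaction. It assumes formulas are encoded as nested tuples and that derivability in the logic of evaluation is supplied as an oracle; the encoding and the names `satisfies`, `derives`, and `toy_derives` are illustrative choices rather than anything fixed by the definitions above.

```python
# A minimal sketch of the satisfaction relation M, Gamma, lambda |= phi.
# Formulas are nested tuples: 'bot', a sentence letter such as 'p',
# ('->', phi, psi), or ('box', phi).  Derivability in the logic of
# evaluation is passed in as an oracle `derives(assumptions, phi)`.
from typing import Callable, FrozenSet, Tuple, Union

Formula = Union[str, Tuple]
Oracle = Callable[[FrozenSet[Formula], Formula], bool]

def satisfies(M: FrozenSet[str], assumptions: FrozenSet[Formula],
              derives: Oracle, phi: Formula) -> bool:
    if phi == 'bot':
        return False
    if isinstance(phi, str):                      # sentence letter
        return phi in M
    if phi[0] == '->':
        return (not satisfies(M, assumptions, derives, phi[1])
                or satisfies(M, assumptions, derives, phi[2]))
    if phi[0] == 'box':                           # Gamma |-_lambda phi
        return derives(assumptions, phi[1])
    raise ValueError(f"unrecognised formula: {phi!r}")

# A toy oracle treating derivability as bare membership in the assumption
# set -- a stand-in for illustration, not a real consequence relation.
toy_derives: Oracle = lambda gamma, phi: phi in gamma

I = (frozenset({'p'}), frozenset({('->', 'p', 'q')}), toy_derives)
print(satisfies(*I, ('box', ('->', 'p', 'q'))))   # True
print(satisfies(*I, ('box', 'q')))                # False
```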

In an interpretation $(M, \Gamma , \lambda )$ , $\Gamma $ is called the set of assumptions and $\lambda $ is called the logic of evaluation. We write $I\vDash \Delta $ iff $I\vDash \psi $ for all $\psi $ in $\Delta $ . Then $\mathfrak {I}_\Delta := \{I:I\vDash \Delta \}$ . For $\mathfrak {I}$ a set of interpretations, write $\Gamma \vDash _{\mathfrak {I}}\phi $ when for all I in $\mathfrak {I}$ , if $I\vDash \Gamma $ then $I\vDash \phi $ . We abbreviate $\varnothing \vDash _{\mathfrak {I}}\phi $ to $\vDash _{\mathfrak {I}}\phi $ . When $\mathfrak {I}$ is the set of all interpretations, we abbreviate to $\Gamma \vDash \phi $ and $\vDash \phi $ . The canonical model of $\Gamma $ with respect to $\lambda $ is:

$$ \begin{align*} I[\Gamma, \lambda] := (\text{At}\cap \Gamma, \Box^{-}\Gamma, \lambda). \end{align*} $$
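For instance, if $\Gamma = \{p, \Box q, \Box\Box r\}$, then $I[\Gamma, \lambda] = (\{p\}, \{q, \Box r\}, \lambda)$: the sentence letters in $\Gamma$ supply the first coordinate, and the formulas whose necessitations lie in $\Gamma$ supply the assumptions.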

1.2 General completeness and definability

We now establish a general completeness theorem, relying on two lemmas about canonical models. In these results, we make use of a generalised notion of normality. Namely, we say that a logic $\sigma$ is normal for a logic $\lambda$ if and only if $\Gamma\vdash_\lambda\phi$ always implies $\Box\Gamma\vdash_\sigma\Box\phi$. A logic is normal in the usual sense when it is normal for itself. The two lemmas and the resulting completeness theorem are given below.
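As a small illustration of the new notion: since $p, p\to q\vdash_{PC} q$, any logic $\sigma$ that is normal for $PC$ must have $\Box p, \Box(p\to q)\vdash_\sigma\Box q$. Any $\sigma$ containing all substitution instances of the K axiom meets this particular demand, since the instance $\Box(p\to q)\to(\Box p\to\Box q)$ itself witnesses the derivability.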

Proposition 1. If $\sigma $ is normal for $\lambda $ and $\Gamma $ is maximal $\sigma $ -consistent, $I[\Gamma , \lambda ]\vDash \Gamma $ .

Proof. By induction on formula complexity, we show that $I[\Gamma, \lambda]\vDash \phi$ iff $\phi \in \Gamma$. We give the induction step for $\Box$. If $I[\Gamma, \lambda]\vDash \Box \psi$ then $\Box^{-}\Gamma \vdash_\lambda \psi$, so by normality $\Box \Box^{-}\Gamma \vdash_{\sigma}\Box \psi$. Hence $\Gamma \vdash_{\sigma}\Box \psi$, so that $\Box \psi \in \Gamma$. Conversely, if $\Box \psi \in \Gamma$ then $\psi \in \Box^{-}\Gamma$, so that $I[\Gamma, \lambda]\vDash \Box \psi$.

Proposition 2. Suppose that $\sigma $ is normal for $\lambda $ and $I[\Delta , \lambda ]\in \mathfrak {I}$ for every maximal ${\sigma }$ -consistent set $\Delta $ . Then $\Gamma \vDash _{\mathfrak {I}}\phi $ implies $\Gamma \vdash _\sigma \phi $ for all $\Gamma $ and $\phi $ .

Proof. If $\Gamma \not\vdash_{\sigma}\phi$ then by Lindenbaum’s lemma we have $\Gamma \cup \{\neg \phi\}\subseteq \Delta$ for some maximal $\sigma$-consistent set $\Delta$, and by Proposition 1, $I[\Delta, \lambda]\vDash \Delta$. Then since $I[\Delta, \lambda]\in \mathfrak{I}$ and $I[\Delta, \lambda]\vDash \Gamma\cup\{\neg\phi\}$, we have $\Gamma \not\vDash_{\mathfrak{I}}\phi$. By contraposition, $\Gamma \vDash_{\mathfrak{I}}\phi$ implies $\Gamma \vdash_\sigma \phi$ for all $\Gamma$ and $\phi$.

Theorem 1. $\Gamma \vdash _\sigma \phi \iff \Gamma \vDash _{\mathfrak {I}_\sigma }\phi $ if $\sigma $ is normal for some $\lambda $ .

Proof. ( $\Rightarrow $ ) By the definition of $\mathfrak {I}_\sigma $ . ( $\Leftarrow $ ) For $\Delta $ maximally $\sigma $ -consistent, the canonical model $I[\Delta , \lambda ]\in \mathfrak {I}_\sigma $ because $I[\Delta , \lambda ]\vDash \Delta $ by Proposition 1, and $\sigma \subseteq \Delta $ by the properties of maximal consistent sets. So by Proposition 2, $\Gamma \vDash _{\mathfrak {I}_\sigma }\phi $ implies $\Gamma \vdash _\sigma \phi $ for all $\Gamma ,\phi $ .

For many logics $\sigma$, this theorem allows us to find a class of interpretations $\mathfrak{I}_\sigma$ for which $\sigma$ is sound and complete. But we often start from a class of interpretations $\mathfrak{I}$ not characterised by reference to a particular logic, and we want to find a logic that is sound and complete with respect to $\vDash_{\mathfrak{I}}$. Our general completeness theorem is applicable to such cases. Call $\mathfrak{I}$ definable iff $\mathfrak{I} = \mathfrak{I}_\sigma$ for some logic $\sigma$. If we can show that a class of interpretations is definable, and identify a logic by which it is defined, completeness follows by the above theorem. This is possible in many natural cases. For example,

$$ \begin{align*} \begin{array}{lll} \text{INT} &= &\text{The class of all interpretations.}\\ \text{CON} &= &\{(M, \Gamma, \lambda):\forall \phi \text{ if } \Gamma\vdash_\lambda\phi \text{ then } \Gamma\not\vdash_\lambda\neg \phi\}\\ \text{FAC} &= &\{(M, \Gamma, \lambda):\forall\phi\text{ if } \Gamma\vdash_\lambda\phi\text{ then }M, \Gamma, \lambda\vDash \phi \}\\ \Box\text{CL} &= &\{(M, \Gamma, \lambda):\forall\phi\text{ if } \Gamma\vdash_\lambda\phi \text{ then }\Gamma\vdash_\lambda\Box\phi\}\\ \Diamond\text{CL} &= &\{(M, \Gamma, \lambda):\forall\phi\text{ if } \Gamma\not\vdash_\lambda\phi \text{ then }\Gamma\vdash_\lambda\neg\Box\phi\} \end{array} \end{align*} $$
$$ \begin{align*} \begin{array}{lll lll} \Box\text{CF} &= &\Box\text{CL } \cap \text{ FAC } \quad &\text{DCF} &=&\Box\text{CL } \cap \Diamond\text{CL } \cap\text{FAC.}\\ \end{array} \end{align*} $$
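As a small illustration of how these classes come apart, consider $I = (\{p\}, \{p\}, PC)$. Since $\{p\}$ is consistent and $p$ is true in $I$, one can check that $I$ lies in CON and FAC. But $I$ lies in neither $\Box\text{CL}$ nor $\Diamond\text{CL}$: we have $\{p\}\vdash_{PC} p$ but $\{p\}\not\vdash_{PC}\Box p$, and $\{p\}\not\vdash_{PC} q$ but $\{p\}\not\vdash_{PC}\neg\Box q$.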

To find logics that define these classes of interpretations, we recall the familiar axioms

$$ \begin{align*} \begin{array}{ll ll} \text{K} &\Box(p\to q)\to (\Box p\to \Box q) &\text{D} &\Box p\to \neg \Box \neg p\\ \text{T} &\Box p\to p &4 &\Box p\to \Box\Box p\\ 5 &\neg\Box p\to \Box\neg \Box p.&&\\ \end{array} \end{align*} $$

The smallest logic normal for some logic is the smallest logic normal for $PC$, which I call $\mathsf{K}_0$. We note that being normal for some logic or other is equivalent to being normal for the smallest logic ($PC$), and that any extension of a logic normal for $PC$ is itself normal for $PC$. Therefore, for logics, the property of ‘being normal for some logic’ amounts to being an extension of $\mathsf{K}_0$. $\mathsf{K}_0$ is also the smallest set of formulas closed under modus ponens, containing all substitution instances of K, and containing $\Box\phi$ for all $\phi$ in $PC$. By concatenating the names of other axioms and adding a ‘$0$’, we name the smallest logic extending $\mathsf{K}_0$ that contains the named axioms. So $\mathsf{KD}_0 = \mathsf{K}_0 + \text{D}$, and $\mathsf{KD45}_0 = \mathsf{K}_0 + \text{D} + 4 + 5$. Most of these logics are not normal, except for $\mathsf{KT45_0}=\mathsf{S5}$ (see [7, p. 110]).Footnote 2 We write $\mathsf{D_0}:=\mathsf{KD}_0$, $\mathsf{T_0}:=\mathsf{KT}_0$, and $\mathsf{S4_0}:=\mathsf{KT4}_0$. One obtains the expected normal counterpart for these logics by adding necessitation—for example, $\mathsf{S4_0} + \text{Necessitation} = \mathsf{S4}$. We make use of the fact that each of the above logics is the smallest set of formulas closed under modus ponens and containing all substitution instances of their axioms.

Proposition 3.

$$ \begin{align*} \begin{array}{llllllll} \displaystyle\mathrm{i.} &\displaystyle\text{INT} = \mathfrak{I}_{\mathsf{K}_0} &\displaystyle\mathrm{ii.} &\displaystyle\text{CON} = \mathfrak{I}_{\mathsf{D}_0} &\displaystyle\mathrm{iii.} &\displaystyle\text{FAC} = \mathfrak{I}_{\mathsf{T}_0} &\displaystyle\mathrm{iv.} &\displaystyle\Box \text{CL} = \mathfrak{I}_{\mathsf{K4}_0}\\ \displaystyle\mathrm{v.} &\displaystyle\Diamond \text{CL} = \mathfrak{I}_{\mathsf{K5}_0} &\displaystyle\mathrm{vi.} &\displaystyle\Box \text{CF} = \mathfrak{I}_{\mathsf{S4}_0} &\displaystyle\mathrm{vii.} &\displaystyle\text{DCF} = \mathfrak{I}_{\mathsf{S5}}. &\\ \end{array} \end{align*} $$

Proof. [i.] It is straightforward that $\vDash \Box(\phi \to \psi)\to (\Box \phi \to \Box \psi)$ for all $\phi, \psi$ and that $\vDash \Box \phi$ for all $\phi$ in $PC$. From this, it follows that $I\vDash \mathsf{K}_0$ for all I, so that $\text{INT}\subseteq \mathfrak{I}_{\mathsf{K}_0}$. Therefore, since $\mathfrak{I}_{\mathsf{K}_0}\subseteq \text{INT}$, we have $\text{INT} = \mathfrak{I}_{\mathsf{K}_0}$. [ii-v.] By the semantic clauses, we can see that (ii) an interpretation I is in CON iff $I\vDash \Box \phi \to \neg \Box \neg \phi$ for all $\phi$; (iii) an interpretation I is in FAC if and only if $I\vDash \Box \phi \to \phi$ for all $\phi$; (iv) an interpretation I is in $\Box$CL if and only if $I\vDash \Box \phi \to \Box \Box \phi$ for all $\phi$; and (v) an interpretation I is in $\Diamond$CL if and only if $I\vDash \neg \Box \phi \to \Box \neg \Box \phi$ for all $\phi$. Since $I\vDash \mathsf{K_0}$ for all I, this means that CON $= \mathfrak{I}_{\mathsf{D}_0}$, FAC $= \{I:I\vDash \mathsf{T_0}\} = \mathfrak{I}_{\mathsf{T}_0}$, etc. [vi-vii.] By combinations of the previous arguments.

Corollary 1. (By Theorem 1, Proposition 3) For all $\Gamma $ and $\phi $ ,

$$ \begin{align*} \begin{array}{clcl} \displaystyle\mathrm{i.} &\displaystyle\Gamma \vDash\phi \iff \Gamma\vdash_{\mathsf{K}_0}\phi &\displaystyle\mathrm{ii.} &\displaystyle\Gamma \vDash_{CON}\phi \iff \Gamma\vdash_{\mathsf{D}_0}\phi\\ \displaystyle\mathrm{iii.} &\displaystyle\Gamma \vDash_{FAC}\phi \iff \Gamma\vdash_{\mathsf{T}_0}\phi &\displaystyle\mathrm{iv.} &\displaystyle\Gamma \vDash_{\Box CL}\phi \iff \Gamma\vdash_{\mathsf{K4}_0}\phi\\ \displaystyle\mathrm{v.} &\displaystyle\Gamma \vDash_{\Diamond CL}\phi \iff \Gamma\vdash_{\mathsf{K5}_0}\phi &\displaystyle\mathrm{vi.} &\displaystyle\Gamma \vDash_{\Box CF}\phi \iff \Gamma\vdash_{\mathsf{S4}_0}\phi\\ \displaystyle\mathrm{vii.} &\displaystyle\Gamma \vDash_{DCF}\phi \iff \Gamma\vdash_{\mathsf{S5}}\phi. &\\ \end{array} \end{align*} $$

1.3 Normal logics

Excepting $\mathsf{KT45_0} = \mathsf{S5}$, these examples suggest that the present framework relates largely to non-normal logics. But normal logics crop up naturally in this setting, which we see next. For $\mathfrak{I}$ a set of interpretations, let $V\mathfrak{I}$ be the set of formulas $\phi$ that are satisfied by all interpretations in $\mathfrak{I}$—that are valid over the set. Now one can show that where $\mathfrak{I}$ is definable, $V\mathfrak{I}$ is a logic,Footnote 3 and in particular it is the logic sound and complete for the semantic consequence relation $\vDash_{\mathfrak{I}}$ (e.g., $V\text{FAC} = \mathsf{T}_0$). Second, where $\mathfrak{I}$ is a set of interpretations and $\lambda$ a logic, let $\mathfrak{I}(\lambda)$ be the set of all interpretations $I\in \mathfrak{I}$ that have $\lambda$ for their logic of evaluation. Combining these, $V\mathfrak{I}(\lambda)$ is the set of formulas valid over $\mathfrak{I}(\lambda)$. Besides the use of ‘$V\mathfrak{I}$’ to denote the set of formulas valid over $\mathfrak{I}$, for definable $\mathfrak{I}$, we can also think of ‘$V\mathfrak{I}(\cdot)$’ as denoting a map from logics to logics.

Maps of the form $V\mathfrak{I}(\cdot)$ have an important feature. To state it, we recall some terminology from lattice theory. The set of logics in our language forms a complete lattice under inclusion, in the sense that every set of logics $\{\lambda_i\}$ has an infimum $\bigcap_i\lambda_i$ and a supremum, denoted $\sum_i\lambda_i$. A sequence of logics is a map $\lambda(\cdot)$ which takes natural numbers to logics; we write $\lambda(n)$ as $\lambda_n$. A sequence $(\lambda_n)$ is called directed if and only if for all natural numbers $n, m$ there is some larger k such that $\lambda_n, \lambda_m\subseteq \lambda_k$. It is straightforward to show that $\sum_n\lambda_n =\bigcup_n\lambda_n$ whenever $(\lambda_n)$ is directed. We call a function f from logics to logics continuous (meaning Scott-continuous) iff for every directed sequence of logics $(\lambda_n)$, we have $f(\bigcup_n\lambda_n) = \bigcup_n f(\lambda_n)$.

With this in place, we next show that for any definable $\mathfrak {I}$ , the map $V\mathfrak {I}(\cdot )$ is continuous. This is significant because it means that certain familiar fixed point theorems can be applied to these maps, which in our case will help display the promised connection to normal modal logics. Our result relies on two lemmas, the first about directed sequences of logics and the second about the general behavior of maps $V\mathfrak {I}(\cdot )$ for definable $\mathfrak {I}$ .

Proposition 4. Where $\lambda $ is a logic and $(\lambda _n)$ is directed, $\bigcup _n (\lambda + \Box \lambda _n) = \lambda + \Box \bigcup _n\lambda _n$

Proof. Each $(\lambda +\Box \lambda_k)$ is clearly included in $\lambda +\Box \bigcup_n\lambda_n$, so $\bigcup_n (\lambda + \Box \lambda_n) \subseteq \lambda + \Box \bigcup_n\lambda_n$. Conversely, one can see that since $(\lambda_n)$ is directed, so is the sequence $(\lambda +\Box \lambda_n)$. Hence $\bigcup_n(\lambda +\Box \lambda_n)$ is a logic and includes $\lambda$ and $\Box \bigcup_n\lambda_n$, so $\lambda + \Box \bigcup_n\lambda_n\subseteq \bigcup_n(\lambda +\Box \lambda_n)$.

Proposition 5. For any $\lambda $ and definable $\mathfrak {I}$ , $V\mathfrak {I}(\lambda ) = V\mathfrak {I}+\Box \lambda $ .

Proof Sketch.

($\supseteq$) It is clear that $V\mathfrak{I}\cup \Box \lambda \subseteq V\mathfrak{I}(\lambda)$, and straightforward to show that $V\mathfrak{I}(\lambda)$ is closed under uniform substitution, and hence a logic.Footnote 4 ($\subseteq$) If $\phi \not\in (V\mathfrak{I}+\Box \lambda)$ then by Lindenbaum’s lemma there is a maximal $(V\mathfrak{I}+\Box \lambda)$-consistent set $\Gamma$ containing $\neg \phi$. Since $V\mathfrak{I}+\Box \lambda$ contains K and includes $\Box \lambda$, it is normal for $\lambda$, and so by Proposition 1, we have $I[\Gamma, \lambda]\vDash \Gamma$. So $I[\Gamma, \lambda]\vDash V\mathfrak{I}$, so $I[\Gamma, \lambda]\in \mathfrak{I}(\lambda)$, and $I[\Gamma, \lambda]\not\vDash \phi$, so $\phi \not\in V\mathfrak{I}(\lambda)$.

Theorem 2. If $\mathfrak {I}$ is definable, then $V\mathfrak {I}(\cdot )$ is a continuous map on the lattice of logics.

Proof. For $(\lambda _n)$ directed, $V\mathfrak {I}(\bigcup _n\lambda _n) = V\mathfrak {I} + \Box \bigcup _n\lambda _n = \bigcup _n(V\mathfrak {I}+\Box \lambda _n) = \bigcup _n V\mathfrak {I}(\lambda _n)$ .

The main consequence of this comes by way of Kleene’s Fixed Point Theorem, which says that a continuous map f on a complete lattice with least element $0$ has a least fixed point, $\text {lfp}(f)$ , which is the supremum of the sequence $(f^n(0))$ . Hence for definable $\mathfrak {I}$ , $V\mathfrak {I}(\cdot )$ has a least fixed point $\text {lfp}(V\mathfrak {I}(\cdot ))$ , that we shall abbreviate to $\text {lfp}(\mathfrak {I})$ , which is the supremum of the sequence $(V\mathfrak {I}^n(PC))$ . In general, this fixed point turns out to be a normal logic.

Proposition 6. If $\mathfrak {I}$ is definable then $lfp({\mathfrak {I}})$ is the smallest normal extension of $V\mathfrak {I}$ .

Proof. From Proposition 5, we have $V\mathfrak{I}^{1}(PC) = V\mathfrak{I}+\Box PC$ and, for all $n\geq 1$, $V\mathfrak{I}^{n+1}(PC) = V\mathfrak{I}^n(PC) + \Box V\mathfrak{I}^n(PC)$ (using that $V\mathfrak{I}\subseteq V\mathfrak{I}^n(PC)$ once $n\geq 1$). This makes it clear that $\bigcup_n V\mathfrak{I}^{n}(PC)$ is the least extension of $V\mathfrak{I}$ closed under necessitation. Because $V\mathfrak{I}$ contains K, it is also the least normal extension.

Applying this result to the examples given in Corollary 1 yields some illustrative cases. So, we have the following corollary.

Corollary 2. (By Proposition 6, Corollary 1)

$$ \begin{align*} \begin{array}{llc llc llc} \displaystyle\text{lfp}({\text{INT}})&=&\displaystyle{\mathsf{K}} &\displaystyle\text{lfp}({\text{CON}})&=&\displaystyle{\mathsf{D}}&\displaystyle\text{lfp}({\text{FAC}})&= &\displaystyle{\mathsf{T}}\\ \displaystyle\text{lfp}({\Box\text{CF}})&=&\displaystyle{\mathsf{S4}}&\displaystyle\text{lfp}({\text{DCF}}) &=&\displaystyle{\mathsf{S5}}.&\\ \end{array} \end{align*} $$
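To see the iteration behind these least fixed points concretely in the first case: since $V\text{INT} = \mathsf{K}_0$ (Corollary 1) and $\Box PC\subseteq\mathsf{K}_0$, Proposition 5 gives

$$ \begin{align*} V\text{INT}(PC) = \mathsf{K}_0+\Box PC = \mathsf{K}_0, \quad V\text{INT}^2(PC) = \mathsf{K}_0+\Box\mathsf{K}_0, \quad V\text{INT}^3(PC) = V\text{INT}^2(PC)+\Box V\text{INT}^2(PC), \quad \ldots \end{align*} $$

and the union of this increasing chain is the least normal extension of $\mathsf{K}_0$, namely $\mathsf{K}$.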

Further application of such results is shown in the proof of Theorem 4 below.

1.4 Meta-logics and McKinsey

We conclude our tour of this semantics with a difficult problem that arises in this setting, a partial solution, and a connection to McKinsey’s semantics. Consider the class NA of interpretations in which there are no assumptions. For $I = (M, \varnothing, \lambda)\in \text{NA}(\lambda)$, we have $I\vDash \Box \phi$ iff $\vdash_\lambda \phi$; hence in such interpretations, we read the box as expressing membership in $\lambda$. $V\text{NA}(\lambda)$ is not typically a logic, but we can consider the largest logic included in $V\text{NA}(\lambda)$, which is the set of formulas $\phi$ such that every substitution instance of $\phi$ is in $V\text{NA}(\lambda)$. Let us call this logic $\mathfrak{L}(\lambda)$, or the meta-logic of $\lambda$. Another way to define $\mathfrak{L}$ that makes less reference to our machinery: a valuation is a map v from formulas to truth-values such that $v(\bot) = 0$ and $v(\phi \to \psi) = 1$ iff $v(\phi) = 0$ or $v(\psi) = 1$. For $\lambda$ a logic, a $\lambda$-valuation is a valuation v such that for all $\phi$, $v(\Box \phi) = 1$ iff $\vdash_\lambda \phi$. For any $\lambda$, $\mathfrak{L}(\lambda)$ is the set of formulas $\phi$ such that every substitution instance of $\phi$ is assigned the value $1$ by every $\lambda$-valuation. $\mathfrak{L}(\lambda)$ is the logic of the phrase ‘it is a theorem of $\lambda$ that…’.

A fixed point of $\mathfrak{L}$ is a logic which is its own meta-logic. Remarkably, there are such logics—which we show in a moment—and so naturally we want to provide a characterisation of them. So call a valuation v McKinsey iff for all $\phi$, $v(\Box \phi) = 1$ just in case $v(\phi') = 1$ for all substitution instances $\phi'$ of $\phi$. These valuations populate the modal semantics of McKinsey [5]. Our main result is as follows.

Theorem 3. $\lambda = \mathfrak {L}(\lambda )$ if and only if all $\lambda $ -valuations are McKinsey.

This provides a characterisation of the fixed points of $\mathfrak{L}$ by way of a connection to McKinsey’s semantics, and it yields some nice corollaries. For instance, McKinsey [5] establishes that the set of formulas true for all McKinsey valuations is a normal extension of $\mathsf{S4M}$, so we have the following corollary.

Corollary 3. If $\lambda =\mathfrak {L}(\lambda )$ then $\lambda $ is a normal extension of $\mathsf {S4M}$ .

This can be established independently of Theorem 3 by reasoning directly about $\mathfrak {L}$ . For example, it is straightforward that every fixed point of $\mathfrak {L}$ is normal, since every $\lambda $ -valuation for every logic $\lambda $ validates every substitution instance of the K axiom, and every $\lambda = \mathfrak {L}(\lambda )$ is closed under necessitation since $\phi \in \lambda $ always implies $\Box \phi \in \mathfrak {L}(\lambda )$ . This independence is important to note as the proof of Theorem 3 makes use of the following lemma.

Proposition 7. Suppose that v and $v'$ are $\lambda$-valuations, where $\lambda$ is normal. Whenever $v'(\phi)=0$, there exists a substitution instance $\phi'$ of $\phi$ such that $v(\phi')=0$.

Proof. We begin by defining s as the desired substitution, mapping $\phi $ to $\phi '$ . Let $s(p) = p$ if $v(p) = v'(p)$ and $s(p) = \neg p$ otherwise. Define $s(\phi \to \psi ) = (s\phi \to s\psi )$ and $s(\Box \phi ) = \Box (s\phi )$ and $s\bot = \bot $ . For all $\phi $ , $s\phi $ is a substitution instance of $\phi $ . By induction on formula complexity, we show that $v(s\phi ) =1\iff v'(\phi )=1$ for all $\phi $ , from which the proposition follows. We give the step for $\Box $ ; the other steps are routine. For any $\phi $ , since $s\phi $ is a substitution instance of $\phi $ , $\vdash _\lambda \phi $ implies $\vdash _\lambda s\phi $ . Since $s(s\phi )$ differs from $\phi $ only in that some sentence letters are replaced by their double negations, if $\lambda $ is normal then $\vdash _\lambda s\phi $ implies $\vdash _\lambda s(s\phi )$ and so $\vdash _\lambda \phi $ . So $v(s(\Box \phi ))=1$ iff $\vdash _\lambda s\phi $ iff $\vdash _\lambda \phi $ iff $v'(\Box \phi )=1$ .
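Here is a small computational sketch of the substitution s just defined, reusing the tuple encoding of formulas from the earlier sketch; the sets `v_atoms` and `vp_atoms` stand in for the sentence letters made true by v and $v'$, and all names are illustrative.

```python
# A sketch of the substitution s from this proof, reusing the tuple encoding
# of formulas from the earlier sketch.  `v_atoms` and `vp_atoms` stand in for
# the sets of sentence letters made true by v and v'; names are illustrative.
from typing import Set, Tuple, Union

Formula = Union[str, Tuple]

def neg(phi: Formula) -> Formula:
    return ('->', phi, 'bot')          # since ¬φ abbreviates (φ → ⊥)

def s(phi: Formula, v_atoms: Set[str], vp_atoms: Set[str]) -> Formula:
    """Flip exactly the letters on which v and v' disagree, and commute with
    the connectives, so that s(phi) is a substitution instance of phi."""
    if phi == 'bot':
        return 'bot'
    if isinstance(phi, str):           # sentence letter
        return phi if (phi in v_atoms) == (phi in vp_atoms) else neg(phi)
    if phi[0] == '->':
        return ('->', s(phi[1], v_atoms, vp_atoms), s(phi[2], v_atoms, vp_atoms))
    if phi[0] == 'box':
        return ('box', s(phi[1], v_atoms, vp_atoms))
    raise ValueError(f"unrecognised formula: {phi!r}")

# Example: v makes only p true, v' makes p and q true; s flips q alone.
print(s(('->', 'p', ('box', 'q')), {'p'}, {'p', 'q'}))
# ('->', 'p', ('box', ('->', 'q', 'bot')))
```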

Proof of Theorem 3.

($\Leftarrow$) Suppose that every $\lambda$-valuation is McKinsey. If $\phi \in \lambda$, then for all $\lambda$-valuations v and for all substitution instances $\phi'$ of $\phi$, $v(\phi') = 1$, meaning $\phi \in \mathfrak{L}(\lambda)$. So $\lambda \subseteq \mathfrak{L}(\lambda)$. Conversely, if $\phi \in \mathfrak{L}(\lambda)$ then for all substitution instances $\phi'$ of $\phi$ and all $\lambda$-valuations v we have $v(\phi') = 1$; since each such v is McKinsey, we have $v(\Box \phi) = 1$ and hence $\phi \in \lambda$. Hence $\lambda \supseteq \mathfrak{L}(\lambda)$. So $\lambda = \mathfrak{L}(\lambda)$. ($\Rightarrow$) Suppose that $\lambda = \mathfrak{L}(\lambda)$, and let v be a $\lambda$-valuation. For any $\phi$, if $v(\Box \phi) =1$ then $\vdash_\lambda \phi$ and hence, since $\lambda = \mathfrak{L}(\lambda)$, $\vdash_{\mathfrak{L}(\lambda)}\phi$, meaning $\vdash_{\mathfrak{L}(\lambda)}\phi'$ and hence $v(\phi')=1$ for all substitution instances $\phi'$ of $\phi$. Conversely, if $v(\Box \phi) = 0$ then $\not\vdash_{\mathfrak{L}(\lambda)}\phi$, meaning that there is some $\lambda$-valuation $v'$ and substitution instance $\phi'$ of $\phi$ such that $v'(\phi')=0$. Since every fixed point of $\mathfrak{L}$ is normal, as mentioned, by Proposition 7 there is a substitution instance $\phi''$ of $\phi'$ (hence also of $\phi$) such that $v(\phi'')=0$. So v is McKinsey.

Bacon and Fine [2, proposition 11] have shown that all $\mathsf{Med}$-valuations are McKinsey, where $\mathsf{Med}$ is the modal logic of Medvedev frames in Kripke semantics.Footnote 5 It follows that at least one fixed point of $\mathfrak{L}$ exists. Yet whether there are others seems currently out of reach. If there are others, then by our findings this would establish a 50-year-old conjecture due to Friedman [4, problems 41 and 42]. To see this, we note that Bacon and Fine have proposed a Uniqueness Conjecture, which says that for any set of sentence letters M, there is a unique McKinsey valuation v such that $v(p)=1$ iff $p \in M$ for all letters p. This amounts to the claim that McKinsey’s semantic clause succeeds in providing determinate truth-conditions for propositional languages; it also amounts to the negation of Friedman’s conjecture. If there is a fixed point of $\mathfrak{L}$ besides $\mathsf{Med}$, call it $\lambda$, then the Uniqueness Conjecture is false, since for any set of sentence letters M there will be at least two McKinsey valuations, one a $\mathsf{Med}$-valuation and the other a $\lambda$-valuation, each assigning the value $1$ to exactly the sentence letters in M (and these are distinct, since $\mathsf{Med}$ and $\lambda$ disagree on some formula $\phi$, so the two valuations disagree on $\Box\phi$). This connection suggests that the question of further fixed points of $\mathfrak{L}$ will be hard to resolve.

2 An application.

2.1 Humeanism.

We consider a directly philosophical application of the semantics, which is the investigation of forms of modal reduction that equate necessity with derivability from some assumptions in some system of logic. The best known theory of this form is Sider’s modal Humeanism [6, chap. 12]. For the Humean, a statement $\phi$ is necessary if and only if it is deducible from a conventionally decided upon set of truths called ‘modal axioms’ by a conventionally decided upon set of truth-preserving inference rules called ‘modal rules’. In our terminology, where for ‘modal axioms’ we say ‘assumptions’ and for the set of ‘modal rules’ we say ‘the logic of evaluation’, Sider’s view is that $\Box \phi$ is true if and only if $\Gamma \vdash_\lambda \phi$, where $\Gamma$ is a set of truths and $\lambda$ a logic whose consequence relation is truth-preserving, both decided upon by convention. This appears to allow the Humean extreme liberty with respect to modal logic—something Sider emphasizes as an improvement on earlier forms of conventionalism. One asks: which modal principles are true, and which modal inference rules truth-preserving? The Humean says to take your pick. For any consistent modal logic $\sigma$ that extends $\mathsf{T}_0$, and where M is the set of all atomic sentences that are true on some canonical reading, it is easy to pick some $\Gamma, \lambda \subseteq \sigma$ such that $(M, \Gamma, \lambda)\vDash \sigma$, so that, assuming Humeanism, all of the theorems of $\sigma$ will be made true.Footnote 6

But things are not as they seem. This line of reasoning stands or falls upon the assumption that any $\Gamma$ and $\lambda$ can be selected to serve as our set of assumptions and logic of evaluation—provided that they are then satisfied by the resulting interpretation $(M, \Gamma, \lambda)$. But as I read the account, Sider seems committed to something stronger: that modal axioms must be true prior to their being recruited into service as modal axioms, and that modal inference rules must likewise be truth-preserving prior to their being incorporated into our set of modal rules. To require less than this would be to allow for a kind of bootstrapping, or more specifically, a kind of truth by convention in which the statements in $\Gamma$ become true, and the rules encoded in $\lambda$ become truth-preserving, because we assume them. Sider rejects the idea that a statement can be made true by fiat in this way as the key error of older forms of conventionalism, to which Humeanism is an intended successor [6, p. 268]. If this is all correct, the Humean should require that $\Gamma$ and $\lambda$ only contain principles that are true prior to our particular selection of modal axioms and rules. But this is quite a severe restriction. For example, it seems to imply that $\Gamma$ cannot contain any modal statements (in which a $\Box$ occurs), since modal claims are, according to the Humean, not true prior to a conventional choice of axioms and rules. It also seems to imply that $\lambda$ cannot contain anything beyond the principles of non-modal logic, for the same reason.Footnote 7 Therefore, in the context of propositional logic, only interpretations of the form $I = (M, \Gamma, \lambda)$, where $\Gamma$ contains no modal formulas and $\lambda =PC$, seem to be admissible, in the sense that we may acceptably select $\Gamma$ for our set of modal axioms and $\lambda$ for the logic consisting of our accepted modal rules.

This severe restriction means that many modal principles cannot be made true by any acceptable conventional choice of assumptions and logic of evaluation. We use $\Box p\to \Box \Box p$ as an example. Suppose $\Gamma$ contains no modal formulas and that $(M, \Gamma, PC)\vDash \Box \phi \to \Box \Box \phi$ for all $\phi$. Since $\Gamma \vdash_{PC}\top$, the interpretation satisfies $\Box \top$, and so, by the instance $\Box \top \to \Box \Box \top$, it satisfies $\Box \Box \top$; that is, $\Gamma \vdash_{PC}\Box \top$. But this is so only if $\Gamma$ contains some non-modal contradiction, meaning that $(M, \Gamma, PC)\not\vDash \Gamma$, so that $\Gamma$ does not meet the requirement that assumptions be true. So no interpretation which meets the Humean’s restrictions validates all substitution instances of $\Box p\to \Box \Box p$—or even $\Box \Box \top$ for that matter, which is already a theorem of $\mathsf{K}$. This is indicative; things are no better for $\neg \Box p\to \Box \neg \Box p$, for example. The upshot is that Humeanism properly understood is no less restrictive with respect to modal logic than older conventionalisms.

2.2 Procedural conventionalism

I want to suggest a fix for the Humean. I will call the view which results from adopting the fix ‘proceduralism’. Proceduralism departs from Humeanism concerning the way in which the set of assumptions and the logic of evaluation arise. For the Humean, convention works by fixing the set of assumptions and logic of evaluation directly. For the proceduralist, convention works by selecting what we will call an admissible procedure, which then goes on to construct the logic of evaluation and set of assumptions in stages. Each stage of the construction conforms to the stricture against bootstrapping raised earlier, adding to the assumption set and the logic of evaluation only statements that were already true in earlier stages of the construction. But because the process takes place in multiple stages, the end result of such a process can be a logic of evaluation and assumption set which each contain properly modal principles. In this way, the proceduralist hopes to regain some freedom with respect to modal logic while submitting to the same philosophical strictures which precluded such freedom for the Humean.

Some formal definitions will clarify the idea. A procedure shall be any map from interpretations to interpretations $(M, \Gamma, \lambda)\mapsto (M, \Gamma', \lambda')$ which modifies the set of assumptions $\Gamma$ and the logic of evaluation $\lambda$ while leaving the set of sentence letters M alone. We can apply any procedure $\mathfrak{a}$ to any interpretation I as many times as we like, generating a sequence of interpretations $I, \mathfrak{a}I, \mathfrak{aa}I,$ and so on. We are interested in such sequences where the original interpretation I is of the form $(M, \varnothing, \text{PC})$, having no assumptions and nothing in the logic of evaluation save what is given in non-modal logic; interpretations of this form will be called ground interpretations. So we confine attention to procedures $\mathfrak{a}$ such that for any ground interpretation I, the sequence $I, \mathfrak{a}I, \mathfrak{aa}I,\ldots$ evolves in an acceptable way, converging towards and terminating at a natural limit.

More precisely, an admissible extension of an interpretation $(M, \Gamma, \lambda)$ shall be an interpretation $(M, \Gamma', \lambda')$ with the same set of sentence letters such that $\Gamma \subseteq \Gamma'$ and $\lambda \subseteq \lambda'$ and $(M, \Gamma, \lambda)\vDash \Gamma'\cup \lambda'$, so that the new assumptions and logical principles in the extended interpretation are already true in the original interpretation. An admissible procedure must first meet the requirement that (i) for all ground interpretations I and all natural numbers n, $\mathfrak{a}^{n+1}I$ is an admissible extension of $\mathfrak{a}^n I$. This is what is meant by the sequence evolving in an acceptable way. This first requirement ensures that each $\mathfrak{a}^n I$ is factive (that is, it lies in FAC), since only factive interpretations have admissible extensions. It also ensures that the sequence converges to a natural limit: where I is a ground interpretation and each $\mathfrak{a}^nI = (M, \Gamma_n, \lambda_n)$, let $I^{\mathfrak{a}}:= (M, \bigcup_n\Gamma_n, \bigcup_n\lambda_n)$. We want this limit to be factive as well, and we want it to be an end-point for the procedure, meaning that applying the same procedure further has no effect. So we require that (ii) the limit $I^{\mathfrak{a}}$ is factive and a fixed point of $\mathfrak{a}$. Summing up, a procedure is called admissible if and only if it satisfies (i) and (ii). A set of formulas $\Gamma$ is admissible iff there exists an admissible procedure $\mathfrak{a}$ such that $I^{\mathfrak{a}}\vDash \Gamma$ for all ground interpretations I.
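A degenerate example may help fix ideas: the identity procedure, which returns every interpretation unchanged, is easily seen to satisfy (i) and (ii), since every ground interpretation satisfies all of $PC$ and is trivially its own admissible extension, its own limit, and a fixed point. Since every interpretation satisfies $\mathsf{K}_0$ (Proposition 3), this already shows that $\mathsf{K}_0$ is an admissible set of formulas. The interest, of course, lies in procedures that genuinely extend the ground interpretation, as in Theorem 4 below.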

Here is how these definitions fit into the picture. As said before, the proceduralist thinks that the way in which we arrive at our set of assumptions and our logic of evaluation (what Sider calls modal axioms and modal rules) is by adopting a conventional procedure. When we adopt a procedure, the assumption set and the logic of evaluation are generated in stages by repeated application of the procedure to the underlying ground interpretation. So if M is the set of all sentence letters that are really true under some canonical reading, then our ground interpretation is $I = (M, \varnothing, \text{PC})$, and upon conventionally adopting the admissible procedure $\mathfrak{a}$, the interpretation $I^{\mathfrak{a}} = (M, \Gamma, \lambda)$ now models our conversational context, in the sense that $\Gamma$ is our set of assumptions and $\lambda$ is our logic of evaluation. In selecting a procedure, we are restricted precisely to admissible procedures. This is because it is precisely such procedures which will build upon our ground interpretation in an acceptable way, via a sequence of admissible extensions and the taking of unions: at each stage of the process of extension, adding only assumptions and logical principles that are true in prior stages. It is in this way that the proceduralist submits to the same stricture against bootstrapping imposed on the Humean. At the same time, as said above, they wish to recover some of the logical freedom which we earlier denied to the Humean on the basis of such strictures. The concept of admissibility sharpens this wish, because an admissible set of formulas is a set of formulas whose truth can be ensured by adopting some procedure. The proceduralist’s task now becomes that of establishing admissibility for larger and larger sets of formulas. Here is a particularly simple example.

Theorem 4. $\mathsf {K}$ and $\mathsf {S4}_0$ are admissible.Footnote 8

Proof. We show that $\mathfrak {a}(M, \Gamma , \lambda ) := (M, \Gamma , V\text {INT}(\lambda ))$ is admissible. Note that for any M we have $\mathfrak {a}^n(M, \varnothing , \text {PC})= (M, \varnothing , V\text {INT}^n(\text {PC}))$ . Each $\mathfrak {a}^n(M, \varnothing , \text {PC})\vDash V\text {INT}(V\text {INT}^n(\text {PC})) = V\text {INT}^{n+1}(\text {PC})$ by definition of $V\text {INT}(\cdot )$ , so that each $\mathfrak {a}^{n+1}I$ is an admissible extension of $\mathfrak {a}^{n}I$ . So $\mathfrak {a}$ fulfils requirement (i) of being admissible. Moreover $(M, \varnothing , \text {PC})^{\mathfrak {a}}$ is defined for all M, and by Theorem 2, Kleene’s Fixed Point Theorem, and Corollary 2, we have

$$ \begin{align*} (M, \varnothing, \text{PC})^{\mathfrak{a}} =(M, \varnothing, \bigcup_{n} V\text{INT}^n(\text{PC})) = (M, \varnothing, \text{lfp}(\text{INT})) = (M, \varnothing, \mathsf{K}). \end{align*} $$

Because $\mathsf {K} = \text {lfp}(\text {INT})$ is a fixed point of $V\text {INT}(\cdot )$ , it follows that $I^{\mathfrak {a}}$ is factive and a fixed point of $\mathfrak {a}$ for any ground interpretation I. So $\mathfrak {a}$ fulfils requirement (ii) for admissibility. Now for any ground interpretation I, since $I^{\mathfrak {a}} = (M, \varnothing , \mathsf {K})$ is factive, we have $I^{\mathfrak {a}}\vDash \mathsf {T}_0 = V\text {FAC}$ . But additionally, for any formula $\phi ,$ we have $I^{\mathfrak {a}}\vDash \Box \phi \to \Box \Box \phi $ because, by normality, $\vdash _{\mathsf {K}}\phi $ implies $\vdash _{\mathsf {K}}\Box \phi $ . So $I^{\mathfrak {a}}\vDash \mathsf {S4}_0$ . Because $I^{\mathfrak {a}}$ is factive and has $\mathsf {K}$ for its logic of evaluation, we have $I^{\mathfrak {a}}\vDash \mathsf {K}$ . Since I was arbitrary, this holds for all ground interpretations. So both $\mathsf {S4}_0$ and $\mathsf {K}$ are admissible.

This suffices to show that the proceduralist makes a definite improvement over the Humean, for whom $\Box p\to \Box \Box p$ and general principles for normal logics like $\Box \ldots \Box \top$ were out of reach. It additionally demonstrates the relevance of the results established in §1 to the philosophical task of working out proceduralism. This makes for a natural place to stop, as a full treatment of admissibility is beyond our present scope.

Footnotes

1 This paper has benefited from conversation with A. C. Paseau, Timothy Williamson, Kit Fine, and Volker Halbach, from comments from faculty at the University of Reading in 2020, from students at Oxford who attended my lectures on ‘Topics in Modal Logic’ in 2022, and from many anonymous referees.

2 For example, $\mathsf {KT4}_0 = \mathsf {S4}_0$ is non-normal; see footnote attached to Theorem 4 for a proof of this.

3 This point requires explanation. Every $V\mathfrak{I}$ is closed under modus ponens and contains all tautologies. But if $\mathfrak{I}$ is definable, then $V\mathfrak{I}$ is also closed under uniform substitution. The proof idea is as follows. If s is a map from sentence letters to formulas and $\phi^s$ is the substitution instance of $\phi$ which results by applying s to it in the usual way, then for any interpretation $I= (M, \Gamma, \lambda)$ we define $I^s = (\{p:I\vDash p^s\}, \{\phi : I\vDash (\Box \phi)^s\}, \lambda)$. By induction on formula complexity, one can show that we always have $I^s\vDash \phi \iff I\vDash \phi^s$. Now if $\mathfrak{I}$ is definable, then one can verify that it is closed under this construction, meaning that $I\in \mathfrak{I}$ implies $I^s\in \mathfrak{I}$ for all I and s. Therefore if $\phi^s\not\in V\mathfrak{I}$ then $\phi \not\in V\mathfrak{I}$, since if there is some I in $\mathfrak{I}$ with $I\not\vDash \phi^s$ then $I^s\not\vDash \phi$. Since s was arbitrary, it follows after contraposition that $\phi \in V\mathfrak{I}$ implies $\phi'\in V\mathfrak{I}$ for all substitution instances $\phi'$ of $\phi$.

4 For example, use the construction of the previous footnote and show that $\mathfrak {I}(\lambda )$ is closed under the map $I\mapsto I^s$ for any s.

5 A Kripke frame $(W, R)$ is Medvedev iff it is isomorphic to a frame $(\{Y\subseteq X:Y\neq \varnothing\}, \supseteq)$ for some finite X [1, p. 11].

6 I emphasize that $\Gamma , \lambda \subseteq \sigma $ since then $(M, \Gamma , \lambda )\vDash \Gamma $ and $(M, \Gamma , \lambda )\vDash \lambda $ , meaning that $\Gamma $ is all true and $\lambda $ is truth-preserving, which the Humean requires. This comes by Proposition 1. Take some maximal $\sigma $ -consistent set $\Theta $ with $M = \text {At}\cap \Theta $ and then pick $\Gamma = \Box ^{-}\Theta $ and $\lambda = PC$ . Then since $\sigma \supseteq \mathsf {T_0}$ , we have $I\vDash \sigma \cup \Gamma \cup \lambda $ .

7 Perhaps some principles not in $PC$, like $\Box p\to p$ and $\Box (p\to q)\to (\Box p\to \Box q)$, can be in the logic of evaluation, since they are true no matter which modal axioms and rules one picks, and so are true ‘independently’ of what we choose, even if not prior to our making some choice. At best, such an argument will provide a logic of evaluation contained in $\mathsf{T}_0$, since such an argument does not even work for $\Box (\Box p\to p)$. This does not help much, since the argument in the main text about the unavailability of $\Box p\to \Box \Box p$ goes through for all $\lambda \subseteq \mathsf{T_0}$. And while $\Box \Box \top$ is made true if the logic of evaluation is $\mathsf{T}_0$, longer iterations like $\Box \Box \Box \top$ still are not.

8 Note that this particular construction cannot be used to show that $\mathsf {S4}$ is admissible, since for any ground interpretation $I = (M, \varnothing , \text {PC}),$ we will have $I^{\mathfrak {a}}= (M, \varnothing , \mathsf {K})\not \vDash \Box (\Box p\to \Box \Box p)$ given that $\not \vdash _{\mathsf {K}}\Box p\to \Box \Box p$ . This point entails that the logic $\mathsf {S4}_0$ is non-normal and distinct from $\mathsf {S4}$ , substantiating a claim made earlier.

References

[1] Bacon, A., & Fine, K. (2024). The logic of logical necessity. In Weiss, Y., and Birman, R., editors, Saul Kripke on Modal Logic. Cham: Springer Verlag, pp. 43–92.
[2] Bacon, A., & Fine, K. Logical necessity. In Ferrari, F., Brendel, E., Carrara, M., Hjortland, O., Sagi, G., Sher, G., and Steinberger, F., editors, Oxford Handbook of Philosophy of Logic. Oxford: Oxford University Press. Forthcoming.
[3] Carnap, R. (1977). Meaning and Necessity. Chicago: University of Chicago Press.
[4] Friedman, H. (1975). One hundred and two problems in mathematical logic. The Journal of Symbolic Logic, 40(2), 113–129.
[5] McKinsey, J. C. C. (1945). Syntactical construction of systems of modal logic. The Journal of Symbolic Logic, 10(3), 83–94.
[6] Sider, T. (2011). Writing the Book of the World. Oxford: Oxford University Press.
[7] Williamson, T. (2013). Modal Logic as Metaphysics. Oxford: Oxford University Press.