Quilty-Dunn et al. defend the claim that, roughly, mentation generally traffics in language-of-thought (LoT)-type structures. Classically, evidence for this claim has been drawn from considerations of human language understanding. If that were the best (or indeed only) evidence for the claim, though, it would be relatively easy for an LoT skeptic to dismiss it. Quilty-Dunn et al. therefore focus on evidence from nonlinguistic domains, where signs of language-like representations are all the more striking. Although the ease of connecting the dots between natural language and LoT-type structures makes the study of linguistic cognition a poor choice for arguing that minds generally implement some sort of LoT-type representations, we submit that it also makes human language a particularly fruitful domain for formulating and defending specific hypotheses about candidate LoT representations. And indeed, Quilty-Dunn et al. say little about this beyond the useful identification of general properties of LoTs; nor do they say specifically how "logicality" should be understood in the context of potentially significantly modular minds. In this note, we introduce into this discussion a recent strand of theoretical and experimental work on (broadly) logical expressions in linguistic semantics that directly bears on these matters.
First, let us suppose along with Quilty-Dunn et al. that believing something consists, at least in part, of entertaining a particular LoT representation (Field, Reference Field1978). Then, a natural way to explain the finding that, for example, "[t]elling participants… they will see a pairing of a group with pictures of pleasant (or unpleasant) things is much more effective at fixing implicit attitudes than repeatedly pairing the group and the pleasant/unpleasant things" (target article, sect. 6.2, para. 5), is to suppose that the outputs of (specifically linguistic) language comprehension simply are LoT expressions (e.g., Hunter & Wellwood, Reference Hunter and Wellwood2023; Wellwood, Reference Wellwood2020). By contrast, the pathway from associationistic learning episodes to such representations is rather less direct. Indeed, this view of adult linguistic understanding pairs well with views about language acquisition that presuppose human beings come equipped with a shared conceptual system; on such views, learning the meanings of words involves solving a mapping problem rather than a concept acquisition problem (Gleitman, Reference Gleitman1990).
Second, if beliefs in the relevant sense are syntactically structured objects, then the relevant theories should say something about their structure. We understand this to be a question in the spirit of the distinction between the computational- versus algorithmic-level (Marr, Reference Marr1982). Quilty-Dunn et al. cite work implementing probabilistic languages-of-thought (PLoTs) which is taken to “provide[] defeasible evidence that some approximation of the computational elements of the model are realized in human cognitive architecture,” but, as they note, “further evidence is needed to establish” its “algorithmic-level reality” (target article, sect. 3, note 4).
Recent research in “psychosemantics” explicitly aims to approach these lower levels of abstraction – at least down to “Level 1.5” (Peacocke, Reference Peacocke1986). It begins by noting the possibility of specifying boundlessly many truth-conditionally equivalent, but intensionally distinct (Church, Reference Church1941) characterizations of the meaning of sentences like Most of the Cs are Bs and There are more As than Bs with the putatively logical items most or more, and asks which best correspond to how people represent them. Specifically, given a formal characterization φ of the meaning of sentence S, we can probe whether people are biased to make use of the information explicitly called for in φ. For example, the two expressions “|C & B| > |C \ B|” and “|C & B| > |C| − |C & B|” equally well capture the truth-conditions of Most of the Cs are Bs, but the former calls for the cardinality of C \ B (i.e., the Cs which are not Bs) while the latter calls for the cardinality of C. Carefully manipulating which perceptual-cognitive information is readily available in a task where subjects must evaluate the truth of S, one can check the resulting fit between what people draw on when evaluating S and what φ calls for.
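The contrast between the two characterizations can be made concrete with a minimal sketch (our illustration, not drawn from the cited studies). Both procedures below verify the same truth-conditions for Most of the Cs are Bs, but each consumes different information about the scene: the first needs the total cardinality of C, the second needs the cardinality of the Cs that are not Bs.

```python
# Two truth-conditionally equivalent but intensionally distinct verification
# procedures for "Most of the Cs are Bs". Function names are our own labels.

def most_subtraction(card_C_and_B: int, card_C: int) -> bool:
    """|C & B| > |C| - |C & B|: calls for the cardinality of all the Cs."""
    return card_C_and_B > card_C - card_C_and_B

def most_difference(card_C_and_B: int, card_C_minus_B: int) -> bool:
    """|C & B| > |C \\ B|: calls for the cardinality of the non-B Cs."""
    return card_C_and_B > card_C_minus_B

# A toy scene: 7 Cs, 5 of which are Bs.
C_and_B, C_total = 5, 7
C_minus_B = C_total - C_and_B  # = 2

# Same verdict, different inputs consumed along the way.
assert most_subtraction(C_and_B, C_total) == most_difference(C_and_B, C_minus_B)
```

The experimental logic is then to make one of these inputs (e.g., |C|, or |C \ B|) easy or hard to extract from the display and see which manipulation affects performance.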
The most striking findings of this research are that English speakers are indeed remarkably uniform in the information they recruit to evaluate a sentence like Most As are Bs, and they are strikingly persistent in those preferences even when other strategies are readily available (Hackl, Reference Hackl2009), more accurate (Pietroski, Lidz, Hunter, & Halberda, Reference Pietroski, Lidz, Hunter and Halberda2009), or more extensible (Lidz, Pietroski, Halberda, & Hunter, Reference Lidz, Pietroski, Halberda and Hunter2011). Furthermore, these findings replicate for speakers evaluating translational equivalents in Polish (Tomaszewicz, Reference Tomaszewicz, Reich, Horch and Pauly2011), and they are systematically different from those recruited for sentences with more under extensionally equivalent circumstances (e.g., Knowlton et al., Reference Knowlton, Hunter, Odic, Wellwood, Halberda, Pietroski and Lidz2021).
Finally, Quilty-Dunn et al. cite evidence for logical reasoning as evidence for LoT, but we are not told what it means for a mind to “use logical operators” (target article, sect. 5.2, para. 1) which are “generalizab[le] across content domains” (target article, sect. 1, para. 7). For the latter, it must be that LoT (sub-)expressions can interface with distinct and potentially modular systems, but this raises the question of what ties together the various domain-specific interpretations of a single LoT expression. To concretize the problem, suppose that the end result of understanding There are more apples than bananas is the LoT expression “A > B” and that of There is more sand than mud is “S > M.” Specifically, what's required is a specification of (i) how the single symbol “>” (Wellwood, Reference Wellwood2019) can be interpreted by cognitive systems operating both over domains representing pluralities of objects and those representing stuff (Odic, Pietroski, Hunter, Lidz, & Halberda, Reference Odic, Pietroski, Hunter, Lidz and Halberda2012; see Rips & Hespos, Reference Rips and Hespos2015), and (ii) the logical relationships that an LoT expression enters into by virtue of being built around this symbol in a certain way (e.g., via proof-theoretic inference rules). We have suggested (Hunter & Wellwood, Reference Hunter and Wellwood2023) that for a nonlinguistic system to interpret a symbol such as “>” in the appropriate way is exactly for this interpretation to abide by some algebraic laws that are, in effect, specified by some inference rules governing expressions built out of “>.” For example, a rule licensing a logical inference from “A > B” and “B > C” to “A > C” essentially specifies that, in each content domain where “>” has an interpretation, that interpretation must correspond to a transitive binary relation.
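The proposal can be sketched as follows (a hypothetical illustration of the constraint, not an implementation of any cited model): a single symbol ">" receives a different interpretation in a module representing discrete pluralities and in one representing continuous stuff, and what unifies these interpretations is that each must satisfy the law corresponding to the inference rule, here transitivity.

```python
# One LoT symbol ">", two domain-specific interpretations. The shared inference
# rule ("A > B" and "B > C" license "A > C") constrains each interpretation to
# be a transitive binary relation. Names here are ours, for illustration.

# Object module: compares discrete cardinalities of pluralities.
def more_objects(a: int, b: int) -> bool:
    return a > b

# Substance module: compares continuous magnitudes (e.g., volumes of stuff).
def more_stuff(a: float, b: float) -> bool:
    return a > b

def respects_transitivity(interp, triples) -> bool:
    """True iff interp licenses A > C whenever it licenses A > B and B > C."""
    return all(interp(a, c)
               for a, b, c in triples
               if interp(a, b) and interp(b, c))

# The same symbol, interpreted over apples/bananas and over sand/mud.
assert more_objects(5, 3)     # "There are more apples than bananas"
assert more_stuff(2.4, 1.1)   # "There is more sand than mud"
assert respects_transitivity(more_objects, [(5, 3, 1)])
assert respects_transitivity(more_stuff, [(2.4, 1.1, 0.2)])
```

On this picture, the inference rule does not pick out any one module's comparison procedure; it states a condition that every domain-specific interpretation of ">" must meet.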
Financial support
This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.
Competing interest
None.