
Language-of-thought hypothesis: Wrong, but sometimes useful?

Published online by Cambridge University Press:  28 September 2023

Adina L. Roskies
Affiliation:
Department of Philosophy, Dartmouth College, Hanover, NH, USA adina.roskies@dartmouth.edu, https://faculty-directory.dartmouth.edu/adina-l-roskies
Colin Allen
Affiliation:
Department of History & Philosophy of Science, University of Pittsburgh, Pittsburgh, PA, USA colin.allen@pitt.edu https://www.hps.pitt.edu/people/colin-allen

Abstract

Quilty-Dunn et al. maintain that the language-of-thought hypothesis (LoTH) is the best game in town. We counter that LoTH is merely one source of models – always wrong, sometimes useful. Their reasons for liking LoTH are compatible with the view that LoTH provides a sometimes pragmatically useful level of abstraction over processes and mechanisms that fail to fully live up to LoT requirements.

Open Peer Commentary

Copyright © The Author(s), 2023. Published by Cambridge University Press

Quilty-Dunn et al. ask the question “What is the format of thought?” (target article, sect. 1, para. 1). Their answer is that it is language-like. Despite the title of their paper, and their contention in the introduction that “LoTH is the best game in town” (italics in original; target article, sect. 1, para. 6), Quilty-Dunn et al. don't explicitly argue for this superlative conclusion except with respect to one study of human concept learning, discussed in section 3. Rather, they show how to interpret various studies of human and animal cognition as involving representational formats that have (most of) the sixfold homeostatic property cluster they use to characterize language-of-thought (LoT). We concede that models using LoT-formatted representations may be very useful, not just for human concept learning but also for the capacities of nonhuman animals and some artificial systems. But the claim that the language-of-thought hypothesis (LoTH) is the best game in town requires a more substantial defense than Quilty-Dunn et al. provide, including more specificity about the hypothesis itself and an answer to the question “Best for what?”

We should all grant that some human cognitive processes operate over and with natural language representations. After all, we internalize the culturally acquired structures of language, logic, and mathematics, and regiment our thinking accordingly. We can explicitly think thoughts in linguistic form, so some processes must represent and operate on language-like structures, including discrete constituents, logical operations, predicate–argument structure, and so on. Certainly, models that involve language-like representations can reproduce psychological data; this is especially so when the tasks are posed in language and engage rules of inference that exploit predicate–argument structure in their execution. Because LoTH is itself modeled on characteristics of natural language, these processes will a fortiori be well modeled by LoTH. However, Quilty-Dunn et al. make a more sweeping claim: They argue that LoT is pervasively useful and that it is the best way to model thought even in nonlinguistic creatures. Here, too, we note that if you describe your tasks and results in language-like ways, you may naturally turn to models that incorporate language-like elements. We contend that the utility of LoTH for abstractly characterized cognitive processes does not entail that LoTH is the best way to model these processes in more detail, or even at the same level of abstraction, unless what counts as an LoT is so attenuated as to be nearly universally applicable.
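For readers who want the contrast made concrete, the following toy sketch (ours, not Quilty-Dunn et al.'s, and purely illustrative) shows what the core LoT properties look like when taken literally: discrete, word-like constituents, predicate–argument structure, logical operations defined over that structure, and role-filler independence through recombination of the same symbols.

```python
# A minimal, illustrative sketch (ours, not from the target article) of
# LoT-style representation: discrete constituents, predicate-argument
# structure, logical operations over that structure, and role-filler
# independence through recombination of the same symbols.
from collections import namedtuple

# A "thought" is a predicate applied to discrete, reusable arguments.
Thought = namedtuple("Thought", ["predicate", "args"])

t1 = Thought("CHASES", ("DOG", "CAT"))
t2 = Thought("CHASES", ("CAT", "DOG"))   # same fillers, different roles

def negate(thought):
    """A logical operation defined over the representation's structure."""
    return Thought("NOT", (thought,))

print(t1)
print(negate(t2))
```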

To the extent that their examples suggest that LoTH is the best game in town, it is because their homeostatic property cluster view of LoT allows them to embrace most representational formats, including many never before conceived of as examples of the LoT. For instance, they argue that object files possess at least five of the six features of LoT representations, and they use this to support their claim that the LoT format is a better fit to object files than Carey's iconic account. But if this is so, then the same seems to hold for the feature maps in Treisman's (1998) attentional model, and LoTH seems to have been weakened almost to the point of vacuity. In Treisman's model, a spotlight of attention potentiates processing in cognate spatial regions of disparate feature maps. This model is not particularly language-like: It is not generative, not serial, not recursive, and does not have discrete word-like tokens. However, the feature binding can be seen as implementing predicate–argument structure (“these features are collocated in this spatial location”), and the maps represent discrete features with abstract content, such as color and shape, that can be reused in different bindings, providing a kind of role-filler independence that allows the system to bind the same features to different objects and different features to the same object. Yet if map-like models are also LoT models, then the hypothesis fails to differentiate between cognitively very different kinds of solutions – it is too general to do much work. And if map-like models are not instances of the LoT, then Quilty-Dunn et al. have failed to argue that their LoT-based account of object files is superior to one based on icons or maps.
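To make the worry concrete, consider a toy rendering of Treisman-style feature maps with an attentional spotlight (our sketch, with invented names and data structures, not a model from the target article or from Treisman): nothing language-like is computed, yet the resulting binding can be glossed in predicate–argument terms.

```python
# Toy sketch of Treisman-style feature maps with an attentional spotlight
# (our illustration; names and structures are invented for exposition).
# Each map registers one feature dimension at spatial locations; attending
# to a location reads values off all maps, producing a binding that can be
# glossed as "these features are collocated here" without any word-like
# tokens, recursion, or serial composition doing the work.

color_map = {(0, 0): "red",   (1, 1): "green"}
shape_map = {(0, 0): "round", (1, 1): "square"}

def attend(location, maps):
    """Bind whatever features the maps register at the attended location."""
    return {name: m.get(location) for name, m in maps.items()}

print(attend((0, 0), {"color": color_map, "shape": shape_map}))
# {'color': 'red', 'shape': 'round'}

# The same feature value can be bound at different locations, and different
# values at the same location -- the gloss that invites talk of role-filler
# independence even though the underlying format is map-like.
```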

The above discussion highlights the need for clearer statements about whether and when the six properties characteristic of LoT are instantiated. As the example of object files shows, their approach is to give an LoT-inspired interpretive dance after the theory is already on offer. However, terms such as “discrete” and “inferentially promiscuous” (target article, sect. 3, para. 6) are too vague to guide cognitive scientists in constructing theories that make testable predictions about behavior and neural mechanisms in humans and other animals. This does not entail that they are useless for picking out a class of models, but class membership then is largely in the eye of the beholder. Moreover, Quilty-Dunn et al. miss two important points from the philosophy of science of modeling. One is that the formal expression of a model has features that should not be attributed to the target system (Andrews, 2021; Beer, in prep.). The other is that models serve particular scientists' purposes more or less well. We think it salutary to recall Box's maxim that “all models are wrong, but some are useful” (Box, 1976). All models are approximations to the phenomena they model, and different models highlight different aspects of those phenomena. For example, Bayesian models involve computations over probabilities assigned to propositions or sets of propositions, in terms of priors on hypotheses and statements of evidence. Those models explicitly calculate posteriors using discrete elements and logical operators. Quilty-Dunn et al. cite these as another instance of the successes of LoTH. However, few, if any, think that our brains explicitly represent Bayes' theorem or calculate the posteriors by crunching numbers (except when forced to in math classes, etc.). Rather, Bayes' theorem is thought to be an analytically optimal representation that is only approximated by brain mechanisms that work according to different principles. If one is interested in how the brain computes Bayes-like posteriors, LoT may not be helpful at all. Thus, although LoT may be a useful model for some phenomena for some applications, it will not be for others. It is certainly not the only game in town, nor is it always the best.
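For concreteness, the explicit, discrete posterior computation at issue looks like the following toy example (ours, offered only to fix ideas about the analysis-level description, not as a claim about neural mechanisms):

```python
# Toy example of an explicit Bayesian update over a discrete hypothesis set
# (our illustration of the analysis-level computation; not a claim about how
# brains implement it).

priors = {"h1": 0.5, "h2": 0.3, "h3": 0.2}        # P(h)
likelihoods = {"h1": 0.9, "h2": 0.4, "h3": 0.1}   # P(evidence | h)

# Bayes' theorem: P(h | e) = P(e | h) P(h) / sum over h' of P(e | h') P(h')
evidence = sum(likelihoods[h] * priors[h] for h in priors)
posteriors = {h: likelihoods[h] * priors[h] / evidence for h in priors}

print(posteriors)
```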

Financial support

This research received no specific grant from any funding agency, commercial, or not-for-profit sectors.

Competing interest

None.

References

Andrews, M. (2021). The math is not the territory: Navigating the free energy principle. Biology & Philosophy, 36(3), 30. doi:10.1007/s10539-021-09807-0
Beer, R. B. (in prep.). On the proper treatment of dynamics in cognitive science.
Box, G. E. P. (1976). Science and statistics. Journal of the American Statistical Association, 71(356), 791–799. doi:10.1080/01621459.1976.10480949
Treisman, A. (1998). Feature binding, attention and object perception. Philosophical Transactions of the Royal Society of London B: Biological Sciences, 353(1373), 1295–1306. doi:10.1098/rstb.1998.0284