
The role of language in transcending core knowledge

Published online by Cambridge University Press:  27 June 2024

Susan Carey*
Affiliation: Department of Psychology, Harvard University, Cambridge, MA, USA. scarey@wjh.harvard.edu

*Corresponding author.

Abstract

What Babies Know (WBK) argues that core knowledge has a unique place in cognitive architecture, between fully perceptual and fully conceptual systems of representation. Here I argue that WBK's core knowledge is on the perception side of the perception/cognition divide. I discuss some implications of this conclusion for the roles language learning might play in transcending core knowledge.

Type: Open Peer Commentary

Copyright © The Author(s), 2024. Published by Cambridge University Press

Spelke's monumental What Babies Know (Spelke, 2022) provides evidence for six domains of core knowledge: innate systems of abstract, structured representations with long evolutionary histories. In The Origin of Concepts (TOOC: Carey, 2009), I drew on Spelke's work to provide evidence for core knowledge, and I welcome and endorse Spelke's extended and more nuanced characterization in WBK.

Spelke and I each distinguish core knowledge from both perception and cognition. Doing so has two parts: first, showing how perception and cognition differ; and second, characterizing how core knowledge has some properties of each and lacks other properties of each. Here I argue for a different way of characterizing the difference between perception and cognition from that in TOOC and WBK, one that places core knowledge firmly on the perception side of the border.

Distinguishing perception from cognition

WBK and TOOC argue that perception is closer to sensory information than cognition is. Spelke adds that perception is modality specific. On the cognition side, both point to the abstractness of the representations in core knowledge, and Spelke adds that core knowledge often takes perceptual representations as crucial input. The problem is that these properties do not really distinguish perception from cognition. Perceptual systems often involve a series of computations, each with perceptual representations as input (consider the representations of faces in successive face patches; Hesse & Tsao, 2020). Clear cases of perception (e.g., of the immediate spatial layout, in the service of reaching for objects) are amodal, computed by integrating information from audition, vision, and proprioception. And perceptual processes arguably create abstract representations (e.g., cause as in Michotte causality, cardinal value, happy expression), which show hallmark signatures of perception such as retinotopically specific adaptation effects (see Block, 2022). Such considerations have led many to deny a joint in nature between perception and cognition, with the consequence that core knowledge could not lie between the two.

To the contrary, I believe there is a deep divide between perception and cognition. Its essence is a fundamental difference in the kinds of representations within each: differences in format, and in the kinds of structured representations and computations supported. With respect to format, Block (2022) argues that perceptual representations have iconic format, whereas conceptual representations have discursive format. Beck (2019) argues that perception is analog. Of course, analog and iconic representations can play a role in cognition, but cognition also involves logically structured representations, formulated over structured propositions, and these are what distinguish cognition from perception.

In TOOC I speculated that the format of representation in core knowledge is iconic, whereas Spelke explicitly remains agnostic about format. If my speculation is correct, then Spelke's and my claims that core knowledge occupies its own place in cognitive architecture, as a natural kind between cognition and perception, are false: if the structured representations within core knowledge are exclusively iconic or analog in format, then core knowledge is a perceptual system of representation.

Structured representations: Iconic/analog versus propositional formats

Iconic representations are map-like: they represent relations among the parts of what is being represented with symbols that themselves instantiate those relations. The representations resemble, or mirror (in Block's terminology), what is being represented. Pictures, including line drawings, are iconic symbols, as are maps and movies.

Analog representations involve symbols whose values are a linear or logarithmic function of a dimension that varies continuously from small to big. There are analog representations of many dimensions: of object size and weight, of the number and density of ensembles, of the temporal duration and loudness of sounds, of the intensity of pain, and so on. While not map-like, analog symbols are in a clear sense themselves iconic: a set of four is contained in a set of five, and an analog symbol of five contains an analog symbol of four. Spelke establishes that there is an analog system of core number knowledge, the Analog Number System (ANS).
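To make the notion of an analog symbol concrete, consider a minimal computational sketch (my own illustration, not a model from WBK or TOOC; the Gaussian noise model and the Weber fraction of 0.2 are assumptions chosen for exposition). An analog number symbol can be rendered as a noisy scalar whose variability scales with magnitude, so that discriminability depends on the ratio of the two numbers rather than their difference:

```python
# A toy model of an analog magnitude symbol: a noisy scalar with scalar
# variability (noise proportional to magnitude, per Weber's law). The
# Weber fraction below is an assumed, illustrative value.
import random

WEBER_FRACTION = 0.2

def analog_symbol(n: int) -> float:
    """Encode a set of size n as a noisy analog magnitude."""
    return random.gauss(n, WEBER_FRACTION * n)

def p_correct(small: int, large: int, trials: int = 10_000) -> float:
    """Proportion of trials on which the larger set yields the larger symbol."""
    hits = sum(analog_symbol(large) > analog_symbol(small) for _ in range(trials))
    return hits / trials

# Discrimination tracks ratio, not absolute difference: 8 vs 16 is about as
# easy as 16 vs 32, while 15 vs 16 is near chance -- an analog signature.
print(p_correct(8, 16), p_correct(16, 32), p_correct(15, 16))
```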

In contrast, propositional formats contain atomic symbols that do not mirror their referents and that participate in very different types of structured representation from maps and analog magnitude systems. The units of a proposition are typed syntactically, and these types determine the rules of combination into complex phrases and whole propositions. Language clearly has a propositional format: words are its units, composed into phrases, which in turn compose sentences, which are truth-evaluable propositions.

Importantly, iconic/analog structures have no explicit symbols for logical relations. When I look at my table and there is no dog on it, my representation simply fails to include a dog; perception has no symbol not with which to represent the dog's absence, nor symbols for many other abstract relations such as all or same.
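The contrast can be made concrete with a toy sketch (my own hypothetical rendering, not anything from the article; all names here are invented for illustration). An iconic model is exhausted by symbols for the individuals present, whereas a discursive format includes typed logical symbols, such as negation, that combine by syntactic rules:

```python
from dataclasses import dataclass

# An iconic model of my table: nothing but symbols for the individuals
# present. There is no symbol here for NOT, ALL, or SAME; the absence of
# a dog is simply the absence of a dog symbol.
iconic_scene = ["table", "mug", "book"]

# A discursive format, by contrast, has typed atomic symbols -- including
# logical ones -- that combine by syntactic rules into propositions.
@dataclass
class Pred:
    name: str   # predicate, e.g. "on_table"
    arg: str    # the individual it applies to

@dataclass
class Not:
    body: object  # an explicit symbol for negation, absent from iconic formats

# "There is no dog on the table" requires the explicit negation symbol.
no_dog = Not(Pred("on_table", "dog"))
print(no_dog)
```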

Iconic representation of structure in parallel individuation working memory models

In addition to the ANS, Parallel Individuation (PI) is a second system of representation with numerical content. Spelke denies that PI is a system of number representation because, unlike the ANS, it contains no summary symbols for number. Indeed, the explicit atomic symbols in PI are representations of individuals (objects, events, sounds). But PI models iconically represent relations among those individuals. Numerical content is carried by the computations that guarantee that the model of three objects contains a symbol for each object, and by the computations these models support, such as planning the right number of reaches to retrieve those objects when hidden (Feigenson & Carey, 2003). Figure 1 illustrates working memory representations of sets of one, two, and three boxes. Note that there are no symbols for number, nor for the relation different, which holds between any two objects. Number and many relations, such as relative size and left-to-right spatial order, are represented iconically, being instantiated in the model.
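Here is a minimal sketch of how such a model might work, on my reading of PI (the capacity value of 3 and the one-to-one matching procedure are illustrative assumptions, not claims from the commentary). There is no summary symbol for number anywhere; numerical content falls out of the one-symbol-per-individual structure:

```python
# One object file per individual, with a hard capacity limit (an assumed
# value of 3 for illustration). Number is nowhere summarized; it is
# implicit in the one-symbol-per-object structure of the model.
PI_CAPACITY = 3

def pi_model(objects):
    """Open one object file per individual; no model exists beyond capacity."""
    if len(objects) > PI_CAPACITY:
        return None
    return [{"file": obj} for obj in objects]

def reaches_remaining(model, retrieved_so_far):
    """Plan reaches by one-to-one matching of object files to retrievals;
    no symbol for 'three' is ever consulted."""
    return max(len(model) - retrieved_so_far, 0)

hidden = pi_model(["ball", "duck", "car"])
print(reaches_remaining(hidden, retrieved_so_far=1))  # 2 more reaches
print(pi_model(["a", "b", "c", "d", "e"]))            # None: set too large
```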

Figure 1. PI models.

Hochmann (2022) shows that in the context of Marcus's rule-learning paradigm, the relations same and different are instantiated in PI models. Habituated to stimuli like "pi la pi," "du no du," and "re bau re," infants (and adults) dishabituate to novel sequences "ta ku ku" or "ta ta ku" while generalizing habituation to "ta ku ta." Hochmann shows that infants' working memory representation of a sequence mirrors the relations among the syllables: ABA. Note that there are no symbols for same in this representation; the content same is implicit in the match computations that underlie all acts of recognition. There are also no explicit symbols for number: no explicit representations that there are three syllables, that two of them are the same, or that not all of the syllables are the same. All those relations are implicit in the ABA schema, though.

Hochmann (2022) provides two pieces of evidence for PI representations of syllable sequences in this paradigm. First, as with all PI models, there is a strict upper bound on the number of syllables that can be represented (four, under these conditions). More relevant here, infants spectacularly fail to learn the generalization "all syllables are the same." They can represent what is common to "pi pi" and "la la," treating "du mo" as an oddball; ditto for "pi pi pi" and "la la la," treating "du du mo" as an outlier; and similarly for strings of four identical syllables. But familiarized to sequences of five or six identical syllables, they do not treat "du du du du mo" or "du du du du du mo" as outliers. They fail to represent the generalization that all the syllables in each sequence are the same: all and same have not been isolated from within the iconic schema A A A.
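A speculative sketch of this point as I read it (my own rendering, not Hochmann's model; the encoding scheme is assumed for illustration, while the limit of four reflects the bound the article reports): the sequence is held as an identity schema over PI tokens, so when the sequence exceeds capacity there is no schema at all, and "all same" cannot even be entertained:

```python
# Encode a syllable sequence as an identity schema (e.g. ABA), as a PI model
# might: one token per syllable, with identity relations instantiated by
# which tokens match. The limit of 4 follows the bound reported above.
PI_LIMIT = 4

def iconic_schema(syllables):
    """Relabel a sequence by its identity structure: ['pi','la','pi'] -> 'ABA'.
    Returns None when the sequence exceeds the working-memory limit."""
    if len(syllables) > PI_LIMIT:
        return None  # the sequence cannot even be encoded
    labels, schema = {}, []
    for s in syllables:
        labels.setdefault(s, chr(ord("A") + len(labels)))
        schema.append(labels[s])
    return "".join(schema)

print(iconic_schema(["pi", "la", "pi"]))  # 'ABA'
print(iconic_schema(["du"] * 4))          # 'AAAA': "all same" only implicitly
print(iconic_schema(["du"] * 5))          # None: the generalization fails
```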

Convergent evidence for the failure to combine implicit representations of same with the logical connectives not and all derives from array match-to-sample experiments with large arrays (Fig. 2). Animals and young children can learn to match A (all-same) samples with novel A arrays and B (all-different) samples with novel B arrays. But they do so on the basis of analog magnitude representations of entropy, the degree of variability among the stimuli (Hochmann et al., 2017; Wasserman, Castro, & Fagot, 2017). In contrast, human beings over age 4 induce one of two propositional combinatorial rules: "Match all same with all same; not all same with not all same," or alternatively, "Match all same with all same and all different with all different."
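The entropy account can be sketched directly (again my illustration, under the assumption that Shannon entropy stands in for the analog variability signal the cited work describes). Note that the matcher succeeds on these arrays without ever tokening same, not, or all:

```python
# Match arrays by an analog magnitude -- the Shannon entropy of the items --
# with no symbols for SAME, NOT, or ALL anywhere in the computation.
from collections import Counter
from math import log2

def entropy(array):
    """0 bits for an all-same array; maximal for an all-different array."""
    n = len(array)
    return -sum((c / n) * log2(c / n) for c in Counter(array).values())

def match_by_entropy(sample, choices):
    """Pick the choice array whose variability best matches the sample's."""
    return min(choices, key=lambda a: abs(entropy(a) - entropy(sample)))

all_same = ["x"] * 16
all_diff = list("abcdefghijklmnop")
# An all-same sample is matched to the all-same array purely by entropy.
print(match_by_entropy(["y"] * 16, [all_same, all_diff]) is all_same)  # True
```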

Figure 2. Array match to sample stimuli.

How do propositionally structured representations arise?

Chapter 10 of WBK provides a spellbinding review of language learning in infancy. Although she doesn't call it so, language learning is supported by a seventh system of core knowledge, differing from the others she reviews in only two ways. First, it is unique to humans. Second, it is the only system whose proprietary internal structure is propositional. The capacity for propositionally structured representations is innate, encapsulated within core knowledge of language.

Three roles for language in transcending core knowledge: The relations same and different

Spelke and I agree that language has many roles to play in the creation of propositional representations with content that is embedded in other systems of core knowledge. She suggests that learning words for that content is important. I agree; learning the word "same" requires abstracting the relation same from the iconic schema A A A or ▬ ▬, and may well be the impetus for doing so. But the above analysis predicts that words for aspects of the iconic models of core knowledge that are explicitly represented (e.g., the individuals in PI models, including their properties) will be easier to learn than words for relations whose content is implicit, merely instantiated, in those models. This prediction is borne out. Words for objects, for events like jump, and for sounds are fast mapped. In contrast, Hochmann, Zhu, Zhu, and Carey (under review-b) show that the words "same" and "different" are hard to learn. These words are common in speech to even 2-year-olds, and many 2-year-olds produce them. But virtually no 2-year-olds, fewer than half of 3-year-olds, and only 80% of 4-year-olds have mastered their relational meanings.

Two kinds of bootstrapping, Gleitman's (1990) syntactic bootstrapping and Quinian bootstrapping (Carey, 2009), play crucial roles in this process. Both leverage propositional structures within language. In syntactic bootstrapping, the child uses the natural language syntactic and semantic representations they have already learned to constrain the meanings of newly encountered words. In Quinian bootstrapping, the child uses their knowledge of the propositional structure of language to create placeholder structures involving whole suites of interrelated concepts, none of which is yet mapped to any currently available meaning. TOOC shows how both of these processes are involved in creating the first explicit, non-analog representations of number, and reviews evidence for Quinian bootstrapping from the history of science and from science education.
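To fix ideas, here is a highly simplified sketch of the count-list case from TOOC (my own caricature of Quinian bootstrapping, not code from any of the cited work; the mapping steps compress years of development into two assignments). The placeholder structure is an ordered list of numeral words with, at first, no numerical meanings; once a few meanings are attached, the list's serial order supports the successor induction:

```python
# A placeholder structure: an ordered list of numeral words whose numerical
# meanings are initially absent and get filled in one by one.
count_list = ["one", "two", "three", "four"]
meanings = {}                 # pure placeholders at first
meanings["one"] = 1           # mapped via a PI model of a single individual
meanings["two"] = 2           # ...months later

def successor_induction(word):
    """Once a few mappings exist, the list's serial order supports the
    induction: each word denotes one more than its predecessor."""
    i = count_list.index(word)
    if i == 0 or count_list[i - 1] not in meanings:
        return None
    return meanings[count_list[i - 1]] + 1

print(successor_induction("three"))  # 3, inferred from order + meaning of "two"
```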

Hochmann et al. (under review-b) speculate about how syntactic bootstrapping may play a role in learning the meanings of the words "same" and "different." One consequence of this analysis is that learning the meanings of these words should immediately support conceptual combination involving all of the propositional syntactic/semantic structure of the language the child currently commands. Hochmann, Zhu, and Carey (under review-a) review evidence that knowing the words "same" and "different" influences animals' and young children's performance on non-linguistic tasks drawing on these relations. Suggestively, that review also finds that the proportion of 3- and 4-year-olds who can follow the rule "match the card where all of the pictures are the same to the card where all the pictures are the same; match the card where all of the pictures are not the same to the card where all the pictures are not the same" matches the proportion of children of these ages who know the words "same" and "different" in Hochmann et al. (under review-b). In comprehending this complex language, both children and adults make a small number of scope errors, and to the same degree. Learning a discursive, linguistically expressed symbol such as "same" provides immediate access to the logical functions expressed in language.

Conclusions

Here I have argued that perception, and the core knowledge we share with other animals, is not propositionally structured. This claim is currently hotly debated, as attested by the BBS treatment of Quilty-Dunn, Porot, and Mandelbaum's (2023) target article on the current status of the language-of-thought hypothesis. WBK gives the field further reason to try to bring data to bear on the fundamental issues concerning formats of representation.

Financial support

The research reported here was funded by a James S. McDonnell Foundation Network grant, "The Ontogenetic Origins of Abstract Combinatorial Thought" (PI: Susan Carey; 16 co-PIs).

References

Beck, J. (2019). Perception is analog: The argument from Weber's law. The Journal of Philosophy, 116, 319–349.
Block, N. (2022). The border between seeing and thinking. Oxford University Press.
Carey, S. (2009). The origin of concepts. Oxford University Press.
Feigenson, L., & Carey, S. (2003). Tracking individuals via object files: Evidence from infants' manual search. Developmental Science, 6, 568–584.
Gleitman, L. (1990). The structural sources of verb meanings. Language Acquisition, 1, 3–55.
Hesse, J. K., & Tsao, D. Y. (2020). The macaque face patch system: A turtle's underbelly for the brain. Nature Reviews Neuroscience, 21(12), 695–716.
Hochmann, J.-R. (2022). Representations of abstract relations in infancy. Open Mind, 6, 291–310.
Hochmann, J.-R., Tuerk, A. S., Sanborn, S., Zhu, R., Long, R., Dempster, M., & Carey, S. (2017). Children's representations of abstract relations in relational/array match to sample tasks. Cognitive Psychology, 99, 17–43.
Hochmann, J.-R., Zhu, R., & Carey, S. (under review-a). Conceptual combination of mental representations of same, not, and all requires learning the words "same" and "different."
Hochmann, J.-R., Zhu, R., Zhu, L., & Carey, S. (under review-b). Learning the words "same" and "different" in the preschool years.
Quilty-Dunn, J., Porot, N., & Mandelbaum, E. (2023). The best game in town: The reemergence of the language-of-thought hypothesis across the cognitive sciences. Behavioral and Brain Sciences, 46, 1–55.
Spelke, E. S. (2022). What babies know: Core knowledge and composition. Oxford University Press.
Wasserman, E. A., Castro, L., & Fagot, J. (2017). Relational thinking in animals and humans: From percepts to concepts. In J. Call, G. M. Burghardt, I. M. Pepperberg, C. T. Snowdon, & T. Zentall (Eds.), APA handbook of comparative psychology: Perception, learning & cognition (pp. 359–384). American Psychological Association.