
Representational structures only make their mark over time: A case from memory

Published online by Cambridge University Press:  28 September 2023

Sara Aronowitz*
Affiliation:
Department of Philosophy, University of Toronto, Toronto, ON, Canada s.aronowitz@utoronto.ca http://www-personal.umich.edu/~skaron/

Abstract

Memory structures range across the dimensions that distinguish language-like thought. Recent work suggests that agent- or situation-specific information is embedded in these structures. Understanding why this is so, and pulling these structures apart, requires observing what happens under major changes. The evidence presented for the language-of-thought (LoT) does not look broadly enough across time to capture the function of representational structure.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

The authors posit a single type of format, the language-of-thought (LoT), across both cognitive contexts and kinds of processing. Alongside asking whether this is true, we might also ask when it is true. In this commentary, I'll look at these two questions through the lens of long-term memory.

O'Keefe and Nadel (1978) made the case for a map-like format in the hippocampus, a structure distinct from the LoT. Although maps have symbolic elements, such as the schematic, graph-like character of a subway map, they are also continuous and holistic in a way that LoT structures are not. I am unsure whether the authors intended long-term memory to be part of the swath of cognition covered by their theory, and rehearsing the map-versus-language debate is not my aim here. I want instead to take the hippocampal map hypothesis as a starting point for discussing two newer developments.

In their original book, O'Keefe and Nadel described a cognitive map that represents objective space abstracted from the creature's particular interactions with the environment. They note, for instance: “unlike the extra-hippocampal systems the locale system is relatively free from the effects of time and repetition” (p. 95). This idea of a map indifferent to the agent's path through it, and not greatly changed by repetition, shares some key similarities with the LoT format: Both can be updated quickly and categorically with new information, such as when I reorder a logical inference because of the introduction of a new proposition or remap a path based on the observation of a new obstacle.
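To make this kind of categorical updating concrete, here is a minimal sketch (my own toy illustration, not drawn from O'Keefe and Nadel or the target article): a single observed obstacle deletes one connection in a map-like structure, and the very next plan changes, with no repetition required.

```python
# Minimal sketch: a map-like structure updated categorically. Observing one
# new obstacle removes an edge, and the next plan changes immediately,
# with no need for repeated experience.
from collections import deque

def shortest_path(graph, start, goal):
    """Breadth-first search over an adjacency-dict 'map'."""
    frontier, parents = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = [node]
            while parents[node] is not None:
                node = parents[node]
                path.append(node)
            return path[::-1]
        for nbr in graph[node]:
            if nbr not in parents:
                parents[nbr] = node
                frontier.append(nbr)
    return None

# A toy environment: rooms A-E with passable connections.
world = {"A": {"B", "C"}, "B": {"A", "D"}, "C": {"A", "D"},
         "D": {"B", "C", "E"}, "E": {"D"}}
print(shortest_path(world, "A", "E"))   # e.g. ['A', 'B', 'D', 'E']

# One observation: the B-D passage is blocked. A single categorical edit...
world["B"].discard("D"); world["D"].discard("B")
print(shortest_path(world, "A", "E"))   # ...reroutes at once: ['A', 'C', 'D', 'E']
```

As we will see next, it is exactly this one-shot sensitivity to transition changes that more path-dependent representations give up.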

In the case of memory, although the core idea of the hippocampus as a map remains popular, the notion of a fully objective map has come under strain. Work in reinforcement learning has proposed that some of the computational work done by the hippocampus may employ a representation that is more path-dependent than a fully objective map: The successor representation (Dayan, 1993; Gershman, 2018; Momennejad et al., 2017). This representation stores, for each state, the expected (temporally discounted) occupancies of the states that will be visited from it. Planning and learning with this representation are efficient. Standard successor learners can quickly figure out how to change course when they learn that rewards have been redistributed (such as finding out a bet has doubled), but not when they learn that transitions between states have changed (such as finding out that pushing a button now leads to a different floor). These learners make a compromise: easier computations at the cost of giving up some of the properties that LoT and map-like structures share.
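A schematic example may help fix ideas. The sketch below is my own, using the standard closed form M = (I − γT)⁻¹ for the successor matrix rather than any specific model from the cited papers; it shows why a cached successor representation handles reward revaluation immediately but gives stale values after a transition change.

```python
# Minimal sketch (my own illustration, not the cited models): a successor
# representation (SR) agent caches expected discounted state occupancies,
# M = (I - gamma * T)^(-1), and values states as V = M @ R.
import numpy as np

gamma = 0.9
# Three states in a chain: 0 -> 1 -> 2, with state 2 absorbing and rewarded.
T = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 1.]])
R = np.array([0., 0., 1.])

M_cached = np.linalg.inv(np.eye(3) - gamma * T)   # learned under old transitions

# Reward revaluation (the bet doubles): the cached SR still gives correct values.
R_new = np.array([0., 0., 2.])
print(M_cached @ R_new)          # correct new values, no relearning needed

# Transition revaluation (the button now leads elsewhere): state 1 jumps back to 0.
T_new = T.copy(); T_new[1] = [1., 0., 0.]
M_true = np.linalg.inv(np.eye(3) - gamma * T_new)
print(M_cached @ R)              # stale: still reflects the old 1 -> 2 transition
print(M_true @ R)                # what the values should be after relearning M
```

In this toy case, recomputing values after the reward change costs only a matrix-vector product, whereas tracking the transition change requires relearning M itself.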

Ormond and O'Keefe (2022) observe a different feature of hippocampal maps that seems to violate the idea that these maps are fully objective. During goal-directed behavior, they find that some place cells in rats represent the environment in a way that is distorted by the goal: firing increases when the animal is facing the goal and decreases incrementally in directions farther from it. In a set of cells that had been thought to represent invariant features of the environment, these firing patterns reflect goal location and shift when the goal changes.

In these two cases, researchers have theorized that initially indifferent and objective maps might encode some agent-centered information: Visited states and transitions in the successor representation, and goal location in the place cell subtypes. This leads to two connections with the LoT hypothesis.

First, although the authors acknowledge that the LoT is not the only format used in thought, they do not go far enough in thinking about how formats can be shifted and combined. In the case of map-like formats in memory, we have seen that the hypothesis that goal and path information seeps into the map does not imply a switch between discrete ways of representing, but rather a sort of hybrid or intermediate representation. The successor representation sits functionally in between a map-like model and a model-free algorithm, and Momennejad et al. go further, proposing a hybrid successor/model-based learner. The goal-sensitive place cells are still place cells and are presumably used in navigation alongside the previously observed map-like structure.
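One way to picture such a hybrid, offered here only as an illustrative sketch and not as a reconstruction of the specific learner Momennejad et al. propose, is a weighted mixture of cheap successor-based values and values rolled out from an explicit transition model:

```python
# Illustrative sketch only: one simple way to blend SR-based and model-based
# value estimates with a mixing weight w.
import numpy as np

def sr_values(M, R):
    """Values from a cached successor matrix: cheap, but stale if T has changed."""
    return M @ R

def model_based_values(T, R, gamma=0.9, horizon=50):
    """Values from iterated rollout of the transition model: flexible, costlier."""
    V = np.zeros(len(R))
    for _ in range(horizon):
        V = R + gamma * T @ V
    return V

def hybrid_values(M, T, R, w=0.5, gamma=0.9):
    """Weighted mixture; w trades off the two systems' estimates."""
    return w * model_based_values(T, R, gamma) + (1 - w) * sr_values(M, R)
```

The mixing weight w is a free parameter of the illustration; the point is simply that the resulting system is neither purely map-like nor purely cached, but something in between.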

If the six features noted by the authors are characteristic of one end of a spectrum of cognitive processing, noting that they obtain in various cases is not enough to show that thought has a linguistic format. This is because such a system, not just one that comes in degrees of LoT-ishness but one that instantiates these degrees in interlocking representations that work together, must be explained through the connections between formats rather than solely through computations within a format. This renders the language-of-thought hypothesis somewhat toothless.

The second issue is about change in format over time. The authors take the relationship between more and less abstract representational systems to be one of realization, in the good case. Thinking about goal information in memory suggests an alternative: Even functional similarity may be a temporary and shallow equivalence.

If we start by looking at the cognition of a creature that has already learned a model of the world, over a period in which no substantial learning occurs, we should expect a period of equivalence between representational structures. But this equivalence need not characterize how the representations were learned, nor how they will shift and change, any more than we should expect a neural network that produces human-like behavior to have acquired its expertise in a way even vaguely related to the way a human would. And once the structures have developed, their change, decay, or warping under pressure might again break the equivalence.

Memory structures, once hypothesized to be fully objective and agent-indifferent, now seem to contain some elements of goal or behavior dependence. The lesson for the LoT theory is that representational structures are ultimately distinguished in learning, decay, and other forms of change, not in stasis.

Competing interest

None.

References

Dayan, P. (1993). Improving generalization for temporal difference learning: The successor representation. Neural Computation, 5(4), 613–624.
Gershman, S. J. (2018). The successor representation: Its computational logic and neural substrates. Journal of Neuroscience, 38(33), 7193–7200.
Momennejad, I., Russek, E. M., Cheong, J. H., Botvinick, M. M., Daw, N. D., & Gershman, S. J. (2017). The successor representation in human reinforcement learning. Nature Human Behaviour, 1(9), 680–692.
O'Keefe, J., & Nadel, L. (1978). The hippocampus as a cognitive map. Clarendon Press.
Ormond, J., & O'Keefe, J. (2022). Hippocampal place cells have goal-oriented vector fields during navigation. Nature, 607(7920), 741–746.