
Meta-learning goes hand-in-hand with metacognition

Published online by Cambridge University Press: 23 September 2024

Chris Fields
Affiliation:
Allen Discovery Center, Tufts University, Medford, MA, USA fieldsres@gmail.com https://chrisfieldsresearch.com
James F. Glazebrook*
Affiliation:
Department of Mathematics and Computer Science, Eastern Illinois University, Charleston, IL, USA jfglazebrook@eiu.edu Adjunct Faculty (Mathematics), University of Illinois at Urbana-Champaign, Urbana, IL, USA https://faculty.math.illinois.edu/glazebro/
*Corresponding author.

Abstract

Binz et al. propose a general framework for meta-learning and contrast it with built-by-hand Bayesian models. We comment on some architectural assumptions of the approach, its relation to the active inference framework, its potential applicability to living systems in general, and the advantages of the latter in addressing the explanation problem.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

Binz et al. craft a comprehensive outline for advancing meta-learning (MetaL) on the basis of several arguments concerning the tractability of optimal learning algorithms, the management of complexity, and integration with the rational aspects of cognition, all seen as basic requirements for a domain-general model of cognition. The architecture comprises an inductive process that learns from experience through repeated interaction with the environment, requiring (i) an inner loop of “base learning,” and (ii) an outer loop (the MetaL process) through which the environment effectively trains the system to improve its inner-loop learning algorithms. A key aspect of the model is its dependence on the relation between the typical duration of a (general, MetaL) problem-solving episode and the typical duration of a (particular, learned) solution.
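This two-loop structure can be made concrete with a minimal sketch of our own; the quadratic toy task, the function names, and the finite-difference meta-gradient are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_loop(theta, lr, target, steps=20):
    """Base learning: gradient descent on one task, here (theta - target)^2."""
    for _ in range(steps):
        theta -= lr * 2.0 * (theta - target)
    return (theta - target) ** 2  # loss of the learned solution

def outer_loop(lr, episodes=200, meta_lr=0.05, eps=1e-2):
    """MetaL: episode-level feedback from the environment adjusts the inner
    loop's learning rate (a metaparameter) across many episodes."""
    for _ in range(episodes):
        target = rng.normal()  # a fresh problem instance for this episode
        # Finite-difference estimate of how episode loss varies with lr
        meta_grad = (inner_loop(0.0, lr + eps, target)
                     - inner_loop(0.0, lr - eps, target)) / (2 * eps)
        lr = float(np.clip(lr - meta_lr * meta_grad, 1e-3, 0.45))
    return lr

print(f"meta-learned inner-loop learning rate: {outer_loop(0.01):.3f}")
```

Note the timescale contrast in even this toy: each call to inner_loop is one problem-solving episode yielding a particular learned solution, while the outer loop's metaparameter persists and improves across episodes.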

While Binz et al. focus on MetaL as a practical methodology for modeling human cognition, it is also interesting to ask how MetaL, as Binz et al. describe it, fits into the conceptual framework of cognition in general, and how it applies both to organisms other than humans and to artificial (or hybrid) systems operating in task environments very different from the human task environment. From a broad perspective, MetaL is one function of metacognition (e.g., Cox, 2005; Flavell, 1979; Shea & Frith, 2019). Both MetaL and metacognition more generally engage memory and attention as they are neurophysiologically enacted by brain regions including the default mode network (Glahn et al., 2010), as reviewed for the two, respectively, in Wang (2021) and Kuchling, Fields, and Levin (2022).

When MetaL is viewed as implemented by a metaprocessor that is a proper component of a larger cognitive system, one can ask explicitly about the metaprocessor's task environment and how it relates to the larger system's task environment. MetaL operates in a task environment of learning algorithms and outcomes, or equivalently, of metaparameters and test scores. How the latter are measured is straightforward for a human modeler employing MetaL as a methodology, but less so when an explicit system-scale architecture must be specified. The question then becomes how the object-level components of a system use the feedback received from the external environment to train the metaprocessor. The answer cannot, on pain of infinite regress, be MetaL. The relative inflexibility of object-level components as “trainers” of their associated metaprocessors effectively bakes some level of non-optimality into any multilayer system.
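A sketch of this layering, under stated assumptions: the scalar test score and the hard-wired hill-climbing rule below are hypothetical illustrations of ours, not a proposal from the target article.

```python
class MetaProcessor:
    """Sees only metaparameters and test scores, never raw stimuli."""
    def __init__(self, lr=0.01):
        self.lr, self.prev_score = lr, float("-inf")

    def update(self, score):
        # Fixed hill-climbing rule: this trainer is hard-wired, not itself
        # meta-learned (avoiding regress), at the price of some non-optimality.
        self.lr *= 1.1 if score > self.prev_score else 0.9
        self.prev_score = score

def object_level_score(rewards):
    """Object-level components compress environmental feedback into the
    single test score that trains the metaprocessor."""
    return sum(rewards) / len(rewards)

mp = MetaProcessor()
mp.update(object_level_score([0.2, 0.4, 0.3]))  # one round of meta-training
```

The design choice to make update a fixed rule is exactly the point at issue: whatever trains the metaprocessor cannot itself be MetaL, so some rigidity is built in at the top of the stack.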

Binz et al. emphasize that MetaL operates on a longer timescale than object-level learning. Given a task environment that imposes selective pressures with different timescales, natural selection will drive systems toward layered architectures that exhibit MetaL (Kuchling et al., 2022). Indeed, the need for a “learning to learn” capability has long been emphasized in the active-inference literature (e.g., Friston et al., 2016). Active inference under the free-energy principle (FEP) is in an important sense “just physics” (Friston, 2019; Friston et al., 2023; Ramstead et al., 2022); indeed, the FEP itself is just a classical limit of the principle of unitarity, that is, of conservation of information (Fields et al., 2023; Fields, Friston, Glazebrook, & Levin, 2022). One might expect, therefore, that MetaL as defined by Binz et al. is not just useful, but ubiquitous in physical systems with sufficient degrees of freedom. As this is at bottom a question of mathematics, testing it does not require experimental investigation.
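A toy simulation of the timescale argument; the drift rates, update intervals, and thresholds below are our own illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

w, lr, err_ema, drift = 0.0, 0.05, 1.0, 0.0
for t in range(5000):
    if t % 500 == 0:                      # slow pressure: the environment drifts
        drift += rng.normal()
    y = drift + rng.normal(scale=0.1)     # fast pressure: noisy samples each step
    err = y - w
    w += lr * err                         # object-level (fast) learning
    err_ema = 0.99 * err_ema + 0.01 * err ** 2
    if t % 100 == 99:                     # meta (slow) layer: adapt the step size
        lr = float(np.clip(lr * (1.25 if err_ema > 0.05 else 0.8), 0.01, 0.5))

print(f"tracking error {abs(drift - w):.3f} with adapted step size {lr:.3f}")
```

Because the two pressures act on different timescales, a single fixed step size cannot serve both; the layered system raises it when the world shifts and lowers it when only noise remains.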

What does call out for experimental investigation is the extent to which MetaL can be identified in systems much simpler than humans. Biochemical pathways can be trained, via reinforcement learning, to occupy different regions of their attractor landscapes (Biswas, Manika, Hoel, & Levin, 2021; Biswas, Clawson, & Levin, 2022). Do sufficiently complex biochemical networks that operate on multiple timescales exhibit MetaL? Environmental exploration and learning are ubiquitous throughout phylogeny (Levin, 2022, 2023); is MetaL equally ubiquitous? Learning often amounts to changing the salience distribution over inputs, or, in Bayesian terms, adjusting the precisions assigned to priors. To what extent can we describe the implementation of MetaL by organisms in terms of adjustments of sensitivity/salience landscapes – and hence attractor landscapes – on the various spaces that compose their umwelts?
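The precision-adjustment reading of learning can be illustrated with a minimal conjugate-Gaussian update; this is an illustration of ours, not an empirical claim about any organism:

```python
import numpy as np

def gaussian_posterior(prior_mu, prior_prec, obs, obs_prec):
    """Precision-weighted Bayesian update for a Gaussian mean."""
    post_prec = prior_prec + len(obs) * obs_prec
    post_mu = (prior_prec * prior_mu + obs_prec * np.sum(obs)) / post_prec
    return post_mu, post_prec

obs = np.array([1.2, 0.9, 1.1])
# Raising the precision assigned to the prior lowers the salience of the
# inputs: the posterior stays near the prior mean regardless of the data.
for prior_prec in (0.1, 1.0, 10.0):
    mu, _ = gaussian_posterior(0.0, prior_prec, obs, obs_prec=1.0)
    print(f"prior precision {prior_prec:5.1f} -> posterior mean {mu:.3f}")
```

Reweighting precisions in this way reshapes which inputs can move the system at all, which is one concrete sense in which a salience landscape, and hence an attractor landscape, can be adjusted.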

As Binz et al. point out, in the absence of a mechanism for concrete mathematical analysis, MetaL forsakes interpretable analytic solutions and hence generates an “explanation problem” (cf. Samek, Montavon, Lapuschkin, Anders, & Müller, 2021). As with deep AI systems more generally, experimental techniques from cognitive psychology may be the most productive approach to this problem for human-like systems (Taylor & Taylor, 2021). Relevant here is an associated spectrum of ideas, including how problem solving is innately perceptual, how inference is “Bayesian satisficing” rather than optimization (Chater, 2018; Sanborn & Chater, 2016), the relevance of heuristics (Gigerenzer & Gaissmaier, 2011; cf. Fields & Glazebrook, 2020), and how heuristics, biases, and confabulation limit reportable self-knowledge (Fields, Glazebrook, & Levin, 2024). Here again, the possibility of studying MetaL in more tractable experimental systems, in which the implementing architecture can be manipulated biochemically and bioelectrically, may offer a way forward not available with either human subjects or deep neural networks.

Financial support

The authors have received no funding towards this contribution.

Competing interests

None.

References

Biswas, S., Clawson, W., & Levin, M. (2022). Learning in transcriptional network models: Computational discovery of pathway-level memory and effective interventions. International Journal of Molecular Sciences, 24, 285.
Biswas, S., Manika, S., Hoel, E., & Levin, M. (2021). Gene regulatory networks exhibit several kinds of memory: Quantification of memory in biological and random transcriptional networks. iScience, 24, 102131.
Chater, N. (2018). The mind is flat: The remarkable shallowness of the improvising brain. Yale University Press.
Cox, M. T. (2005). Metacognition in computation: A selected research review. Artificial Intelligence, 169, 104–141.
Fields, C., Fabrocini, F., Friston, K. J., Glazebrook, J. F., Hazan, H., Levin, M., & Marcianò, A. (2023). Control flow in active inference systems, part I: Classical and quantum formulations of active inference. IEEE Transactions on Molecular, Biological, and Multi-Scale Communications, 9, 235–245.
Fields, C., Friston, K. J., Glazebrook, J. F., & Levin, M. (2022). A free energy principle for generic quantum systems. Progress in Biophysics and Molecular Biology, 173, 36–59.
Fields, C., & Glazebrook, J. F. (2020). Do process-1 simulations generate the epistemic feelings that drive process-2 decision making? Cognitive Processing, 21, 533–553.
Fields, C., Glazebrook, J. F., & Levin, M. (2024). Principled limitations on self-representations for generic physical systems. Entropy, 26(3), 194.
Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive-developmental inquiry. American Psychologist, 34, 906–911.
Friston, K. J. (2019). A free energy principle for a particular physics. Preprint, arXiv:1906.10184.
Friston, K. J., Da Costa, L., Sakthivadivel, D. A. R., Heins, C., Pavliotis, G. A., Ramstead, M. J., & Parr, T. (2023). Path integrals, particular kinds, and strange things. Physics of Life Reviews, 47, 35–62.
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., O'Doherty, J., & Pezzulo, G. (2016). Active inference and learning. Neuroscience & Biobehavioral Reviews, 68, 862–879.
Gigerenzer, G., & Gaissmaier, W. (2011). Heuristic decision making. Annual Review of Psychology, 62, 451–482.
Glahn, D. C., Winkler, A. M., Kochunov, P., & Blangero, J. (2010). Genetic control over the resting brain. Proceedings of the National Academy of Sciences of the USA, 107(3), 1223–1228.
Kuchling, F., Fields, C., & Levin, M. (2022). Metacognition as a consequence of competing evolutionary time scales. Entropy, 24, 601.
Levin, M. (2022). Technological approach to mind everywhere: An experimentally-grounded framework for understanding diverse bodies and minds. Frontiers in Systems Neuroscience, 16, 768201.
Levin, M. (2023). Darwin's agential materials: Evolutionary implications of multiscale competency in developmental biology. Cellular and Molecular Life Sciences, 80(6), 142.
Ramstead, M. J., Sakthivadivel, D. A. R., Heins, C., Koudahl, M., Millidge, B., Da Costa, L., … Friston, K. J. (2022). On Bayesian mechanics: A physics of and by beliefs. Interface Focus, 13, 20220029.
Samek, W., Montavon, G., Lapuschkin, S., Anders, C. J., & Müller, K.-R. (2021). Explaining deep neural networks and beyond: A review of methods and applications. Proceedings of the IEEE, 109, 247–278.
Sanborn, A. N., & Chater, N. (2016). Bayesian brains without probabilities. Trends in Cognitive Sciences, 20(12), 883–893.
Shea, N., & Frith, C. D. (2019). The global workspace needs metacognition. Trends in Cognitive Sciences, 23, 560–571.
Taylor, J. E. T., & Taylor, G. W. (2021). Artificial cognition: How experimental psychology can help generate artificial intelligence. Psychonomic Bulletin and Review, 28, 454–475.
Wang, J. X. (2021). Meta-learning in artificial and natural intelligence. Current Opinion in Behavioral Sciences, 38, 90–95.