
Integrative experiments require a shared theoretical and methodological basis

Published online by Cambridge University Press:  05 February 2024

Pietro Amerio*
Affiliation: Consciousness Cognition and Computation Group, Center for Research in Cognition & Neurosciences, Université Libre de Bruxelles, Brussels, Belgium. pietro.amerio@ulb.be; https://axc.ulb.be/

Nicolas Coucke
Affiliation: Consciousness Cognition and Computation Group, Center for Research in Cognition & Neurosciences, Université Libre de Bruxelles, Brussels, Belgium; IRIDIA, Université Libre de Bruxelles, Brussels, Belgium. nicolas.coucke@ulb.be

Axel Cleeremans
Affiliation: Consciousness Cognition and Computation Group, Center for Research in Cognition & Neurosciences, Université Libre de Bruxelles, Brussels, Belgium. axel.cleeremans@ulb.be; https://axc.ulb.be/

*Corresponding author.

Abstract

Creating an integrated design space can be successful only if researchers agree on how to define and measure the phenomenon of interest. Adversarial collaborations and mathematical modeling can aid in reaching the necessary level of agreement when researchers start from different theoretical perspectives.

Type: Open Peer Commentary
Copyright: © The Author(s), 2024. Published by Cambridge University Press

We agree with Almaatouq et al.'s target article that there is a need to address the incommensurability of behavioral experiments, and we support the proposed integrative design framework. However, we would like to highlight that the incommensurability of experimental results might stem not only from differences in experimental conditions or populations, but also from disagreements about how to define and measure the phenomenon of interest. While reaching a consensus is not strictly necessary for research to progress, we argue that the success of the integrative approach critically depends on finding agreed-upon theoretical and methodological frameworks.

As an analogy, imagine two researchers, Scarlett and Amber, who study the phenomenon of “pinkness.” Scarlett uses a design space with three dimensions, corresponding to the three base colors of the RGB system (i.e., red, green, and blue). After experimenting with various color combinations, she identifies the region of RGB space in which the color pink is produced. Amber, however, defined her experiments in the CMYK color space. How can Scarlett and Amber's experiments be integrated into a single design space? Two conditions must be met. First, the definition of what “pink” is must be shared between the two scientists. If the range of colors that Amber classifies as “pink” is wider than Scarlett's, then mapping the results of their experiments onto each other is meaningless. Once there is agreement on the definition of pinkness, the second condition is to have a means of translating the ranges under which the phenomenon occurs from RGB space to CMYK space. Such a translation is quite straightforward in our analogy, but it can become far more complex when real experimental paradigms are involved.
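To make the translation concrete, here is a minimal sketch (in Python, with purely hypothetical region boundaries) of how a “pink” region identified by Scarlett in RGB coordinates could be re-expressed in Amber's CMYK coordinates, assuming the standard RGB-to-CMYK conversion formula.

```python
def rgb_to_cmyk(r, g, b):
    """Convert normalized RGB values (0-1) to CMYK using the standard formula."""
    k = 1 - max(r, g, b)
    if k == 1:  # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return c, m, y, k

# Hypothetical "pink" region identified by Scarlett in RGB space
scarlett_pink_rgb = {"r": (0.9, 1.0), "g": (0.4, 0.8), "b": (0.5, 0.8)}

# Translate the corners of Scarlett's region into Amber's CMYK coordinates
corners = [
    (r, g, b)
    for r in scarlett_pink_rgb["r"]
    for g in scarlett_pink_rgb["g"]
    for b in scarlett_pink_rgb["b"]
]
cmyk_corners = [rgb_to_cmyk(*corner) for corner in corners]
print(cmyk_corners)  # the same "pinkness" region, expressed in Amber's coordinates
```

Note that a rectangular region in RGB does not map onto a rectangular region in CMYK, so even this toy translation is only exact at the sampled corners; real experimental paradigms raise the same difficulty in a far less tractable form.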

Our main point here is that the integrative approach can only be successful when researchers agree on the definition of their phenomenon of interest. This is crucial because the definition affects experiment design. As a concrete example, consider perceptual awareness studies, where different researchers have used many different measures of awareness (Timmermans & Cleeremans, 2015). A recent meta-analysis by Yaron, Melloni, Pitts, and Mudrik (2022) found a clear association between the methodological design of an experiment and the theory of consciousness favored by the researchers: An algorithm could even predict which theory an experiment was testing based on its methodology alone! A central issue in this literature concerns whether awareness of a sensory stimulus should be measured subjectively (i.e., via explicit reports from the participants) or objectively (i.e., as performance in a forced-choice task). Crucially, the two methods rest on different definitions of awareness. Objective measures assume that participants can discriminate stimuli correctly only if they are aware of them, while subjective measures rest on the assumption that awareness can diverge from discrimination performance. The difference is not trivial because it forces researchers to adopt substantially different experimental strategies. When testing for the existence of unconscious perception, for example, subjective approaches relate explicit report to discrimination performance, while objective approaches compare discrimination performance to implicit measures of perceptual processing, such as reaction times (e.g., Dehaene et al., 1998). Integrating these two research lines in a common experimental space would be unsuccessful because, depending on the definition of awareness one adopts, the task design, the collected measures, and the interpretation of results will differ.
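To illustrate how the two definitions pull the analysis in different directions, here is a minimal sketch on hypothetical trial-level data (not drawn from any published study): the objective approach summarizes discrimination with a signal-detection d′ regardless of what participants report, whereas the subjective approach asks whether performance exceeds chance on trials reported as “unseen.”

```python
from statistics import NormalDist, mean

# Hypothetical trial-level data: each trial records the true stimulus category,
# the forced-choice response, and a binary subjective visibility report.
trials = [
    {"stimulus": "A", "choice": "A", "seen": True},
    {"stimulus": "A", "choice": "A", "seen": False},
    {"stimulus": "A", "choice": "B", "seen": False},
    {"stimulus": "A", "choice": "A", "seen": True},
    {"stimulus": "B", "choice": "B", "seen": True},
    {"stimulus": "B", "choice": "A", "seen": False},
    {"stimulus": "B", "choice": "B", "seen": False},
    {"stimulus": "B", "choice": "B", "seen": True},
]

z = NormalDist().inv_cdf  # probit transform used in signal detection theory

def d_prime(trials):
    """Objective definition: sensitivity computed from discrimination alone."""
    # (real analyses correct hit/false-alarm rates of exactly 0 or 1 before the transform)
    hit_rate = mean(t["choice"] == "A" for t in trials if t["stimulus"] == "A")
    fa_rate = mean(t["choice"] == "A" for t in trials if t["stimulus"] == "B")
    return z(hit_rate) - z(fa_rate)

def unseen_accuracy(trials):
    """Subjective definition: accuracy on trials the participant reports not seeing."""
    unseen = [t for t in trials if not t["seen"]]
    return mean(t["choice"] == t["stimulus"] for t in unseen)

# Objective reading: d' > 0 is taken as evidence of awareness.
# Subjective reading: above-chance accuracy on "unseen" trials is taken as
# evidence of perception *without* awareness.
print(f"d' = {d_prime(trials):.2f}, accuracy on 'unseen' trials = {unseen_accuracy(trials):.2f}")
```

The same data thus license different conclusions about awareness depending on which readout is taken to define it, which is exactly why merging the two paradigms into one design space is not a purely parametric exercise.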

One could attempt to circumvent the problem by placing the two approaches into one large design space and connecting them along an additional dimension representing the “measurement method.” However, this strategy is only feasible when the measures share the same theoretical basis. As mentioned in the target article, the design space should reveal the conditions under which a phenomenon emerges. To fulfill this function, it is crucial that the phenomenon of interest is precisely defined. Experiments that rest on opposing theoretical views are generally aimed at detecting (slightly) differently defined phenomena. As such, forcing them into a common design space means building a space in which the effect of a particular range of parameters remains ambiguous. In addition, even when the definition of the phenomenon is agreed upon, each experimental paradigm comes with a set of paradigm-specific parameters. Thus, their integration would require a method of mapping different design spaces onto each other, which might not be straightforward. Below, we suggest two potential tools for resolving such conundrums.

The first is adversarial collaborations. These initiatives bring together researchers with contrasting theoretical views and motivate them to design experiments that directly test one theory against another (Cleeremans, 2022; Melloni, Mudrik, Pitts, & Koch, 2021). Such approaches are currently flourishing in consciousness research (e.g., Melloni et al., 2023). As discussed above, the definition of the phenomenon of interest (i.e., its theoretical basis) is crucial for constructing the design space. By testing predictions of different theories against one another, adversarial collaborations can help researchers decide on one definition around which to build (and explore) a full design space. In parallel, adversarial collaborations can result in agreed-upon methods for mapping theoretical frameworks onto one another.

The second tool we recommend is mathematical modeling, which can be particularly helpful when results from different experimental paradigms need to be related. The shape of the design space is specific to the paradigm, and relating spaces with different shapes is not always possible. Modeling helps in this task by creating a shared analysis space in which results obtained via different measures can be juxtaposed. We draw another example from perceptual awareness research: King and Dehaene (2014) were able to juxtapose results from six major lines of experiments by constructing an overarching mathematical framework in which results stemming from otherwise unconnectable design spaces can be directly compared.
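As a purely illustrative sketch of what such a shared analysis space can look like (this is not King and Dehaene's actual model), one can posit a single latent evidence variable from which both an objective forced-choice decision and a subjective “seen” report are read out; results from paradigms that measure either quantity can then be compared within the same model.

```python
import random
from statistics import mean

random.seed(1)

def simulate_trial(signal_strength, report_criterion=1.0):
    """Toy evidence-readout model (an illustration, not any published framework):
    one latent evidence sample drives both the objective decision and the
    subjective report, which uses its own, stricter criterion."""
    evidence = random.gauss(signal_strength, 1.0)  # shared internal evidence
    objective_correct = evidence > 0               # forced-choice readout
    subjective_seen = evidence > report_criterion  # report readout
    return objective_correct, subjective_seen

def summarize(signal_strength, n_trials=10_000):
    """Simulate many trials and return accuracy and subjective report rate."""
    results = [simulate_trial(signal_strength) for _ in range(n_trials)]
    accuracy = mean(correct for correct, _ in results)
    seen_rate = mean(seen for _, seen in results)
    return accuracy, seen_rate

# Paradigms that measure accuracy and paradigms that measure reports probe
# different readouts of the same latent variable, so their results can be
# juxtaposed in the model's shared parameter space.
for strength in (0.0, 0.5, 1.0):
    accuracy, seen_rate = summarize(strength)
    print(f"signal={strength}: accuracy={accuracy:.2f}, reported seen={seen_rate:.2f}")
```

At intermediate signal strengths this toy model already yields above-chance accuracy alongside low report rates, illustrating how a common modeling space lets objective and subjective results be compared rather than merely listed side by side.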

In conclusion, while we strongly support the integrative experiments approach, our commentary highlights that it might not be possible to reconcile experiments that adopt different theoretical views on the effect of interest. As such, the usefulness of the approach might depend on researchers' agreement on adequate measures and theories. When such agreement is lacking, tools such as adversarial collaborations and mathematical models can help construct a common design space or connect otherwise isolated spaces.

Financial support

P. A. was supported by F.R.S.-FNRS Research Project T003821F (40003221), awarded to Axel Cleeremans. N. C. was supported by the program of Concerted Research Actions (ARC) of the Université Libre de Bruxelles. A. C. is a research director with the F.R.S.-FNRS (Belgium).

Competing interest

None.

References

Cleeremans, A. (2022). Theory as adversarial collaboration. Nature Human Behaviour, 6(4), 485–486. https://doi.org/10.1038/s41562-021-01285-4
Dehaene, S., Naccache, L., Le Clec'H, G., Koechlin, E., Mueller, M., Dehaene-Lambertz, G., … Le Bihan, D. (1998). Imaging unconscious semantic priming. Nature, 395(6702), 597–600. https://doi.org/10.1038/26967
King, J.-R., & Dehaene, S. (2014). A model of subjective report and objective discrimination as categorical decisions in a vast representational space. Philosophical Transactions of the Royal Society B: Biological Sciences, 369(1641), 20130204. https://doi.org/10.1098/rstb.2013.0204
Melloni, L., Mudrik, L., Pitts, M., Bendtz, K., Ferrante, O., Gorska, U., … Tononi, G. (2023). An adversarial collaboration protocol for testing contrasting predictions of global neuronal workspace and integrated information theory. PLoS ONE, 18(2), e0268577. https://doi.org/10.1371/journal.pone.0268577
Melloni, L., Mudrik, L., Pitts, M., & Koch, C. (2021). Making the hard problem of consciousness easier. Science, 372(6545), 911–912. https://doi.org/10.1126/science.abj3259
Timmermans, B., & Cleeremans, A. (2015). How can we measure awareness? An overview of current methods. In M. Overgaard (Ed.), Behavioral methods in consciousness research (pp. 21–46). Oxford University Press. https://doi.org/10.1093/acprof:oso/9780199688890.003.0003
Yaron, I., Melloni, L., Pitts, M., & Mudrik, L. (2022). The ConTraSt database for analysing and comparing empirical studies of consciousness theories. Nature Human Behaviour, 6(4), 593–604. https://doi.org/10.1038/s41562-021-01284-5