
Neurological evidence for the context-independent multisensorial semantics of ideophones in Pastaza Kichwa: an fNIRS study in the Ecuadorian Amazon

Published online by Cambridge University Press:  15 January 2024

Dan P. Dewey*
Affiliation:
Department of Linguistics, Brigham Young University, Provo, UT, USA
Jeffrey J. Green
Affiliation:
Department of Linguistics, Brigham Young University, Provo, UT, USA
Janis Nuckolls
Affiliation:
Department of Linguistics, Brigham Young University, Provo, UT, USA
Auna Nygaard
Affiliation:
Independent Researcher, Orlando, FL, USA
Todd D. Swanson
Affiliation:
SHPRS Department, Arizona State University, Tempe, AZ, USA
Corresponding author: Dan P. Dewey; Email: ddewey@byu.edu

Abstract

Ideophones – imitative words using the stream of speech to simulate/depict the rise and fall of sensory perceptions and emotions and temporal experiences of completiveness, instantaneousness, and repetitiveness – have been characterized as semantically empty and context-dependent. The research reported here tested a simple schematic for the semantic categories of Pastaza Kichwa ideophones by tracking neurological responses to ideophones categorized as VISUAL, MOTION, and SOUND. Seventeen native speakers of Pastaza Kichwa listened to audio clips of ideophones extracted from sentential contexts. Subjects’ neural activity was assessed using functional near-infrared spectroscopy. Results demonstrate that these posited semantic categories activate areas of the brain associated with visualization, motion, and sound processing and production, respectively. This suggests that these ideophones convey semantic information related to these concepts, independent of context. This contrasts with what would be expected by theories suggesting that ideophones on their own are semantically empty. The data give rise to questions regarding whether any language contains only sound ideophones that do not carry additional sensory information and whether ideophones in previous studies treated strictly as sound ideophones might require greater specification of their semantics, specifically from a multisensorial perspective.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

Ideophones are imitative words featuring linguistic sounds, bodily gestures, intonation, and facial expressions to depict sensory perceptions, emotions, and temporal experiences of completiveness, instantaneousness, and repetitiveness. They have also been succinctly defined as an open class of marked words (Dingemanse, 2019), and their formal properties have received ample attention (Akita, 2016; Dingemanse & Akita, 2017; Voeltz & Kilian-Hatz, 2001). However, fewer studies have focused on the meanings they can depict independent of context, specifically in terms of sensory features.

The existence of semanticity for ideophones has even been called into question. Bodomo (2006, p. 207) has cautiously stated that ideophones ‘do not seem to have independent semantics’, although he concedes that ‘it is not impossible to pin them down and assign denotational dictionary meanings to them’. Moshi (1993), arguing for a functional definition of KiVunjo Chaga (Bantu) ideophones, has asserted that they are ‘semantically empty but context dependent’ (p. 190). Although the term ‘semantically empty’ appears several times throughout her analysis, in her conclusion she softens this position, stating that ideophones’ meanings are ‘often dependent’ on their contexts (p. 210). Similarly, de Schryver (2009) has characterized ideophone semantics as ‘highly elusive’ (p. 34).

The research for this article is motivated by the assumption that there is some multisensorial representation in the brain, however minimal, that guides speakers’ conceptualization of ideophones and is independent of their contextual circumstances. We therefore hypothesized that correlations exist between posited semantic features of ideophones from the Pastaza and Upper Napo Kichwa varieties spoken in eastern Amazonian Ecuador (ISO codes: qvo and quw) and subjects’ neural responses to audio clips of ideophones isolated from sentential contexts.

The remainder of the introduction describes previous work on neuroimaging ideophones’ semantics and specifies the nature of the ideophones investigated in the current study. Sections 2 and 3 outline our data collection methods and results. Section 4 discusses our results and their implications.

1.1. Neuroimaging and ideophones

Propositions regarding perceived direct or indirect connections between linguistic forms such as sounds or words and meaning (i.e., iconicity), as opposed to arbitrariness, have attracted ‘fast growing interest’ in linguistics, but Lockwood and Dingemanse, in their 2015 review of iconicity research, note that ‘there is still a relative dearth of experimental research on sound symbolism’ (p. 2). Compounding this dearth, most studies of sound symbolism focus on phonemes rather than on lexical items or ideophones. In a study of sound symbolism involving ideophones in five languages, Dingemanse et al. (2016) concluded that, although individual sound segments supported the guessing of word meaning, sound ‘segments and prosody work together as cues supporting iconic interpretations. The findings cast doubt on attempts to ascribe iconic meanings to segments alone and support a view of ideophones as words that combine arbitrariness and iconicity’ (p. e117). Onomatopoeia, ideophones that mimic sound events (e.g., animal or human cries), have been studied at the neurological level more than other ideophone types, and the research has regularly supported semanticity and sound symbolism. For example, Peeters (2016) investigated the neural processing of onomatopoeic versus arbitrary words in Dutch by measuring participants’ N400 response. The N400 is a negative-going event-related potential (ERP) component peaking around 300–500 ms after the presentation of a word. It is sensitive to lexical retrieval difficulty, which can be affected by factors such as the degree to which the target word’s semantics match the preceding context (Kutas & Federmeier, 2011). Peeters’s results showed that onomatopoeia were processed as quickly and accurately as arbitrary words behaviorally, but that onomatopoeic words elicited a reduced N400 response compared to arbitrary words. Peeters explains these results as indicating ‘facilitated access to the lexicon … in case of an iconic mapping between form and meaning’ (p. 1640). Egashira et al. (2015) found a similar effect in Japanese, with a reduced N400 for onomatopoeia in comparison to control words.

In contrast, Vigliocco et al. (2019) found an increased N400 for onomatopoeia compared to arbitrary words in English. In their study, participants were presented with pairs of words that were either semantically related or unrelated. Critically, they compared pairs whose second word was onomatopoeic to pairs with arbitrary words. They found that within semantically unrelated word pairs, the condition most similar to Egashira et al. (2015) and Peeters (2016), onomatopoeic words elicited an increased N400 response compared to arbitrary words. Although this finding contrasts with those of Peeters and of Egashira et al., Vigliocco et al. have argued that it reflects enhanced semantic activation for the onomatopoeic words, potentially due to a richer semantic representation than for arbitrary words. Previous research has thus argued that the processing of onomatopoeic words involves either facilitated access to the lexicon due to the iconic mapping between form and meaning, resulting in a reduced N400 (Egashira et al., 2015; Peeters, 2016), or richer semantic representations and therefore enhanced semantic activation, leading to an enhanced N400 (Vigliocco et al., 2019). Differences in the results of these studies may be due at least in part to the languages under investigation or the different tasks used. Importantly, however, both findings suggest that the processing of onomatopoeia involves accessing semantic representations, which provides evidence for their semanticity.

Further evidence on the neural processing of onomatopoeia comes from Han et al. (2005a), who presented Korean-speaking participants with sets of words, onomatopoeia, and non-words in a lexical decision task. Using functional magnetic resonance imaging (fMRI), they found that the onomatopoeic stimuli resulted in significant bilateral activation of the superior temporal cortex, the inferior frontal gyrus, and the fusiform gyrus. The authors argued that this was due to the involvement of cognitive processes related to interpreting the onomatopoeia: the superior temporal cortex is related to sound processing, the inferior frontal gyrus is involved in word recognition, and the fusiform gyrus in facial recognition. Activation in each of these areas can plausibly be connected with sound mimicry or with the biological entities being mimicked in onomatopoeia. Similarly, Röders et al. (2022), using magnetoencephalography (MEG), determined that onomatopoeic words engage the auditory cortex more than non-onomatopoeic words. Hashimoto et al. (2006) determined that onomatopoeia related to animal sounds triggered activity bilaterally in the transverse temporal gyrus, roughly overlapping with the auditory cortex. These studies therefore provide further evidence that onomatopoeic words evoke meaning in ways that may overlap with, but also go beyond, simple word recognition.

However, onomatopoeia constitute only one category of ideophones. In his cross-linguistic survey of ideophone research, Dingemanse (2012) notes that ‘ideophone systems extend a bit beyond onomatopoeia by also including depictions of movement, often combined with sound’. They can also ‘cover an even wider range of sensory imagery, including … visual patterns, shapes, tastes, textures, inner feelings, and so on’ (p. 663). Neurological studies covering these other ideophone varieties are few in number. In one such study, Han et al. (2005b) examined neural activity for onomatopoeic and phainomime (i.e., phenomime) words. The latter, common in Korean, entail physical movement, action, or appearance. Phainomime words generated activity greater than that of onomatopoeia in the bilateral cerebellum and the left supplementary motor area (SMA), supporting a connection with motion conceptualization. Osaka has found that ideophones related to walking (Osaka, 2009), laughing and crying (Osaka, 2011), and gazing (Osaka & Osaka, 2009) all activated corresponding areas of the brain, and Aryani et al. (2019) have discovered that German affective ideophones activated the left amygdala, an area associated with language-induced emotional processing.

Other than Han et al. (2005b), research comparing neurological responses across several ideophone types has yet to be conducted. Furthermore, studies comparing ideophones with actual non-ideophonic words, rather than pseudowords, are rare. The current study therefore compares responses across ideophone types as well as between ideophonic and non-ideophonic words.

1.2. Sensori-semantic map of Pastaza Kichwa ideophones

Archived audiovisual data consisting of over 500 ideophone tokens have been documented over the past 10 years from two closely related dialects – the Pastaza Kichwa (PK) and Upper Napo Kichwa (NK) dialects of Amazonian Ecuador (see http://quechuarealwords.byu.edu). Examples are drawn from many different contexts, including informal conversations, classroom elicitation sessions, personal experiences, and traditional narratives.

Based on observations of these archived examples, Nuckolls (2019) has analyzed ideophones’ semantic properties with 10 sensory categories. These categories consist of three overarching ‘supercategories’ as well as seven subcategories entailed by the supercategories. Figure 1 outlines how these categories are interrelated.

Figure 1. Sensori-semantic map of Kichwa ideophones.

Nuckolls (2019) analyzed ideophone semantics by observing ideophones’ use, including any special gestural or intonational effects, in relation to their sentential contexts. Ideophones were labeled with sensory features based on inferences about the perceptual experiences available to speakers: VISUAL ideophones analyzed as depictive of color and pattern are evident through the visual senses, just as configurational and haptic perceptions are evident when motion is perceived, and emotive and cognitive messages are evident when a sound is heard. Regarding the VISUAL subcategories, patterns include linearity, distributedness throughout a space, and curvedness. Colors and patterns are often ideophonically depicted as a visual expanse.

Concerning MOTION subcategories, configurationality is relevant when an ideophone depicts motion that has a discernible shape or outline, or comes to rest in a distinctive shape. Haptic ideophones involve contact between any types of surfaces, whether mediated through skin or not. Proprioceptive ideophones are depictive of an inner sensation of pressure, texture, or temperature. They are a subcategory of haptic because such perceptions are typically described as experiences of motion that press upon, touch, pierce, or move within one’s body.

SOUND ideophones depict any kind of sound-making, often produced through a vocal tract, whether by humans or non-humans. Sound ideophones may also depict sounds created through stridulation, the rubbing together of body parts by some insects such as grasshoppers. Among SOUND ideophones, those categorized with the cognition subcategory are said by Kichwa speakers to express a thought, idea, or warning by means of an ideophone. Speakers will sometimes supply a human-language utterance that is said to ‘translate’ such an ideophone’s meaning. Typically, such sounds are said to express warnings or notifications that humans are thought to benefit from – warnings and notifications that might elicit emotions such as fear or apprehension, or cognition such as goal-directed planning or acting. For example, different ideophones depictive of a squirrel cuckoo bird’s varied sounds will be said to carry different messages: the bird uttering chik means ‘you are on the right track’, that is, a hunter is moving in an optimal location to catch game (i.e., coming close to goal accomplishment), while the same bird uttering chikwan means ‘you are going in the wrong direction’. Ideophonic sounds encoded with emotion may be said to express joy, sadness, or anger and are also typically depictions of non-human sounds, especially those of birds. Our approach considers imitations of sound arising from a vocal tract, whether of a human or a non-human, to be examples of pure SOUND ideophones. Other kinds of sounds, such as the sound of an object hitting the ground, would instead be categorized in our schema as SOUND + MOTION – Haptic, because the motion of the object results in contact with the ground, thus creating a sound.

1.3. Evidence for ideophones’ semantic features from verbs and gesture types

In addition to being characterized based on inferences about the perceptual evidence available to speakers using them, ideophones’ sensory features have also been analyzed based on inferences from the sentential contexts of their use. If an ideophone is used with a verb, the verb itself will often implicate sensory features. The verbs uyarina ‘to sound’, wakana ‘to cry’, kantana ‘to sing’, and ronkana ‘to snore’, for example, obviously involve the supercategory of SOUND. The verb rikurina ‘to appear’ encodes the VISUAL supercategory. The verb tsagmana ‘to hit’ is a MOTION verb and also entails the subcategory Haptic because the action of hitting involves motion that makes contact with a surface.

In some usages, however, an ideophone will occur with a verb that is ‘non-committal’ with respect to sensory features. Examples of such verbs are ‘to be’ or ‘to become’. And in some instances, an ideophone may occur with no verb whatsoever. Whether an ideophone occurs with or without a verb, gestures and their patterns of occurrence – and even non-occurrence – with ideophones can assist in interpreting their semantics. Nuckolls (2021) has found that ideophones depictive of visuality, for example, are often used with bounding gestures. Bounding gestures help express a speaker’s message by depicting the expanse of a visual perception, often by sweeping through the air with hands and arms. Figure 2 illustrates a speaker’s use of a bounding gesture accompanying the ideophone chem (tɕem) to depict the appearance of a tree filled with fruit. The speaker depicts the expansiveness of the fruit with her sweeping motion.

Figure 2. Bounding gesture accompanying ideophone ‘tɕem’.


One possible explanation for such uses of bounding gestures is that the visual field is often apprehended by scanning one’s eyes across a fixed area of space. As physical acts, then, bounding gestures might be imitating this embodied understanding of visuality as a scanning process.

In contrast with VISUAL ideophones, those that are exclusively depictive of sound tend to be used without gestures (Nuckolls, 2021). The gestural impoverishment of SOUND-only ideophones may, within the animistic context of Amazonian Kichwa culture, be motivated by their status as speech reports from the natural world. Speech reports, or quoted speech fragments, are predicted to occur without gestures in the sketch model of gesture formulated by de Ruiter (2000).

Ideophones depictive of motion have the greatest semantic complexity, congruent with their greater sensorial complexity in comparison with vision and sound. What is seen or heard can be apprehended without additional sensory stimuli or faculties: a sound can be heard even when its source is not visually perceived, and a visual image can be seen without any movement or sound. Whatever is perceived to be in motion, by contrast, is simultaneously either seen or heard. Therefore, ideophones expressive of and categorized as MOTION are also encoded for the VISUAL or SOUND sensory categories, and sometimes both. The Kichwa ideophone bhux, for example, depicts a freshwater dolphin bursting out from underwater – a complex depiction involving visuality, movement, and sound. The ideophone pusking, by contrast, depicts a slumping movement, which involves visuality as well as movement, but no sound. In keeping with their greater semantic complexity, MOTION ideophones are accompanied by the greatest variety of iconic gestures (Nuckolls, 2021).

1.4. Description of four ideophone categories tested

Having discussed how the sensori-semantic categories may be justified through inferences about perceptual, lexical, and gestural evidence, we turn now to a discussion of the four categories of ideophones we examined in our experiment. Ideophones in each category were selected based on the shared presence or absence of VISUAL, SOUND, or MOTION features. Category 1 consists of 27 audio clips of ideophones depictive of visual imagery of color and pattern. Included in this category are eight ordinary words for visual phenomena that are used ideophonically. Ideophonized words receive a special suffix, either -ng, which tends to be used with words for actions or ongoing activities, or -xlla, which tends to be used for qualities. Whether an ordinary word is ideophonized with -ng or -xlla, its foregrounded, performative pronunciation and accompanying gestures indicate its ideophonic status. For example, when the word ñambi ‘path’ is used ideophonically, it becomes ñambing and may be used to depict the trace of a path visually evident after a large, heavy snake passed by, leaving its impression on the weeds it crushed while moving. Category 1 also includes color words used ideophonically.

Category 2 consists of 28 ideophones depictive of visual phenomena that also involve motion in distinctive visual configurations. For example, the ideophone pital depicts legs dangling in midair. The ideophone talaw depicts the collapsing movement of a body as it falls. The ideophone aki depicts a swaying back-and-forth movement.

Category 3 consists of 36 ideophones depictive of sounds, almost all of which are made by non-humans. Many sound ideophones are depictions of bird sounds, but there are also sounds of frogs, forest pigs, sloths, atmospheric phenomena such as thunder, and even sounds said to be made by forest spirits. A number of these sounds are said by speakers to communicate emotions such as sadness, joy, or anger. Some are categorized as cognitive because they are said to communicate messages such as warnings for humans.

Category 4 consists of 14 ideophones depictive of movements making auditorily evident contact with surfaces. Examples of such ideophones include taras, depicting a sound made by an entity moving and brushing against foliage. Other examples include potong, a sound heard when something falls and hits the ground, and tsapak, a sound and movement of waves of water slapping against each other when a motorized boat goes by.

To summarize, our four categories represent a range of sensory distinctions. Category 1 ideophones are purely VISUAL, including the subcategories of color and pattern. Category 2 ideophones depict visuality and movement (VISUAL + MOTION), including the subcategory of configuration. Category 3 ideophones are thought to be purely depictive of sounds (SOUND), including the subcategories of emotion and cognition. Category 4 ideophones depict sounds and movements that make contact with a surface (SOUND + MOTION), including the haptic subcategory. Categories 1 and 3 are relatively simple in that they each involve only one supercategory, either VISUAL or SOUND. Categories 2 and 4, by contrast, add the MOTION supercategory to visuality and to sound. We hypothesized that the brain areas activated in our experiment would reflect the meaning associated with the four different categories. We expected to find a difference between the responses to VISUAL-related (Categories 1 and 2) versus SOUND-related (Categories 3 and 4) ideophones, and between MOTION (2 and 4) and non-MOTION (1 and 3) ideophones. Although Categories 1 and 2 both convey visual information, Category 1 ideophones were hypothesized to evoke activation in brain regions related to pattern or color information that is not present in Category 2. Similarly, although Categories 3 and 4 both convey sound information, Category 3, but not Category 4, was hypothesized to evoke brain activation related to cognitive or emotional information. This category structure is also sketched in code after Figure 3, which depicts the four categories with the subcategories we hypothesized to fit.

Figure 3. Four ideophone stimuli categories for current study aligned with features from the sensori-semantic map of Kichwa ideophones in Figure 1.
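To make the hypothesized category structure concrete, here is a minimal sketch (our own illustration, not part of the study’s materials) encoding the four stimulus categories and their sensory features in Python:

```python
# Hypothetical encoding of the four stimulus categories (cf. Figures 1 and 3).
# Category numbers, item counts, and features come from the text above.
STIMULUS_CATEGORIES = {
    1: {"super": {"VISUAL"}, "sub": {"color", "pattern"}, "n_items": 27},
    2: {"super": {"VISUAL", "MOTION"}, "sub": {"configuration"}, "n_items": 28},
    3: {"super": {"SOUND"}, "sub": {"emotion", "cognition"}, "n_items": 36},
    4: {"super": {"SOUND", "MOTION"}, "sub": {"haptic"}, "n_items": 14},
}

def contrast_groups(feature):
    """Split categories into those with and without a given supercategory,
    e.g., contrast_groups("MOTION") -> ({2, 4}, {1, 3})."""
    with_f = {c for c, v in STIMULUS_CATEGORIES.items() if feature in v["super"]}
    return with_f, set(STIMULUS_CATEGORIES) - with_f
```

The MOTION versus non-MOTION contrast reported in Section 3 corresponds to `contrast_groups("MOTION")`, and the VISUAL versus SOUND contrast compares the VISUAL-containing categories {1, 2} with the SOUND-containing categories {3, 4}.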

2. Methodology

2.1. Participants

Seventeen Kichwa speakers completed the experiment at the Iyarina field school in the Ecuadorian Amazon (https://www.iyarina.org/). All gave verbal informed consent and were compensated for their time.

2.2. Stimuli

Stimuli consisted of audio clips of 105 ideophones, of which 27 were thought to be primarily VISUAL in meaning, 28 to be VISUAL + MOTION, 36 to have meanings related primarily to SOUND, and 14 to be SOUND + MOTION. In addition, we presented a total of 20 non-ideophonic words (nouns and verbs), five from each of the same semantic categories as the ideophones. For example, the non-ideophonic word wakana ‘to cry’ belongs to the SOUND category, kaspi ‘branch, stick’ to the VISUAL category, tsagmana ‘to shove’ to the SOUND, MOTION, Haptic category, and muyurina ‘to circle’ to the VISUAL, MOTION, Configurational category. Our ideophone stimuli were decontextualized in two respects: they were removed from their sentential contexts, and the visual aspects of their production, specifically any gestures or facial expressions, were eliminated. The full list of stimuli, along with definitions and audio files for each stimulus, can be found in the Open Science Framework repository (https://osf.io/fqbzp/).

2.3. Procedure

Stimuli were presented while participants’ neural responses were recorded using a portable functional near-infrared spectroscopy (fNIRS) system. In fNIRS imaging, infrared light is sent from a source optode into the scalp and is absorbed or scattered. Because oxygenated and deoxygenated hemoglobin absorb different amounts of light at different wavelengths, changes in their concentrations can be detected based on the amount of light scattered back to a neighboring light detector; the underlying relation is sketched below. This allows for an estimation of blood flow (and thus neural activity) between the source and the detector. A path between a single source and a single detector is called a channel. Details of our recording are given below.
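As standard fNIRS background (our addition, not the authors’ exposition), the measured change in optical density $\Delta A$ at wavelength $\lambda$ over a channel is related to hemoglobin concentration changes by the modified Beer–Lambert law:

$$\Delta A(\lambda) = \bigl(\varepsilon_{\mathrm{HbO}}(\lambda)\,\Delta c_{\mathrm{HbO}} + \varepsilon_{\mathrm{HbR}}(\lambda)\,\Delta c_{\mathrm{HbR}}\bigr)\, d \cdot \mathrm{DPF}(\lambda),$$

where $\varepsilon_{\mathrm{HbO}}$ and $\varepsilon_{\mathrm{HbR}}$ are the extinction coefficients of oxygenated and deoxygenated hemoglobin, $d$ is the source–detector separation, and $\mathrm{DPF}$ is the differential pathlength factor. Because the device measures at two wavelengths, the two resulting equations can be solved for $\Delta c_{\mathrm{HbO}}$ and $\Delta c_{\mathrm{HbR}}$ in each channel.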

In fNIRS research, stimuli have traditionally been presented in blocks, with like stimuli presented one after the other to accumulate a hemodynamic signal over the block. In our study, however, we utilized an event-related design, presenting single ideophones in random order, with like stimuli dispersed accordingly. This design is becoming more common in fNIRS research (cf. Heilbronner & Munte, 2013; Hitomi et al., 2021) and follows a similar long-standing trend in fMRI research, where event-related designs have been chosen for decades for ‘eliminating potential confounds, such as habituation, anticipation, set, or other strategy effects’ (Dale, 1999, p. 109). Predictability of category, order, and so forth can be avoided via event-related designs, in which both the timing and order of events are randomized and the accumulation of the hemodynamic response to the point of saturation is less likely. Furthermore, the use of event-related designs may allow for better comparison with EEG and fMRI research (Li et al., 2020).

Our audio stimuli were presented with a random interval between each (similar to descriptions of ‘jittering’ in Heilbronner & Munte, 2013; Huppert et al., 2009), varying between 2 and 8 s (M = 5.0 s, SD = 1.7 s). While a 4 s minimal interval between stimuli has been recommended (Huppert et al., 2009), some researchers have used jitters ranging between 1 and 2.5 s (Heilbronner & Munte, 2013). We opted for a shorter jitter for practical reasons: to reduce participant discomfort by minimizing scanning time and thereby also to maximize the number of participants we could include during our team’s short stay in Ecuador. Research suggests that general linear modeling (GLM) would allow us to sufficiently capture stimulus-related hemodynamic response functions (HRFs) despite the proximity of events (Plichta et al., 2007), and that it would allow for greater power and efficiency (Heilbronner & Munte, 2013).
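For illustration, a schedule with these properties could be generated as follows. This is a minimal sketch of our own; the paper does not specify the jitter distribution, but a uniform draw on [2, 8] s has a mean of 5.0 s and an SD of about 1.7 s, matching the reported values:

```python
import random

def make_schedule(stimuli, min_isi=2.0, max_isi=8.0, seed=None):
    """Randomize stimulus order and draw a jittered inter-stimulus
    interval (ISI) for each trial, as in an event-related design."""
    rng = random.Random(seed)
    order = list(stimuli)
    rng.shuffle(order)  # random presentation order
    # Uniform jitter on [min_isi, max_isi]: mean 5.0 s, SD ~1.7 s.
    return [(stim, rng.uniform(min_isi, max_isi)) for stim in order]
```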

Stimuli were delivered using PsychoPy (Peirce et al., 2019). After initial instructions, participants listened to the 20 non-ideophones in a single block. Participants were then presented with the 105 randomly ordered ideophones, split into two blocks with an opportunity to rest between blocks. For each item, participants heard a 200 ms beep, followed by 800 ms of silence and then the audio for the word/ideophone. Triggers were sent to the NIRSport2 units at the onset of each word/ideophone to mark events. The average audio stimulus length was 1,600 ms (min = 300 ms, max = 5,000 ms, SD = 1,100 ms). After four of the non-ideophonic words and after 24 of the ideophones, participants were presented with a recorded question in Kichwa, such as ‘Have you heard that before?’ or ‘What does that make you think of?’, to encourage attention. The average total run time of the experiment, not including cap preparation, was 22 min (SD = 3 min). The experimental procedure is illustrated in Figure 4.
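A single trial’s timing could be implemented along the following lines. This is a hedged sketch assuming PsychoPy’s sound and timing APIs; `send_trigger` and the 440 Hz beep frequency are our assumptions, standing in for details (such as the NIRSport2 marker transport) that the text does not specify:

```python
from psychopy import core, sound

def run_trial(audio_path, send_trigger):
    """One trial: 200 ms beep, 800 ms silence, then the stimulus audio,
    with a trigger marking stimulus onset for the fNIRS recording."""
    beep = sound.Sound(value=440, secs=0.2)  # 200 ms beep (440 Hz assumed)
    stim = sound.Sound(audio_path)           # ideophone or word clip
    beep.play()
    core.wait(0.2)                           # let the beep play out
    core.wait(0.8)                           # 800 ms of silence
    send_trigger()                           # mark event onset (hypothetical)
    stim.play()
    core.wait(stim.getDuration())            # let the clip finish
```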

Figure 4. Experimental procedure depiction.

Note: (a) Procedure for the entire experiment. (b) Presentation parameters for a single trial.

2.4. fNIRS data acquisition and analysis

fNIRS hemodynamic data were collected using portable NIRSport2 units (NIRx International, https://nirx.net/) operating a 31-source × 27-detector montage with 16 short-channel detectors to capture scalp and subdural noise, creating 99 channels total, 83 being regular and 16 being short channel (see Figure 5).

Figure 5. fNIRS montage (optode configuration).

Note: Sources are in red and marked with ‘S’ and detectors in blue and marked with ‘D’. Locations are indicated using the international 10–20 EEG system. Blue circles indicate short channel detectors, and solid blue lines indicate channels between sources and detectors.

fNIRS caps were fitted using the international 10–20 system (Acharya et al., 2016) and positioned with the Cz mark on the cap at the midpoint between the nasion and inion and centered between the left and right ears. Signal quality was checked for each participant using Aurora (https://nirx.net/software), and caps were adjusted until channels met the coefficient of variation (CV) criterion (CV below 7.5%). Figure 6 depicts a participant being fitted with a cap at the experiment site.
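The CV quality metric is simply the ratio of a channel’s signal standard deviation to its mean, expressed as a percentage; lower values indicate a more stable signal. A minimal sketch of our own showing how such a check can be computed:

```python
import numpy as np

def channel_cv(intensity):
    """Coefficient of variation (%) per channel for a channels-by-samples
    array of raw light intensities; lower CV means a cleaner signal."""
    return 100.0 * intensity.std(axis=1) / intensity.mean(axis=1)

# Example: flag channels failing the 7.5% fitting criterion, or prune
# channels above the 15.0% threshold used in preprocessing (Section 2.4).
# cv = channel_cv(data)
# good_fit = cv <= 7.5
# keep = cv <= 15.0
```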

Figure 6. Image depicting fNIRS cap placement and signal quality check process.

Using Satori (version 1.8.2) to preprocess the fNIRS signals (https://nirx.net/satori), we employed a more generous CV threshold of 15.0% to prune channels, regressed out motion artifacts and physiological noise via accelerometer and short-channel data, and applied a band-pass filter of 0.01–0.4 Hz. Spikes were removed using a 10-iteration monotonic model with a threshold of 3.5 Hz, and the data were normalized through z-transformation.
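Satori is proprietary, but readers without a license can approximate a comparable pipeline with open-source tools. The sketch below uses MNE-Python and MNE-NIRS (our own illustration, not the authors’ code; CV-based pruning and Satori’s spike-removal model have no exact MNE equivalents and are omitted here):

```python
import mne
from mne.preprocessing.nirs import optical_density, beer_lambert_law
from mne_nirs.signal_enhancement import short_channel_regression

# "subject01" is a hypothetical NIRx recording directory.
raw = mne.io.read_raw_nirx("subject01", preload=True)
raw_od = optical_density(raw)                  # raw light -> optical density
raw_od = short_channel_regression(raw_od)      # remove scalp/systemic signal
raw_haemo = beer_lambert_law(raw_od, ppf=6.0)  # OD -> HbO/HbR concentrations
raw_haemo.filter(0.01, 0.4)                    # band-pass, as in the study
```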

We used Satori GLM modeling to assess differences in hemodynamic response between the various classes of ideophones, comparing activity during VISUAL versus SOUND ideophones, MOTION versus non-MOTION ideophones, and pure VISUAL and SOUND ideophones with their +MOTION counterparts. In other words, activity during VISUAL and VISUAL + MOTION ideophones was contrasted with SOUND and SOUND + MOTION ideophones, and activity during VISUAL + MOTION and SOUND + MOTION ideophones was compared with that during pure VISUAL and SOUND ideophones. For each fNIRS channel, we conducted t-tests to measure differences between conditions and created visual representations (i.e., t-maps) for each of these comparisons. The significance level was established using a false discovery rate (FDR) of q = 0.05. FDR is commonly used in fMRI voxel-by-voxel testing and in fMRI meta-analyses to allow for as many significant comparisons as possible while maintaining a low false positive rate (Genovese et al., 2002).
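Satori performs this correction internally; for readers who want to reproduce the channel-wise testing, a minimal NumPy/SciPy sketch of the same logic (assuming per-subject GLM betas are available as subjects × channels arrays; this is our illustration, not the authors’ code) might look like this:

```python
import numpy as np
from scipy import stats

def channelwise_contrast(beta_a, beta_b, q=0.05):
    """Paired t-tests per channel on GLM betas for two conditions,
    with Benjamini-Hochberg FDR control at level q."""
    tvals, pvals = stats.ttest_rel(beta_a, beta_b, axis=0)
    m = pvals.size
    order = np.argsort(pvals)              # p-values, ascending
    bh_line = q * np.arange(1, m + 1) / m  # BH critical values
    passed = pvals[order] <= bh_line
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    significant = np.zeros(m, dtype=bool)
    significant[order[:k]] = True          # reject the k smallest p-values
    return tvals, pvals, significant
```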

2.5. Neural areas of interest

To determine whether each ideophone supercategory engaged areas of the brain corresponding with the relevant theoretical constructs, we pinpointed brain regions typically activated during vision, movement, and sound. Because we had a limited number of fNIRS optodes, and because the stimuli were all auditory, we did not include many areas used in early visual or tactile perception, such as those receiving initial neural input from the eyes or hands. Although some areas associated with actual bodily movement (as opposed to imagined movement) were included in our fNIRS imaging, this was largely because of their overlap with imagined motion, a type of cognitive activity thought to be prompted by some of our ideophones and in line with theories of embodiment and grounded cognition (Barsalou, 2008), which have been cited in relation to the sensory nature of ideophones. After identifying broader neural areas associated with visual imagery, motion, and sound interpretation, we added subregions associated with each of the subcategories, such as color and pattern, configuration, emotion, and cognition.

2.5.1. Visual areas

Areas of interest hypothesized to show greater activity for VISUAL ideophones are shown in red in Figure 7. Because we will regularly use similar images to depict areas of interest and neuroimaging results, we note here that we will always present both the left and right sides of the cortex (placed correspondingly in our images) as if viewed in side profile. We will use color both to differentiate areas of interest based on ideophone classifications and to contrast activity for one ideophone category with another. The primary brain area typically associated with visual representation is the fusiform gyrus (Price et al., 2003). Posterior regions of this gyrus are regularly associated with shape, place, facial, and word recognition. Color representation is also largely evident bilaterally (i.e., on both sides of the brain) and posteriorly in the fusiform gyrus (Simmons et al., 2007), but color-related imagery is regularly associated with the right inferior parietal lobe as well (Wang et al., 2013). The middle occipital gyrus is also sometimes associated with visual imagery (Spagna et al., 2021).

Figure 7. Areas of interest for VISUAL and SOUND ideophones.

Note: Areas for VISUAL ideophones are marked in red and SOUND ideophones in blue. Dotted lines indicate areas that are also hypothesized to show greater activity for purely VISUAL/SOUND ideophones than for VISUAL + MOTION / SOUND + MOTION ideophones.

Neural activity associated with pattern recognition is slightly more ambiguous than that associated with color. Commenting on the ideophone subcategory of pattern, Nuckolls (2019) notes, ‘Patterns can be of various kinds, including linear, distributed throughout a space, or sporadically at intervals, mottled, splotched, or curved’ (p. 172). Furthermore, ‘Pattern is included with visual because whatever is patterned often stands out from its surroundings like a figure against a ground’ (p. 174). Although much visual pattern recognition begins in the occipital lobe as input from the eyes is processed, we excluded this activity because our stimuli were auditory. Somewhat later-stage pattern recognition, such as distinguishing objects and separating them from their environment, occurs largely bilaterally in the fusiform gyrus (Ghanavati et al., 2019; Weiner & Zilles, 2016). The right posterior parietal cortex (rPPC) and left dorsolateral prefrontal cortex (dlPFC) are also employed in visual discrimination akin to pattern recognition (Ghanavati et al., 2019). We hypothesized that areas related to shape and pattern recognition would show up even more strongly for the purely VISUAL ideophones than for the VISUAL + MOTION ideophones, since the former are hypothesized to specifically encode color and pattern.

Again, we acknowledge that there are other areas of the brain associated with visual activity. We focused only on areas our fNIRS setup could image sufficiently well to allow comparisons between ideophone subcategories and connections with previous neuroimaging research.

2.5.2. Sound areas

Areas of interest hypothesized to show greater activity for SOUND ideophones are shown in blue in Figure 7. Although the auditory cortex (Brodmann’s areas 41 and 42 in the superior temporal gyrus) is involved in the early processing of sound input from the ears, the assignment of meaning and other processes that follow the initial perception of sounds involve other areas of the brain. The area most commonly associated with moving from sound to meaning is Wernicke’s area (posterior portions of Brodmann’s area 22 in the superior temporal gyrus), traditionally thought to be associated with the comprehension of language, but more recently thought to be implicated simply in phonological representation (Binder, 2017). The processing of sounds independent of lexical access might be represented in this area, making it of interest in our study. Participants might also process the ideophones in ways that result in cognition, emotion, or conceptual representation outside of the typical paths involved in accessing word form and meaning (e.g., without seeing a word, sounding it out, accessing its meaning and morphological information via Wernicke’s and Broca’s areas, and then eventually managing conceptual representation in the brain). The prefrontal cortex (PFC) is the cortical area most regularly associated with cognition and emotion, especially subareas such as the frontopolar, orbitofrontal, and left inferior prefrontal cortices. The visually active fusiform gyrus is also often connected with emotion, typically in studies of facial emotional expression and food–emotion connections (Boggio et al., 2023).

2.5.3. Movement areas

The areas of interest hypothesized to show greater activity for MOTION ideophones are shown in Figure 8. Numerous areas of the brain are involved in imagining movement (i.e., physical motor activity). The premotor area, supplementary motor area (SMA), and primary motor cortex (M1) are regularly involved in both imagining and carrying out motor activity (Shibasaki, 2012). In terms of neurological activity that could be related to the VISUAL + MOTION ideophone subcategory, both the configuration/shape depicted by the sound and the corresponding manual action typically accompanying that ideophone could be important. Hand motor activity is apt to be most apparent in the left primary motor cortex, where the majority right-handed participants were likely to envision themselves or others producing the manual configuration. In addition, the right precentral gyrus is sometimes associated with visual imagery (Spagna et al., 2021) and might be particularly important for the visualization of movement, since it is involved in both oculomotor and somatomotor space coding (Iacoboni et al., 1997). In terms of actual configuration/shape representation, the fusiform gyrus is again typically active (Starrfelt & Gerlach, 2007); however, since this area is also activated by visual processing unrelated to motion, it is unclear whether it would show greater activation for MOTION than non-MOTION ideophones. A final area we included is the extrastriate body area (lateral occipitotemporal cortex), which is responsive to the perception of body parts and their shapes (Kitada et al., 2014). We imaged this area only partially, given our hardware limitations.

Figure 8. Areas of interest for MOTION ideophones.

Note: Areas circled in red typically show greater activity during perception, visualization, or performance of motor activity.

2.5.4. Ideophones versus non-ideophonic words

In addition to comparing brain activation in response to the different categories of ideophones, we also examined differences in response to ideophonic versus non-ideophonic words. Although many areas of the brain have been shown to be active during language processing and during the representation of concepts and features related to language, the two areas most typically associated with language processing are Broca’s and Wernicke’s areas in the left hemisphere. We focus our analysis on these two areas. Previous research has found greater activation in right-hemisphere regions analogous to Wernicke’s area, especially the superior temporal sulcus, in response to ideophonic than non-ideophonic words (Kanero et al., 2014). In addition, the right inferior frontal gyrus (IFG) can be involved in motion, visual processing, emotion, and ambiguity resolution, but its connection to language processing is not as obvious as that of the left IFG encompassing Broca’s area (Hartwigsen et al., 2019). These areas of interest for the comparison between words and ideophones are illustrated in Figure 9.

Figure 9. Cortical areas of interest with hypothesized differences in neural activity between ideophonic and non-ideophonic words.

Note: Areas of interest hypothesized to show greater activity for ideophones than non-ideophonic words (red). Homologous regions in the left hemisphere were included for comparison (yellow).

3. Results

Channels showing significant differences in shifts in hemoglobin values for the various categories of stimuli are described below. Supplementary Table 1 lists the areas of the cortex imaged with fNIRS that are regularly activated during visual, motor, and sound processing. It indicates the fNIRS channels showing significantly greater or significantly less activation in the visual, motor, and sound areas scanned for the comparisons of interest, and it provides values for the t-tests carried out. We report only oxygenated hemoglobin results because comparisons for deoxygenated and total hemoglobin did not vary from the oxygenated results in ways that would substantially change our reporting on areas of interest or our conclusions. The figures provided are 3D maps created using channel t-values to depict regions where significant differences in activity were present.

3.1. VISUAL versus SOUND ideophones

The results of the comparison between VISUAL and SOUND ideophones are given in Figure 10. All figures, including brain images depicting results, were generated using Satori channel mapping features and are based on t-tests.

Figure 10. Brain areas showing greater activation for VISUAL or SOUND ideophones.

Note: Red indicates greater activation for VISUAL ideophones than SOUND ideophones, and blue greater activation for SOUND than VISUAL. Darker color indicates a higher t-value in the GLM model. Areas of interest hypothesized to show greater activity for VISUAL or SOUND ideophones are also marked with red and blue lines, respectively.

Of the 29 unique channels covering areas largely involved in visual imagery, 16 showed significant positive differences in oxygenated hemoglobin between visually categorized ideophones and SOUND ideophones; 9 showed significantly lower levels of oxygenation for VISUAL than SOUND ideophones; 4 showed no significant differences. The posterior regions of the fusiform gyrus tended to show greater activity for VISUAL ideophones, but the anterior regions, where sound–meaning mapping can also be present, showed less. The left inferior parietal and right precentral gyri showed greater activity for VISUAL ideophones in every channel but one (S7-D24 in Figure 5), a vertical channel that extends up over Wernicke’s area and well into the superior temporal lobe, where sound-related activity could readily exceed visual activity. In short, channels clearly covering mostly areas associated with visual processing showed greater neural activity for VISUAL than SOUND ideophones.

For SOUND-categorized ideophones, only one channel over Wernicke’s area (considering both left and right) yielded a significantly greater positive difference than for VISUAL ideophones, and that channel was on the right side. Eight channels yielded evidence of greater oxygenation for VISUAL ideophones, suggesting more neural activity in these obviously sound-associated areas for VISUAL ideophones than for SOUND ideophones.

3.2. Pure VISUAL versus VISUAL + MOTION ideophones

The results of the comparison between VISUAL and VISUAL + MOTION ideophones are given in Figure 11. Of the 20 motion-related channels, 3 showed greater activity for VISUAL than VISUAL + MOTION ideophones and 4 showed more for VISUAL + MOTION than VISUAL. While the three channels where VISUAL was more prominent were distributed throughout the motion areas, the four that were more prominent for VISUAL + MOTION than VISUAL ideophones were over the supramarginal gyrus, an area associated with spatial processing, motor control, and recognition of the movements of others (Ben-Shabat et al., 2015).

Figure 11. Brain areas showing greater activation for pure VISUAL or VISUAL + MOTION ideophones.

Note: Red indicates greater activation for VISUAL ideophones than VISUAL + MOTION ideophones, and blue greater activation for VISUAL + MOTION than VISUAL. Darker color indicates a higher t-value in the GLM model. Areas of interest hypothesized to show greater activity for +MOTION are surrounded with blue lines, and more purely VISUAL areas are surrounded with red lines.

3.3. Pure SOUND versus SOUND + MOTION ideophones

The t-map for the comparison between SOUND and SOUND + MOTION ideophones is given in Figure 12. Of the 21 channels placed over emotion-related areas of the brain, 16 yielded significantly more evidence of neural activity for SOUND + MOTION ideophones than for pure SOUND ideophones. Of these 16, four were over the fusiform gyrus, suggesting possible visualization of emotion (e.g., emotion appearing in facial expressions). This was contrary to our hypothesis connecting emotion with pure SOUND ideophones.

Figure 12. Brain areas showing greater activation for pure SOUND or SOUND + MOTION ideophones.

Note: Red indicates greater activation for SOUND ideophones than SOUND + MOTION ideophones, and blue greater activation for SOUND + MOTION than SOUND. Darker color indicates a higher t-value in the GLM model. Areas of interest hypothesized to show greater activity for +MOTION are surrounded with blue lines, and areas associated with cognition and emotion are surrounded with red lines.

For the 40 channels broadly related to cognition (overlapping with emotion in some cases), 29 showed significantly more oxygenation for SOUND + MOTION ideophones than for pure SOUND. Six others showed substantially and significantly more oxygenation for pure SOUND ideophones than for SOUND + MOTION. These six channels cover areas associated with higher-level cognition: decision-making (CH 04), emotion regulation (CH 60), goal-directed behavior (CH 03, CH 04), and attention control and working memory (CH 63, CH 80).

In short, there was evidence of greater neural activity in more channels associated with cognition and emotion for SOUND + MOTION ideophones than for pure SOUND ideophones, contrary to our hypothesized model. However, several areas involving key aspects of cognition and emotion recognition and regulation yielded higher indications of neural activity for SOUND ideophones than SOUND + MOTION, following our model.

Regarding motion-related neural activity for the SOUND + MOTION ideophones, there was clearly more oxygenation generally in motion-related areas than for pure SOUND ideophones, with 19 of 23 channels showing higher values for SOUND + MOTION than for SOUND.

3.4. MOTION versus non-MOTION ideophones

The results of the comparison between MOTION and non-MOTION ideophones are depicted in Figure 13. For the 12 channels covering large portions of areas related to motion (mostly those normally labeled motor areas), all but two yielded significantly greater oxygenated hemoglobin values for MOTION than non-MOTION ideophones, and no channels yielded significantly negative t-values. Even channels over the fusiform gyrus with the potential to capture visual aspects of motion yielded significantly greater values in seven out of eight cases, and channels over areas associated specifically with haptic information yielded significantly greater values in every case. In terms of channels related strictly to imaging configurational motion, seven of the eight channels showed significantly greater activity for MOTION than for non-MOTION ideophones, and no channel showed greater activity for non-MOTION than MOTION ideophones. Bodily motion considered haptic, which also entails sound, often elicits activity in the right supramarginal gyrus (SMG; Ben-Shabat et al., 2015), and there was evidence of significantly greater activity in all three channels directly over this area.

Figure 13. Brain areas showing greater activation for MOTION or non-MOTION ideophones.

Note: Red indicates greater activation for MOTION ideophones than non-MOTION, and blue greater activity for non-MOTION ideophones than MOTION. Darker color indicates a higher t-value in the GLM model. Areas of interest hypothesized to show greater activity for MOTION ideophones are surrounded with red lines.

3.5. Ideophonic versus non-ideophonic words

The results of the comparison between ideophones and non-ideophonic words are given in Figure 14. For all six channels covering substantial portions of left Broca’s area, differences between responses for ideophonic and non-ideophonic words were statistically significant and sizable, with evidence of more activity for ideophones than for non-ideophonic words. On the right side, differences were statistically significant for all six channels, but there was a split, with half of the channels indicating greater activity for ideophones than words and the other half showing the opposite pattern. In short, there was strong evidence for greater overall activity in Broca’s area for ideophones than for non-ideophonic words. We will return to the split pattern in right Broca’s area in the discussion.

Figure 14. Brain areas showing greater activation for ideophonic than non-ideophonic words.

Note: Greater activation for ideophonic words is indicated in red and non-ideophonic in blue. Darker color indicates a higher t-value in the GLM model. Areas of interest hypothesized to show greater activity for ideophones than non-ideophonic words are marked with red lines, and homologous regions in the left hemisphere are marked with yellow lines.

fNIRS also detected significantly more activity for ideophones than for non-ideophonic words in five of the six channels covering left Wernicke’s area. For right Wernicke’s area, significantly more activity for ideophones was detected in four of the six channels, and greater activity for words than ideophones was found in only one channel. In sum, like the Broca’s area results, the Wernicke’s area results showed greater overall neural activity for ideophones than for non-ideophonic words in the left hemisphere and similar but slightly more mixed results in the right hemisphere. For the left and right middle temporal areas involved in language processing, findings were mixed.

4. Discussion

The major questions for this study were: (1) Does neural activity (as measured by fNIRS signals) for each of the ideophone types included in this study (pure VISUAL, VISUAL + MOTION, pure SOUND, and SOUND + MOTION) match the activity expected based on the theoretical constructs aligning with each type? (2) Does neural activity differ for native Kichwa speakers when listening to ideophonic as opposed to non-ideophonic words in Kichwa? Evidence of neural activity differing according to ideophone type would support the idea of contextual independence of ideophone meanings, given that all ideophones were presented in isolation (i.e., without linguistic or non-linguistic contextualization). Neural activity differing between ideophones and non-ideophones would suggest differences in semantic or linguistic features.

4.1. Summary of results

Regarding the first question, the answer is complex. VISUAL ideophones produced more activity overall in fNIRS channels covering areas associated with visual imagery than did SOUND ideophones. Also, while frontal areas of the brain relevant to discriminating patterns from their backgrounds were activated more for SOUND ideophones than for VISUAL, posterior areas of the brain aligning more closely with shape, pattern, and color recognition were activated more for VISUAL than SOUND ideophones. Furthermore, channels covering areas involved in imagining motion or physical activity were typically also activated for both VISUAL and MOTION ideophones, adding support for the notion that visually oriented neural activity occurs when speakers of Kichwa hear VISUAL ideophones. Further supporting the connection between VISUAL ideophones and neural visualization activity, positive significant t-values in the fusiform gyrus were found for channels located in the posterior lateral temporal and occipital cortices, areas where body and face images regularly generate activity (Engell & McCarthy, 2014) and where familiar patterns such as common chess piece configurations are also thought to be stored (Campitelli et al., 2007). Higher oxygenation was also found in these posterior areas for MOTION-related ideophones than for non-MOTION ideophones. Such ideophones are regularly associated with bodily movement and with the depiction of shapes or patterns with one’s hands or body. The anterior regions of the fusiform gyrus can also show activity for imagery, but they are more typically implicated in semantic memory (Mion et al., 2010) and may be activated when thinking of common features of an object that may or may not be readily visualized. SOUND ideophones generated less activity than VISUAL ideophones in the posterior regions of the fusiform gyrus and more in the anterior regions. This might be because the path from sound to meaning is thought to align more with superior and anterior regions of the temporal gyrus (Binder et al., 2000; Boatman, 2004) than with the posterior and inferior regions seen to be active for VISUAL ideophones in our data. Activity in the posterior inferior temporal gyrus related to semantic processing is largely left-dominant (Hickok & Poeppel, 2007), and our data show the same dominance for SOUND ideophones (i.e., larger t-values were produced for the left posterior regions than for the right).

Concerning areas specifically associated with color imagination, VISUAL ideophones were connected with greater neural activity than SOUND ideophones. However, for pattern imagery, activity was lower than for SOUND ideophones in three of the four channels. Overall, significantly higher SOUND-oriented neural activity in areas that involve visual activity was largely in the frontal areas of the brain also associated with semantic processing. Furthermore, the imagery inherent in distinguishing patterns from their backgrounds is complex and might not be as readily evident in the anterior areas we listed under visual areas (see Aminoff et al., 2013), making this finding less surprising.

The posterior areas of the brain more definitively associated with shape and pattern recognition did show greater activity for VISUAL than for SOUND ideophones: overall, VISUAL ideophones produced more neural activity in areas associated with visualization than SOUND ideophones did; furthermore, areas where both visualization and motion typically produce activity consistently yielded higher t-values for VISUAL and MOTION ideophones. Finally, a comparison of pure VISUAL with VISUAL + MOTION ideophones suggested that pattern imagery and color imagination were evidenced more for the latter ideophone type, which was not thought to involve such imagery, than for the former, which was. So, although VISUAL ideophones do not universally elicit greater activity in areas associated with pattern recognition than SOUND ideophones, VISUAL ideophones thought to regularly involve pattern recognition elicit greater evidence of activity in areas connected with pattern recognition than VISUAL ideophones not thought to involve pattern recognition.

Regarding the question of whether ideophones activate areas associated with linguistic processing at a level similar to non-ideophonic words, the data show that ideophonic words involve these areas significantly more than non-ideophonic words. Greater left-hemisphere activity was seen in both Broca’s and Wernicke’s areas for ideophonic words than for non-ideophones. In the right hemisphere, several channels showed more activity for ideophones and others less. Some researchers have hypothesized that the right-hemisphere counterparts of left-hemisphere areas typically associated with language processing are utilized for more complex or specific processing of sounds and language. Kanero et al. (2014), for example, suggest that motion- and shape-mimetic words are processed on the right side of the brain in Wernicke’s counterpart and closely neighboring areas, and that sound symbolism is also manifest in this right-hemisphere region. Hashimoto et al. (2006) found greater bilateral activity in the inferior frontal gyrus for onomatopoeia than for nouns, so our finding of greater activity for ideophones than for non-ideophonic words is not surprising. Their finding of broader neurological activity overall for onomatopoeia than for nouns likewise aligns with our finding of broader activity for ideophones than for non-ideophonic words. There is certainly evidence in neurolinguistic research of differential activity between the right and left Broca’s areas, with the right associated more with non-verbal communication, emotion, pragmatics, and contextual features than the left (Hartwigsen et al., 2019). To further clarify differences in neurological activity between ideophones and non-ideophonic words, more focused studies involving non-ideophonic words selected for visual, motion, and sound attributes parallel to the ideophones chosen in each category would be valuable.

4.2. Implications

Our finding of neurological distinctions largely aligning with the supercategories in the sensori-semantic map of Kichwa ideophones proposed by Nuckolls (2019) not only supports Nuckolls’ categorizations but also reinforces the idea that sensory information can be conveyed via ideophones independent of context. Moreover, the areas of the brain we imaged are associated not just with sensory perceptions but also with the conceptual recognition and representations that regularly follow sensory activity (i.e., seeing colors and experiencing neural activity in occipital areas and the frontal eye fields would be followed by neural representation and categorization of colors in the fusiform gyrus and right inferior parietal lobe). Such representation and categorization regularly occur and are consistently drawn upon when connecting semantic knowledge with words (Kiefer & Pulvermüller, 2012). Hence, we suggest that ideophones activate independent semantic information. While they may not carry the same amount or types of information that non-ideophonic nouns typically convey (e.g., thematic role, lexical relations, componential information, or sense relations), our data provide evidence that they are semantically richer than has been claimed (e.g., Bodomo, 2006; Moshi, 1993).

When evaluating ideophone semantics, linguists frequently prioritize the auditory experience, and some have even tentatively suggested that there are languages featuring only SOUND ideophones (i.e., ‘onomatopoeia’). We remind the reader that, in our schema, what counts as a sound event could be an imitation of sound arising from a vocal tract, whether of a human or a non-human. Other kinds of sounds, such as the sound of an object hitting the ground, would be categorized in our schema as SOUND + MOTION.

Our fNIRS results lend support to distinguishing between pure SOUND ideophones and SOUND + MOTION ideophones: what we call pure SOUND ideophones (category 3) and SOUND + MOTION ideophones (category 4) have distinctive neuroprofiles. This is significant because tentative cross-linguistic comparisons of ideophones’ semantic inventories (Dingemanse, 2012) have stated that there are languages, such as Navajo, that feature only SOUND ideophones. Yet what counts as a SOUND ideophone has not always been clearly defined. Within the framework of our categories, a cursory look at the Navajo ideophones found in Webster (2008) reveals what we consider to be both SOUND + MOTION ideophones and pure SOUND ideophones.

A terminological problem contributing to this confusion is that the term ‘onomatopoeia’ has been used both to refer to sound-only ideophones (Han et al., 2005a) and to cast a wider net that includes what we would call SOUND + MOTION ideophones (see Körtvélyessy & Andrej, 2023). Are there in fact languages that feature only pure SOUND ideophones? If not, then approaches to mapping the multisensoriality of ideophone semantics, such as those of McLean (2020), Nuckolls (2019), and Van Hoey (2022), become even more urgent. Van Hoey (2022) offers not just a comprehensive synthesis of cognitive semantic approaches to ideophones but also a schematic that includes most sensory categories reported for ideophone-rich languages. However, he points out that such two-dimensional maps have built-in limitations and that we are not yet clear on the complexities of multisensorial pathways. Our study justifies further neuroimaging work aimed at untangling the intricacies of ideophone semantics, which can no longer be viewed as strictly dependent on contexts of use.

4.3. Future work

Broader and deeper imaging may be necessary for a thorough understanding of the neurological activity elicited by various ideophone types. fNIRS does not allow for observation deep into the cortex or into subcortical regions of the brain involved in visualization, motion, and sound processing. It may therefore be necessary to use fMRI to evaluate whether ideophones activate these deeper regions and to capture more precise location information. We also did not capture activity in most of the occipital lobe, primarily because we were trying to minimize equipment to allow for secure and safe transport via carry-on luggage to Ecuador. A secondary reason is that we assumed the type of imagery entailed by VISUAL ideophones would not be obviously manifest in the occipital lobe, because participants were not seeing an image as a prompt but were hearing ideophones associated with imagery. However, neuronal activity in the middle occipital lobe is regularly seen during visual imagery tasks (Huang et al., 2013), including those that do not involve actually seeing images. Also, there is functional connectivity between the cerebellum, the occipital cortex, and other brain regions involved in processing the visual aspects of motion (Baumann et al., 2015). This area could, therefore, be relevant to the constructs associated with the ideophone categories studied here. We thus suggest that future neurological research on VISUAL and MOTION ideophones include the most posterior areas of the brain.

Next, our team encountered difficulties obtaining strong signals from the Kichwa speakers, which we largely attributed to their thick, dark hair: ‘It is an open secret in the field that fNIRS works best on fair skin and thin, blond (unpigmented) hair’ (Bradford et al., 2022). Careful advance planning is therefore necessary to adjust protocols for scalps with dark hair or other such obstacles to infrared light. Furthermore, more sensitive optodes, such as the avalanche photodiode (APD) optodes produced by NIRx International, are likely to yield clearer signals and data.
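
One practical mitigation is to screen signal quality channel by channel at the start of each session. Below is a minimal sketch, not our actual protocol, using MNE-Python’s scalp coupling index (SCI) to flag channels whose optodes are poorly coupled to the scalp (often because of hair); the data path and threshold are illustrative assumptions.

```python
import mne
from mne.preprocessing.nirs import optical_density, scalp_coupling_index

# Load a raw NIRx recording (hypothetical path) and convert to optical density.
raw = mne.io.read_raw_nirx("sub-01_kichwa_session/", preload=True)
raw_od = optical_density(raw)

# SCI measures how strongly paired wavelengths share the cardiac signal;
# low values usually indicate weak optode-scalp contact (often hair-related).
sci = scalp_coupling_index(raw_od)
raw_od.info["bads"] = [ch for ch, s in zip(raw_od.ch_names, sci) if s < 0.5]
print(f"Flagged {len(raw_od.info['bads'])} of {len(raw_od.ch_names)} channels")
```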

One area where further research is clearly in order is the connection between ideophones and emotion and cognition. In our hypothesized schema (Figure 3), the subcategories of emotion and cognition occurred with the supercategory SOUND. According to our findings, emotion and cognition were evidenced in a larger percentage of channels for SOUND + MOTION ideophones than for pure SOUND ideophones. However, several channels over key areas associated with decision-making, emotion regulation, goal-directed behavior, attention control, and working memory showed greater activity for pure SOUND ideophones, suggesting that these aspects of cognition may align with our proposed schema. Nevertheless, given the lack of broader overall prominence for SOUND over SOUND + MOTION ideophones in the PFC and the frontal cortex more generally, it is impossible to state that the former involve cognition and emotion and the latter do not. Our results do demonstrate that cognition and emotion are evident for both SOUND and SOUND + MOTION ideophones.

More careful separation of ideophones, teasing out hypothesized sensory features and the levels of cognition and emotion involved via neurological activity, is clearly in order. The term ‘multisensorial’ suggests that individual ideophones might involve multiple senses. It is not hard to imagine scenarios such as an ideophone representing the sound of a specific bird activating imagery associated with its unique shape, color, or even camouflaging visual features; an ideophone associated with the s-shaped path of a snake eliciting fear and a plan to remain motionless; or an ideophone linked with fruit covering a tree also being associated with the motion of its accompanying gesture or a plan to climb up the tree to pick fruit.

To explore such multisensorial connections and ambiguities, it could be valuable to have participants do more than simply listen to decontextualized ideophones. For example, they could list ideophone meanings, provide contexts in which the ideophones are used, or categorize them by meaning. They could also rate each ideophone based on how they experience it sensorially. Such data would provide a means of double-checking hypothesized semantic properties and of triangulating with neurological data. Additionally, ratings and neurological responses could be elicited from non-speakers of Kichwa to determine the level of iconicity apparent to non-Kichwa informants. Such an approach would align with recent work published in this journal on the iconicity of Japanese ideophones (McLean et al., 2023). In terms of fNIRS or other neuroimaging, one could carry out functional connectivity analyses, which allow for an exploration of the timing of neural activity in specific cortical areas, including correlations between activity, sequences, and so forth. This approach has been used to study visuospatial working memory (Baker et al., 2018) in ways that recruit brain areas also highlighted in the current study, and it is routinely used in fMRI and fNIRS research for similar purposes.
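
As a minimal sketch of what such a functional connectivity analysis could look like at the channel level, the code below computes pairwise correlations between band-limited HbO time series; the band limits, sampling rate, and array shapes are illustrative assumptions rather than details of any particular pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def channel_connectivity(hbo, fs, band=(0.01, 0.1)):
    """Pairwise Pearson correlations between band-limited HbO time series.

    hbo: (n_samples, n_channels); fs: sampling rate (Hz);
    band: slow hemodynamic band commonly used for connectivity analyses.
    """
    b, a = butter(3, [f / (fs / 2.0) for f in band], btype="band")
    filtered = filtfilt(b, a, hbo, axis=0)   # zero-phase band-pass filter
    return np.corrcoef(filtered.T)           # (n_channels, n_channels) matrix

fs = 7.8                                     # Hz (placeholder)
hbo = np.random.randn(int(600 * fs), 99)     # simulated 99-channel recording
fc = channel_connectivity(hbo, fs)
# e.g., coupling between two hypothetical occipito-temporal channels:
print(f"r = {fc[10, 42]:.2f}")
```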

Finally, we suggest, and plan to carry out, more carefully controlled comparisons between ideophonic and non-ideophonic words to better understand the uniqueness, sensory vividness, and expressive and depictive qualities of ideophones (Dingemanse, 2018) that the current study helps elucidate.

5. Conclusion

To conclude, speakers’ reactions to decontextualized sound clips of ideophones are evidence for their sensory-based semantics. This claim is supported by the conditions of the current experiment, which decontextualized ideophones in two ways. First, the ideophones were clipped out of their spoken sentential contexts. Second, the visual aspects of their expression, including any gestures that may have accompanied their use, were eliminated. Despite the fact that the ideophones were doubly decontextualized, speakers’ neural responses, as monitored by fNIRS, reveal principled reactions to the sensory semantics of these isolated sound clips.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/langcog.2023.55.

Data availability statement

Stimuli and files used in our study can be found in the Open Science Framework repository (https://osf.io/fqbzp/). The repository includes (1) lists of the ideophone and non-ideophone stimuli, (2) the corresponding audio files, and (3) standardized (z-transformed) channel-specific event-related hemodynamic averages (oxygenated, deoxygenated, and total hemoglobin) for each of the four ideophone categories and the non-ideophonic word stimuli, across all 99 channels for each chromophore and stimulus category.
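
For readers who wish to work with the shared averages, the sketch below shows one plausible way to load and aggregate them in Python; the file name and column layout are hypothetical, so consult the repository itself for the actual structure.

```python
import pandas as pd

# Hypothetical layout: one row per (channel, chromophore, category), with
# the z-transformed event-related time course stored in columns t_0, t_1, ...
df = pd.read_csv("fnirs_event_related_averages.csv")

hbo_visual = df[(df["chromophore"] == "HbO") & (df["category"] == "VISUAL")]
time_courses = hbo_visual.filter(like="t_").to_numpy()  # (99 channels, n_times)
grand_average = time_courses.mean(axis=0)               # average over channels
print(grand_average.shape)
```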

Footnotes

This article was originally published with a misspelled author name. The error has been corrected and the online HTML and PDF versions updated.

References

Acharya, J. N., Hani, A., Cheek, J., Thirumala, P., & Tsuchida, T. N. (2016). American Clinical Neurophysiology Society Guideline 2: Guidelines for standard electrode position nomenclature. Journal of Clinical Neurophysiology, 33, 308–311. https://doi.org/10.1097/WNP.0000000000000316
Akita, K. (2016). Grammatical and functional properties of mimetics in Japanese. In Iwasaki, N., Sells, P., & Akita, K. (Eds.), The grammar of Japanese mimetics (pp. 20–34). Routledge. https://doi.org/10.4324/9781315646695-3
Aminoff, E. M., Kveraga, K., & Bar, M. (2013). The role of the parahippocampal cortex in cognition. Trends in Cognitive Sciences, 17(8), 379–390. https://doi.org/10.1016/j.tics.2013.06.009
Aryani, A., Hsu, C. T., & Jacobs, A. M. (2019). Affective iconic words benefit from additional sound-meaning integration in the left amygdala. Human Brain Mapping, 40(18), 5289–5300. https://doi.org/10.1002/hbm.24772
Baker, J. M., Bruno, J. L., Gundran, A., Hosseini, S. H., & Reiss, A. L. (2018). fNIRS measurement of cortical activation and functional connectivity during a visuospatial working memory task. PLoS One, 13(8), e0203233. https://doi.org/10.1371/journal.pone.0203233
Barsalou, L. W. (2008). Grounded cognition. Annual Review of Psychology, 59(1), 617–645. https://doi.org/10.1146/annurev.psych.59.103006.093639
Baumann, O., Borra, R. J., Bower, J. M., Cullen, K. E., Habas, C., Ivry, R. B., Leggio, M., Mattingley, J. B., Molinari, M., Moulton, E. A., Paulin, M. G., Pavlova, M. A., Schmahmann, J. D., & Sokolov, A. A. (2015). Consensus paper: The role of the cerebellum in perceptual processes. Cerebellum, 14(2), 197–220. https://doi.org/10.1007/s12311-014-0627-7
Ben-Shabat, E., Matyas, T. A., Pell, G. S., Brodtmann, A., & Carey, L. M. (2015). The right supramarginal gyrus is important for proprioception in healthy and stroke-affected participants: A functional MRI study. Frontiers in Neurology, 6, 248. https://doi.org/10.3389/fneur.2015.00248
Binder, J. R. (2017). Current controversies on Wernicke’s area and its role in language. Current Neurology and Neuroscience Reports, 17(8), 58. https://doi.org/10.1007/s11910-017-0764
Binder, J. R., Frost, J. A., Hammeke, T. A., Bellgowan, P. S. F., Springer, J. A., Kaufman, J. N., & Possing, E. T. (2000). Human temporal lobe activation by speech and nonspeech sounds. Cerebral Cortex, 10(5), 512–528. https://doi.org/10.1093/cercor/10.5.512
Boatman, D. (2004). Cortical bases of speech perception: Evidence from functional lesion studies. Cognition, 92(1–2), 47–65. https://doi.org/10.1016/j.cognition.2003.09.010
Bodomo, A. (2006, April). The structure of ideophones in African and Asian languages: The case of Dagaare and Cantonese. In Selected proceedings of the 35th Annual Conference on African Linguistics: African languages and linguistics in broad perspectives (pp. 203–213). Cascadilla Proceedings Project.
Boggio, P. S., Wingenbach, T. S., da Silveira Coêlho, M. L., Comfort, W. E., Murrins Marques, L., & Alves, M. V. C. (2023). Social and affective neuroscience of everyday human interaction: From theory to methodology. Springer Nature. https://doi.org/10.1007/978-3-031-08651-9
Bradford, D. E., DeFalco, A., Perkins, E. R., Carbajal, I., Kwasa, J., Goodman, F. R., Jackson, F., Richardson, L. N. S., Woodley, N., Neuberger, L., Sandoval, J. A., Huang, H. J., & Joyner, K. J. (2022). Whose signals are being amplified? Toward a more equitable clinical psychophysiology. Clinical Psychological Science. https://doi.org/10.1177/21677026221112117
Campitelli, G., Gobet, F., Head, K., Buckley, M., & Parker, A. (2007). Brain localization of memory chunks in chessplayers. International Journal of Neuroscience, 117(12), 1641–1659. https://doi.org/10.1080/00207450601041955
Dale, A. M. (1999). Optimal experimental design for event-related fMRI. Human Brain Mapping, 8, 109–114. https://doi.org/10.1002/(SICI)1097-0193(1999)8:2/3<109::AID-HBM7>3.0.CO;2-W
de Ruiter, J. P. (2000). The production of gesture and speech. In McNeill, D. (Ed.), Language and gesture (pp. 284–311). Cambridge University Press. https://doi.org/10.1017/cbo9780511620850.018
de Schryver, G.-M. (2009). The lexicographic treatment of ideophones in Zulu. Lexikos, 19, 34–54. https://doi.org/10.5788/19-0-429
Dingemanse, M. (2012). Advances in the cross-linguistic study of ideophones. Language and Linguistics Compass, 6(10), 654–672. https://doi.org/10.1002/lnc3.361
Dingemanse, M. (2018). Redrawing the margins of language: Lessons from research on ideophones. Glossa: A Journal of General Linguistics, 3(1), 1–30. https://doi.org/10.5334/gjgl.444
Dingemanse, M. (2019). ‘Ideophone’ as a comparative concept. In Akita, K. & Pardeshi, P. (Eds.), Ideophones, mimetics and expressives (pp. 13–34). John Benjamins. https://doi.org/10.1075/ill.16.02din
Dingemanse, M., & Akita, K. (2017). An inverse relation between expressiveness and grammatical integration: On the morphosyntactic typology of ideophones, with special reference to Japanese. Journal of Linguistics, 53(3), 501–532. https://doi.org/10.1017/S002222671600030X
Dingemanse, M., Schuerman, W., Reinisch, E., Tufvesson, S., & Mitterer, H. (2016). What sound symbolism can and cannot do: Testing the iconicity of ideophones from five languages. Language, 92(2), e117–e133. https://doi.org/10.1353/lan.2016.0034
Egashira, Y., Choi, D., Motoi, M., Nishimura, T., & Watanuki, S. (2015). Differences in event-related potential responses to Japanese onomatopoeias and common words. Psychology, 6(13), 1653–1660. https://doi.org/10.4236/psych.2015.613161
Engell, A. D., & McCarthy, G. (2014). Face, eye, and body selective responses in fusiform gyrus and adjacent cortex: An intracranial EEG study. Frontiers in Human Neuroscience, 8, 642. https://doi.org/10.3389/fnhum.2014.00642
Genovese, C. R., Lazar, N. A., & Nichols, T. (2002). Thresholding of statistical maps in functional neuroimaging using the false discovery rate. Neuroimage, 15(4), 870–878. https://doi.org/10.1006/nimg.2001.1037
Ghanavati, E., Salehinejad, M. A., Nejati, V., & Nitsche, M. A. (2019). Differential role of prefrontal, temporal and parietal cortices in verbal and figural fluency: Implications for the supramodal contribution of executive functions. Scientific Reports, 9(1), 3700. https://doi.org/10.1038/s41598-019-40273-7
Han, J. H., Choi, W., Chang, W., Jeong, O. R., & Nam, K. (2005a). Neuroanatomical analysis for onomatopoeia. In Proceedings of the 2005 Annual Conference on Human Language and Technology. Springer. https://koreascience.kr/article/CFKO200429013545400.pdf
Han, J. H., Choi, W., Chang, Y., Jeong, O. R., & Nam, K. (2005b). Neuroanatomical analysis for onomatopoeia and phainomime words: fMRI study. In Wang, L., Chen, K., & Ong, Y. S. (Eds.), Advances in natural computation. ICNC 2005. Lecture Notes in Computer Science, vol. 3610. Springer. https://doi.org/10.1007/11539087_115
Hartwigsen, G., Neef, N. E., Camilleri, J. A., Margulies, D. S., & Eickhoff, S. B. (2019). Functional segregation of the right inferior frontal gyrus: Evidence from coactivation-based parcellation. Cerebral Cortex, 29(4), 1532–1546. https://doi.org/10.1093/cercor/bhy049
Hashimoto, T., Usui, N., Taira, M., Nose, I., Haji, T., & Kojima, S. (2006). The neural mechanism associated with the processing of onomatopoeic sounds. Neuroimage, 31(4), 1762–1770.
Heilbronner, U., & Münte, T. F. (2013). Rapid event-related near-infrared spectroscopy detects age-related qualitative changes in the neural correlates of response inhibition. Neuroimage, 65, 408–415. https://doi.org/10.1016/j.neuroimage.2012.09.066
Hickok, G., & Poeppel, D. (2007). The cortical organization of speech processing. Nature Reviews Neuroscience, 8(5), 393–402. https://doi.org/10.1038/nrn2113
Hitomi, T., Gerrits, R., & Hartsuiker, R. J. (2021). Using functional near-infrared spectroscopy to study word production in the brain: A picture-word interference study. Journal of Neurolinguistics, 57, 100957. https://doi.org/10.1016/j.jneuroling.2020.100957
Huang, P., Qiu, L., Shen, L., Zhang, Y., Song, Z., Qi, Z., Gong, Q., & Xie, P. (2013). Evidence for a left-over-right inhibitory mechanism during figural creative thinking in healthy nonartists. Human Brain Mapping, 34(10), 2724–2732. https://doi.org/10.1002/hbm.22093
Huppert, T. J., Diamond, S. G., Franceschini, M. A., & Boas, D. A. (2009). HomER: A review of time-series analysis methods for near-infrared spectroscopy of the brain. Applied Optics, 48(10), D280. https://doi.org/10.1364/ao.48.00d280
Iacoboni, M., Woods, R. P., Lenzi, G. L., & Mazziotta, J. C. (1997). Merging of oculomotor and somatomotor space coding in the human right precentral gyrus. Brain, 120, 1635–1645. https://doi.org/10.1093/brain/120.9.1635
Kanero, J., Imai, M., Okuda, J., Okada, H., & Matsuda, T. (2014). How sound symbolism is processed in the brain: A study on Japanese mimetic words. PLoS One, 9(5), e97905. https://doi.org/10.1371/journal.pone.0097905
Kiefer, M., & Pulvermüller, F. (2012). Conceptual representations in mind and brain: Theoretical developments, current evidence and future directions. Cortex, 48(7), 805–825.
Kitada, R., Yoshihara, K., Sasaki, A. T., Hashiguchi, M., Kochiyama, T., & Sadato, N. (2014). The brain network underlying the recognition of hand gestures in the blind: The supramodal role of the extrastriate body area. Journal of Neuroscience, 34(30), 10096–10108. https://doi.org/10.1523/JNEUROSCI.0500-14.2014
Körtvélyessy, L., & Andrej, Ľ. (2023). Derivational networks of onomatopoeias in English and Slovak. Canadian Journal of Linguistics/Revue canadienne de linguistique, 68(1), 74–107. https://doi.org/10.1017/cnj.2022.42
Kutas, M., & Federmeier, K. D. (2011). Thirty years and counting: Finding meaning in the N400 component of the event-related brain potential (ERP). Annual Review of Psychology, 62, 621–647. https://doi.org/10.1146/annurev.psych.093008.131123
Li, R., Zhao, C., Wang, C., Wang, J., & Zhang, Y. (2020). Enhancing fNIRS analysis using EEG rhythmic signatures: An EEG-informed fNIRS analysis study. IEEE Transactions on Biomedical Engineering, 67(10), 2789–2797. https://doi.org/10.1109/TBME.2020.2971679
Lockwood, G., & Dingemanse, M. (2015). Iconicity in the lab: A review of behavioral, developmental, and neuroimaging research into sound-symbolism. Frontiers in Psychology, 6, 1246. https://doi.org/10.3389/fpsyg.2015.01246
McLean, B. (2020). Revising an implicational hierarchy for the meanings of ideophones, with special reference to Japonic. Linguistic Typology, 25(3), 507–549. https://doi.org/10.1515/lingty-2020-2063
McLean, B., Dunn, M., & Dingemanse, M. (2023). Two measures are better than one: Combining iconicity ratings and guessing experiments for a more nuanced picture of iconicity in the lexicon. Language and Cognition, 15, 1–24. https://doi.org/10.1017/langcog.2023.9
Mion, M., Patterson, K., Acosta-Cabronero, J., Pengas, G., Izquierdo-Garcia, D., Hong, Y. T., Fryer, T. D., Williams, G. B., Hodges, J. R., & Nestor, P. J. (2010). What the left and right anterior fusiform gyri tell us about semantic memory. Brain, 133(11), 3256–3268. https://doi.org/10.1093/brain/awq272
Moshi, L. (1993). Ideophones in KiVunjo-Chaga. Journal of Linguistic Anthropology, 3(2), 185–216. https://doi.org/10.1525/jlin.1993.3.2.185
Nuckolls, J. B. (2019). The sensori-semantic clustering of ideophonic meaning in Pastaza Quichua. In Akita, K. & Pardeshi, P. (Eds.), Ideophones, mimetics and expressives (pp. 167–198). John Benjamins. https://doi.org/10.1075/ill.16.08nuc
Nuckolls, J. B. (2021). How do you even know what ideophones mean? Gestures’ contributions to ideophone semantics. Gesture, 19(2–3), 161–195. https://doi.org/10.1075/gest.20005.nuc
Osaka, N. (2009). Walk-related mimic word activates the extrastriate visual cortex in the human brain: An fMRI study. Behavioural Brain Research, 198(1), 186–189. https://doi.org/10.1016/j.bbr.2008.10.042
Osaka, N. (2011). Ideomotor response and the neural representation of implied crying in the human brain: An fMRI study using onomatopoeia. Japanese Psychological Research, 53(4), 372–378. https://doi.org/10.1111/j.1468-5884.2011.00489.x
Osaka, N., & Osaka, M. (2009). Gaze-related mimic word activates the frontal eye field and related network in the human brain: An fMRI study. Neuroscience Letters, 461(2), 65–68. https://doi.org/10.1016/j.neulet.2009.06.023
Peeters, D. (2016). Monitoring the level of attention by posture measurement and EEG. In Papafragou, A., Grodner, D., Mirman, D., & Trueswell, J. (Eds.), Proceedings of the annual meeting of the Cognitive Science Society (CogSci 2016). Cognitive Science Society. https://cogsci.mindmodeling.org/2016/papers/0177/paper0177.pdf
Peirce, J., Gray, J. R., Simpson, S., MacAskill, M., Höchenberger, R., Sogo, H., Kastman, E., & Lindeløv, J. K. (2019). PsychoPy2: Experiments in behavior made easy. Behavior Research Methods, 51(1), 195–203. https://doi.org/10.3758/s13428-018-01193-y
Plichta, M. M., Heinzel, S., Ehlis, A. C., Pauli, P., & Fallgatter, A. J. (2007). Model-based analysis of rapid event-related functional near-infrared spectroscopy (NIRS) data: A parametric validation study. Neuroimage, 35(2), 625–634. https://doi.org/10.1016/j.neuroimage.2006.11.028
Price, C. J., Noppeney, U., Phillips, J., & Devlin, J. T. (2003). How is the fusiform gyrus related to category-specificity? Cognitive Neuropsychology, 20(3), 561–574. https://doi.org/10.1080/02643290244000284
Röders, D., Klepp, A., Schnitzler, A., Biermann-Ruben, K., & Niccolai, V. (2022). Induced and evoked brain activation related to the processing of onomatopoeic verbs. Brain Sciences, 12(4), 481. https://doi.org/10.3390/brainsci12040481
Shibasaki, H. (2012). Cortical activities associated with voluntary movements and involuntary movements. Clinical Neurophysiology, 123(2), 229–243. https://doi.org/10.1016/j.clinph.2011.07.042
Simmons, W. K., Ramjee, V., Beauchamp, M. S., McRae, K., Martin, A., & Barsalou, L. W. (2007). A common neural substrate for perceiving and knowing about color. Neuropsychologia, 45(12), 2802–2810. https://doi.org/10.1016/j.neuropsychologia.2007.05.002
Spagna, A., Hajhajate, D., Liu, J., & Bartolomeo, P. (2021). Visual mental imagery engages the left fusiform gyrus, but not the early visual cortex: A meta-analysis of neuroimaging evidence. Neuroscience and Biobehavioral Reviews, 122, 201–217. https://doi.org/10.1016/j.neubiorev.2020.12.029
Starrfelt, R., & Gerlach, C. (2007). The visual what for area: Words and pictures in the left fusiform gyrus. Neuroimage, 35(1), 334–342. https://doi.org/10.1016/j.neuroimage.2006.12.003
Van Hoey, T. (2022). A semantic map for ideophones. In Li, T. F. (Ed.), Handbook of cognitive semantics. Brill. Preprint: https://doi.org/10.31219/osf.io/muhpd
Vigliocco, G., Zhang, Y. E., Del Maschio, N., Todd, R., & Tuomainen, J. (2019). Electrophysiological signatures of English onomatopoeia. Language and Cognition, 12(1), 15–35. https://doi.org/10.1017/langcog.2019.38
Voeltz, F. K. E., & Kilian-Hatz, C. (Eds.) (2001). Ideophones (Typological Studies in Language 44). John Benjamins. https://doi.org/10.1075/tsl.44
Wang, X., Han, Z., He, Y., Caramazza, A., Song, L., & Bi, Y. (2013). Where color rests: Spontaneous brain activity of bilateral fusiform and lingual regions predicts object color knowledge performance. Neuroimage, 76, 252–263. https://doi.org/10.1016/j.neuroimage.2013.03.010
Webster, A. (2008). ‘To give an imagination to the listener’: The neglected poetics of Navajo ideophony. Semiotica, 171(1/4), 343–365. https://doi.org/10.1515/SEMI.2008.081
Weiner, K. S., & Zilles, K. (2016). The anatomical and functional specialization of the fusiform gyrus. Neuropsychologia, 83, 48–62. https://doi.org/10.1016/j.neuropsychologia.2015.06.033
Figure 1. Sensori-semantic map of Kichwa ideophones.

Figure 2. Bounding gesture accompanying ideophone ‘tɕem’.

Figure 3. Four ideophone stimuli categories for the current study, aligned with features from the sensori-semantic map of Kichwa ideophones in Figure 1.

Figure 4. Experimental procedure depiction. Note: (a) Procedure for the entire experiment. (b) Presentation parameters for a single trial.

Figure 5. fNIRS montage (optode configuration). Note: Sources are in red and marked with ‘S’, and detectors are in blue and marked with ‘D’. Locations are indicated using the international 10–20 EEG system. Blue circles indicate short channel detectors, and solid blue lines indicate channels between sources and detectors.

Figure 6. Image depicting fNIRS cap placement and signal quality check process.

Figure 7. Areas of interest for VISUAL and SOUND ideophones. Note: Areas for VISUAL ideophones are marked in red and SOUND ideophones in blue. Dotted lines indicate areas that are also hypothesized to show greater activity for purely VISUAL/SOUND ideophones than for VISUAL + MOTION / SOUND + MOTION ideophones.

Figure 8. Areas of interest for MOTION ideophones. Note: Areas circled in red typically show greater activity during perception, visualization, or performance of motor activity.

Figure 9. Cortical areas of interest with hypothesized differences in neural activity between ideophonic and non-ideophonic words. Note: Areas of interest hypothesized to show greater activity for ideophones than non-ideophonic words (red). Homologous regions in the left hemisphere were included for comparison (yellow).

Figure 10. Brain areas showing greater activation for VISUAL or SOUND ideophones. Note: Red indicates greater activation for VISUAL ideophones than SOUND ideophones, and blue greater activation for SOUND than VISUAL. Darker color indicates a higher t-value in the GLM model. Areas of interest hypothesized to show greater activity for VISUAL or SOUND ideophones are also marked with red and blue lines, respectively.

Figure 11. Brain areas showing greater activation for pure VISUAL or VISUAL + MOTION ideophones. Note: Red indicates greater activation for VISUAL ideophones than VISUAL + MOTION ideophones, and blue greater activation for VISUAL + MOTION than VISUAL. Darker color indicates a higher t-value in the GLM model. Areas of interest hypothesized to show greater activity for +MOTION are surrounded with blue lines, and more purely VISUAL areas are surrounded with red lines.

Figure 12. Brain areas showing greater activation for pure SOUND or SOUND + MOTION ideophones. Note: Red indicates greater activation for SOUND ideophones than SOUND + MOTION ideophones, and blue greater activation for SOUND + MOTION than SOUND. Darker color indicates a higher t-value in the GLM model. Areas of interest hypothesized to show greater activity for +MOTION are surrounded with blue lines, and areas associated with cognition and emotion are surrounded with red lines.

Figure 13. Brain areas showing greater activation for MOTION or non-MOTION ideophones. Note: Red indicates greater activation for MOTION ideophones than non-MOTION, and blue greater activity for non-MOTION ideophones than MOTION. Darker color indicates a higher t-value in the GLM model. Areas of interest hypothesized to show greater activity for MOTION ideophones are surrounded with red lines.

Figure 14. Brain areas showing greater activation for ideophonic than non-ideophonic words. Note: Greater activation for ideophonic words is indicated in red and non-ideophonic in blue. Darker color indicates a higher t-value in the GLM model. Areas of interest hypothesized to show greater activity for ideophones than non-ideophonic words are marked with red lines, and homologous regions in the left hemisphere are marked with yellow lines.
