This article presents a systematic review on the use of eye-tracking technology to assess the mental workload of unmanned aircraft system (UAS) operators. With the increasing use of unmanned aircraft in military and civilian operations, understanding the mental workload of these operators has become essential for ensuring mission effectiveness and safety. The review covered 26 studies that explored the application of eye-tracking to capture nuances of visual attention and assess cognitive load in real-time. Traditional methods such as self-assessment questionnaires, although useful, showed limitations in terms of accuracy and objectivity, highlighting the need for advanced approaches like eye-tracking. By analysing gaze patterns in simulated environments that reproduce real challenges, it was possible to identify moments of higher mental workload, areas of concentration and sources of distraction. The review also discussed strategies for managing mental workload, including adaptive design of human-machine interfaces. The analysis of the studies revealed a growing relevance and acceptance of eye-tracking as a diagnostic and analytical tool, offering guidelines for the development of interfaces and training that dynamically respond to the cognitive needs of operators. It was concluded that eye-tracking technology can significantly contribute to the optimisation of UAS operations, enhancing both the safety and efficiency of military and civilian missions.
The scarce literature on the processing of internally headed relative clauses (IHRCs) seems to challenge the universality of the subject advantage (e.g., Lau & Tanaka [2021, Glossa: A Journal of General Linguistics, 6(1), 34], for spoken languages; Hauser et al. [2021, Glossa: A Journal of General Linguistics, 6(1), 72], for sign languages). In this study, we investigate the comprehension of subject and object IHRCs in Italian Sign Language (LIS) deaf native and non-native signers, and hearing LIS/Italian CODAs (children of deaf adults). We use the eye-tracking Visual-only World Paradigm (Hauser & Pozniak [2019, Poster presented at the AMLAP 2019 conference]) recording online and offline responses. Results show that a subject advantage is detected in the online and offline responses of CODAs and in the offline responses of deaf native signers. Results also reveal a higher rate of accuracy in CODAs' responses. We discuss the difference in performance between the two populations in the light of bilingualism-related cognitive advantages, and lack of proper educational training in Italian and LIS for the deaf population in Italy.
Two major positions on children’s ability to comprehend metaphors have governed the literature on the subject: the literal stage hypothesis vs. the early birds hypothesis (Falkum, 2022). We aim to contribute to this debate by testing children’s ability to comprehend novel metaphors (‘X is a Y’) in Spanish with a child-friendly picture selection task, while also tracking their gaze. Further, given recent findings on the development of metonymy comprehension suggesting a U-shaped developmental curve for this phenomenon (Köder & Falkum, 2020), we aimed to determine the shape of the developmental trajectory of novel metaphor comprehension, and to explore how both types of data (picture selection and gaze behavior) relate to each other. Our results suggest a linear developmental trajectory, with 6-year-olds significantly succeeding in picture selection and consistently looking at the metaphorical target even after question onset.
We assess the feasibility of conducting web-based eye-tracking experiments with children using two methods of webcam-based eye-tracking: automatic gaze estimation with the WebGazer.js algorithm and hand annotation of gaze direction from recorded webcam videos. Experiment 1 directly compares the two methods in a visual-world language task with five- to six-year-old children. Experiment 2 more precisely investigates WebGazer.js’ spatiotemporal resolution with four- to twelve-year-old children in a visual-fixation task. We find that it is possible to conduct web-based eye-tracking experiments with children in both supervised (Experiment 1) and unsupervised (Experiment 2) settings – however, the webcam eye-tracking methods differ in their sensitivity and accuracy. Webcam video annotation is well-suited to detecting fine-grained looking effects relevant to child language researchers. In contrast, WebGazer.js gaze estimates appear noisier and less temporally precise. We discuss the advantages and disadvantages of each method and provide recommendations for researchers conducting child eye-tracking studies online.
Bilinguals activate both of their languages as they process written words, regardless of modality (spoken or signed); these effects have primarily been documented in single word reading paradigms. We used eye-tracking to determine whether deaf bilingual readers (n = 23) activate American Sign Language (ASL) translations as they read English sentences. Sentences contained a target word and one of the two possible prime words: a related prime which shared phonological parameters (location, handshape or movement) with the target when translated into ASL or an unrelated prime. The results revealed that first fixation durations and gaze durations (early processing measures) were shorter when target words were preceded by ASL-related primes, but prime condition did not impact later processing measures (e.g., regressions). Further, less-skilled readers showed a larger ASL co-activation effect. Together, the results indicate that ASL co-activation impacts early lexical access and can facilitate reading, particularly for less-skilled deaf readers.
We investigated the retention of surface linguistic information during reading using eye-tracking. Departing from a research tradition that examines differences between meaning retention and verbatim memory, we focused on how different linguistic factors affect the retention of surface linguistic information. We examined three grammatical alternations in German that differed in their involvement of changes in morpho-syntax and/or information structure, while their propositional meaning is unaffected: voice (active vs. passive), adverb positioning, and different realizations of conditional clauses. Single sentences were presented and repeated, either identical or modified according to the grammatical alternation (with a controlled interval between them). Results for native (N = 60) and non-native (N = 58) German participants show longer fixation durations for modified versus unmodified sentences when information structural changes are involved (voice, adverb position). In contrast, mere surface grammatical changes without a functional component (conditional clauses) did not lead to different reading behavior. Sensitivity to the manipulation was not influenced by language (L1, L2) or repetition interval. The study provides novel evidence that linguistic factors affect verbatim retention and highlights the importance of eye-tracking as a sensitive measure of implicit memory.
Social hierarchical information impacts language comprehension. Nevertheless, the specific process underlying the integration of linguistic and extralinguistic sources of social hierarchical information has not been identified. For example, the Chinese social hierarchical verb 赡养, /shan4yang3/, ‘support: provide for the needs and comfort of one’s elders’, only allows its Agent to have a lower social status than the Patient. Using eye-tracking, we examined the precise time course of the integration of these semantic selectional restrictions of Chinese social hierarchical verbs and extralinguistic social hierarchical information during natural reading. A 2 (Verb Type: hierarchical vs. non-hierarchical) × 2 (Social Hierarchy Sequence: match vs. mismatch) design was constructed to investigate the effect of the interaction on early and late eye-tracking measures. Thirty-two participants (15 males; age range: 18–24 years) read sentences and judged the plausibility of each sentence. The results showed that violations of semantic selectional restrictions of Chinese social hierarchical verbs induced shorter first fixation duration but longer regression path duration and longer total reading time on sentence-final nouns (NP2). These differences were absent under non-hierarchical conditions. The results suggest that a mismatch between linguistic and extralinguistic social hierarchical information is immediately detected and processed.
The embodied view of semantic processing holds that readers achieve reading comprehension through mental simulation of the objects and events described in the narrative. However, it remains unclear whether and how the encoding of linguistic factors in narrative descriptions impacts narrative semantic processing. This study aims to explore this issue in narrative contexts with and without perspective shift, an important and common linguistic factor in narratives. A sentence–picture verification paradigm combined with eye-tracking measures was used to explore the issue. The results showed that (1) the inter-role perspective shift led the participants to allocate their first fixation evenly across different elements in the scene following the new perspective; (2) the internal–external perspective shift increased the participants’ total fixation count when they read the sentence with the perspective shift; (3) the scene detail depicted in the picture did not influence the process of narrative semantic processing. These results suggest that perspective shift can disrupt the coherence of the situation model and increase readers’ cognitive load during reading. Moreover, scene detail could not be constructed by readers in natural narrative reading.
This study used the visual world paradigm to investigate novel word learning in adults from different language backgrounds and the effects of phonology, homophony, and rest on the outcome. We created Mandarin novel words varied by types of phonological contrasts and homophone status. During the experiment, native (n = 34) and non-native speakers (English; n = 30) learned pairs of novel words and were tested twice with a 15-minute break in between, which was spent either resting or gaming. In the post-break test of novel word recognition, an interaction appeared between language backgrounds, phonology, and homophony: non-native speakers performed less accurately than native speakers only on non-homophones learned in pairs with tone contrasts. Eye movement data indicated that non-native speakers’ processing of tones may be more effortful than their processing of segments while learning homophones, as demonstrated by the time course. Interestingly, no significant effects of rest were observed across language groups; yet after gaming, native speakers achieved higher accuracy than non-native speakers. Overall, this study suggests that Mandarin novel word learning can be affected by participants’ language backgrounds and phonological and homophonous features of words. However, the role of short periods of rest in novel word learning requires further investigation.
In this chapter, we discuss the way people read, remember and understand discourse, depending on the type of relations that link discourse segments together. We also illustrate the role of connectives and other discourse signals as elements guiding readers’ interpretation. Throughout the chapter, we review empirical evidence from experiments that involve various methodologies such as offline comprehension tasks, self-paced reading, eye-tracking and event related potentials. One of the major findings is that not all relations are processed and remembered in the same way. It seems that causal relations play a special role for creating coherence in discourse, as they are processed more quickly and remembered better. Conversely, because they are highly expected, causal relations benefit less from the presence of connectives compared to discontinuous relations like concession and confirmation. Finally, research shows that in their native language, speakers are able to take advantage of all sorts of connectives for discourse processing, even those restricted to the written mode, and those that are ambiguous.
This chapter offers a thorough guide to the techniques and instruments used to understand how the brain develops in humans. It covers key learning goals, such as examining how behaviors change as people grow, how studies of typical and atypical development inform each other, and what we can and can’t learn about brain structure using non-invasive brain scans. It also explains the two main ways we measure brain function. Starting with some background history on methodological tools, this chapter sets the stage for deeper insights into brain development and its impact on our abilities. It highlights the dynamic nature of the field, influenced by both animal studies and rapidly evolving and improving analytical tools and methods. With a focus on methods for studying children, we explore more advanced techniques used in different age groups. Furthermore, this chapter stresses the importance of a scientific mindset and adaptability when new evidence comes to light. It serves as a vital reference for understanding the tools and approaches in developmental cognitive neuroscience.
According to Talmy, in verb-framed languages (e.g., French), the core schema of an event (Path) is lexicalized, leaving the co-event (Manner) in the periphery of the sentence or optional; in satellite-framed languages (e.g., English), the core schema is jointly expressed with the co-event in construals that lexicalize Manner and express Path peripherally. Some studies suggest that such differences are only surface differences that cannot influence the cognitive processing of events, while others maintain that they can constrain both verbal and non-verbal processing. This study investigates whether such typological differences, together with other factors, influence visual processing and decision-making. English and French participants were tested in three eye-tracking tasks involving varied Manner–Path configurations and language to different degrees. Participants had to process a target motion event and choose the variant that looked most like the target (non-verbal categorization), then describe the events (production), and perform a similarity judgment after hearing a target sentence (verbal categorization). The results show massive cross-linguistic differences in production and additional partial language effects in visualization and similarity judgment patterns – highly dependent on the salience and nature of events and the degree of language involvement. The findings support a non-modular approach to language–thought relations and a fine-grained vision of the classic lexicalization/conflation theory.
Combining adjective meaning with the modified noun is particularly challenging for children under three years. Previous research suggests that in processing noun-adjective phrases children may over-rely on noun information, delaying or omitting adjective interpretation. However, the question of whether this difficulty is modulated by semantic differences among (subsective) adjectives is underinvestigated.
A visual-world experiment explores how Italian-learning children (N=38, 2;4–5;3) process noun-adjective phrases and whether their processing strategies adapt based on the adjective class. Our investigation confirms that children proficiently integrate noun and adjective semantics. Nevertheless, aligning with previous research, a notable asymmetry is evident in the interpretation of nouns and adjectives, the latter being integrated more slowly. Remarkably, by testing toddlers across a wide age range, we observe a developmental trajectory in processing, supporting a continuity approach to children’s development. Moreover, we reveal that children exhibit sensitivity to the distinct interpretations associated with each subsective adjective.
While the Talmian dichotomy between satellite-framed and verb-framed languages has been amply studied for motion events, it has been less discussed for locative events, even if Talmy considers these to be included in motion events. This paper discusses such locative events, starting from the significant cross-linguistic variation among Dutch, French, and English. Dutch habitually encodes location via cardinal posture verbs (CPVs; ‘SIT’, ‘LIE’, ‘STAND’) expressing the orientation of the Figure, French prefers orientation-neutral existence verbs like être ‘be’ and English – unlike for motion events – straddles the middle with a marked preference for be but the possibility to occasionally rely on CPVs. Through the analysis of recognition performances and gazing behaviours in a non-verbal recognition task, this study confirms a (subtle) cognitive impact of different linguistic preferences on the mental representation of locative events. More specifically, they confirm the continuum suggested by Lemmens (2005, Parcours linguistiques. Domaine anglais (pp. 223–244). Publications de l’Université St Etienne.) for the domain of location with French on the one extreme and Dutch on the other with English in-between, behaving like French in some contexts but like Dutch in others.
This chapter describes how people read and interpret ironical language. Tracking people’s rapid eye movements as they read can be an informative measure of the underlying cognitive and linguistic processes operating during online written language comprehension. Attardo introduces some of the technologies employed in measuring eye movements during reading and suggests why these assessments can provide critical insights into how irony interpretation rapidly unfolds word-by-word as one reads. He reviews various experimental studies on irony and sarcasm understanding that provide explicit empirical tests of different theories of irony (e.g., multistate models, graded salience, parallel-constraint models, predictive processing models). He also explores what the study of eye tracking reveals about the influence of contextual factors and individual differences in irony interpretation, as well as the phenomenon known as “gaze aversion”, when listeners momentarily look away from speakers’ faces when hearing ironic language. Attardo closes his chapter with an important discussion of the sometimes contentious relations between psycholinguistic experiments and philosophical arguments on the ways people use and interpret irony in discourse.
Using the visual world paradigm with printed words, this study investigated the flexibility and representational nature of phonological prediction in real-time speech processing. Native speakers of Mandarin Chinese listened to spoken sentences containing highly predictable target words and viewed a visual array with a critical word and a distractor word on the screen. The critical word was manipulated in four ways: a highly predictable target word, a homophone competitor, a tonal competitor, or an unrelated word. Participants showed a preference for fixating on the homophone competitors before hearing the highly predictable target word. The predicted phonological information waned shortly afterwards but was re-activated later, around the acoustic onset of the target word. Importantly, this homophone bias was observed only when participants were completing a ‘pronunciation judgement’ task, but not when they were completing a ‘word judgement’ task. No effect was found for the tonal competitors. The task modulation effect, combined with the temporal pattern of phonological pre-activation, indicates that phonological prediction can be flexibly generated by top-down mechanisms. The lack of a tonal competitor effect suggests that phonological features such as lexical tone are not independently predicted in anticipatory speech processing.
The present study asked whether oral vocabulary training can facilitate reading in a second language (L2). Fifty L2 speakers of English received oral training over three days on complex novel words, with predictable and unpredictable spellings, composed of novel stems and existing suffixes (i.e., vishing, vishes, vished). After training, participants read the novel word stems for the first time (i.e., trained and untrained), embedded in sentences, and their eye movements were monitored. The eye-tracking data revealed shorter looking times for trained than untrained stems, and for stems with predictable than unpredictable spellings. In contrast to monolingual speakers of English, the interaction between training and spelling predictability was not significant, suggesting that L2 speakers did not generate orthographic skeletons that were robust enough to affect their eye-movement behaviour when seeing the trained novel words for the first time in print.
The ability to process plural marking of nouns is acquired early: at a very young age, children are able to understand whether a noun represents one item or more than one. However, little is known about how the segmental characteristics of plural marking are used in this process. Using eye-tracking, we aim to understand how five- to twelve-year-old children use the phonetic, phonological, and morphological information available to process noun plural marking in German (i.e., a very complex system) compared to adults. We expected differences with stem vowels, stem-final consonants or different suffixes, alone or in combination, reflecting different processing of their segmental information. Our results show that for plural processing: 1) a suffix is the most helpful cue, an umlaut the least helpful, and voicing does not play a role; 2) one cue can be sufficient; and 3) school-age children have not reached adult-like processing of plural marking.
Studies on sentence processing in inflectional languages support the view that syntactic structure building functionally precedes semantic processing. Conversely, most EEG studies of Chinese sentence processing do not support the priority of syntax. One possible explanation is that the Chinese language lacks morphological inflections. Another explanation may be that the presentation of separate sentence components on individual screens in EEG studies disrupts syntactic framework construction during sentence reading. The present study investigated this explanation using a self-paced reading experiment mimicking rapid serial visual presentation in EEG studies and an eye-tracking experiment reflecting natural reading. In both experiments, Chinese ‘ba’ sentences were presented to Chinese young adults in four conditions that differed across the dimensions of syntactic and semantic congruency. Evidence supporting the functional priority of syntax over semantics was limited to the natural reading context, in which syntactic violations blocked the processing of semantics. Additionally, we observed a later stage of integrating plausible semantics with a failed syntax. Together, our findings extend the functional priority of syntax to the Chinese language and highlight the importance of adopting more ecologically valid methods when investigating sentence reading.
In a recent study, Fernandez et al. (2021) investigated parafoveal processing in L1 English and L1 German–L2 English readers using the gaze contingent boundary paradigm (Rayner, 1975). Unexpectedly, L2 readers experienced interference from a non-cognate translation parafoveal mask (arrow vs. pfeil), but derived a benefit from a German orthographic parafoveal mask (arrow vs. pfexk) when reading in English. The authors argued that bilingual readers incurred a switching cost from the complete German word, and derived a benefit by keeping both lexicons active from the partial German word. In this registered report, we further test this finding with L1 German–L2 English participants using improved items, but with the sentences presented in German. We were able to replicate the non-cognate translation interference but not the orthographic facilitation. Follow-up comparisons showed that all parafoveal masks evoked similar inhibition, suggesting that bilingual readers do not process non-cognate semantic or orthographic information parafoveally.