Theories of learning and attention predict a positive relationship between reading times on unfamiliar words and their learning; however, empirical findings of contextual learning studies range from a strong positive relationship to no relationship. To test the conjecture that longer reading times may reflect different cognitive and metacognitive processes, we deliberately manipulated the need to infer novel word meanings from context. One hundred and two adult first- and second-language speakers of English read sixty passages containing pseudowords while their eye movements were recorded. The passages were either preceded or followed by pseudoword definitions. After reading, participants completed posttests of cued meaning recall and form recognition. Meaning recall was positively associated with (i) individual cumulative reading times and (ii) participants’ general vocabulary knowledge, but not when definitions were provided before reading. Form recognition was unaffected by cumulative reading times. Our findings call for caution in making causal links between eye-movement measures and vocabulary learning from reading.
The retrieval of past instances stored in memory can guide inferential choices and judgments. Yet, little process-level evidence exists that would allow a similar conclusion for preferential judgments. Recent research suggests that eye movements can trace information search in memory. During retrieval, people gaze at spatial locations associated with relevant information, even if the information is no longer present (the so-called ‘looking-at-nothing’ behavior). We exploited this looking-at-nothing behavior to examine memory retrieval in inferential and preferential judgments. In Experiment 1, half of the participants assessed their own preference for smoothies with different ingredients, while the other half gauged another person’s preference. In Experiment 2, all participants made preferential judgments, with or without instructions to respond as consistently as possible. People looked at exemplar locations in both inferential and preferential judgments, both with and without consistency instructions. Eye movements to similar training exemplars, but not to dissimilar exemplars, predicted test judgments. These results suggest that people retrieve exemplar information in preferential judgments but that retrieval processes are not the sole determinant of judgments.
A central finding of bilingual research is that cognates – words that share semantic, phonological, and orthographic characteristics across languages – are processed faster than non-cognate words. However, it remains unclear whether cognate facilitation effects are reliant on identical cognates, or whether facilitation simply varies along a continuum of cross-language orthographic and phonological similarity. In two experiments, German–English bilinguals read identical cognates, close cognates, and non-cognates in a lexical decision task and a sentence-reading task while their eye movements were recorded. Participants read the stimuli in their L1 German and L2 English. Converging results found comparable facilitation effects of identical and close cognates vs. non-cognates. Cognate facilitation could be described as a continuous linear effect of cross-language orthographic similarity on lexical decision accuracy and latency, as well as fixation durations. Cross-language phonological similarity modulated the continuous orthographic similarity effect in single word recognition, but not in sentence processing.
The Psychology of Reading reviews what has been learned about skilled reading and dyslexia using research on one of the most important but often overlooked languages and writing systems – Chinese. It provides an overview of the Chinese language and writing systems, discusses what is known about the cognitive and neural processes that support the skilled reading of Chinese, as well as its development and impairment, and describes the computer models that have been developed to understand these topics. It is written in an accessible way to appeal to anyone with an interest in cognitive psychology, language, or education.
The vertebrate eye captures an enormous amount of detail about the surrounding world, which can only be exploited with sophisticated central information processing. Furthermore, vision is an active process: head and eye movements enable the animal to change gaze and actively select objects to investigate in detail. The entire system requires a coordinated coevolution of its parts to work properly. Ray-finned fishes offer a unique opportunity to study the evolution of the visual system because of the high diversity in all of its parts. Here, we bring together information on retinal specializations (fovea), central visual centers (brain morphology studies), and eye movements in a large number of ray-finned fishes in a cladistic framework. The nucleus glomerulosus-inferior lobe system is well developed only in Acanthopterygii. A fovea, independent eye movements, and an enlargement of the nucleus glomerulosus-inferior lobe system coevolved at least five times independently within Acanthopterygii. This suggests that the nucleus glomerulosus-inferior lobe system is involved in advanced object recognition, which is especially well developed in association with a fovea and independent eye movements. None of the non-Acanthopterygii have a fovea (except for some deep-sea fish) or independent eye movements, and they also lack important parts of the glomerulosus-inferior lobe system. This suggests that structures for advanced visual object recognition evolved within ray-finned fishes independently of those in tetrapods and non-ray-finned fishes, as a result of a coevolution of retinal, central, and oculomotor structures.
This study compared patterns of nonselective cross-language activation in L1 and L2 visual word recognition with different-script bilinguals. The aim was to determine (1) whether lexical processing is nonselective in the L1 (as in L2), and (2) if the same cross-linguistic factors affected processing similarly in each language. To examine the time course of activation, eye movements were tracked during lexical decision. Thirty-two Japanese–English bilinguals responded to 250 target words in Japanese and in English. The same participants and items (i.e., cognate translation equivalents) were used to directly compare L1 and L2 processing. Response latencies as well as eye movements representing early and late processing were analyzed using mixed-effects regression modeling. Similar cross-linguistic effects, namely cognate word frequency, phonological similarity, and semantic similarity, were found in both languages. These factors affected processing to different degrees in each language, however. While cognate frequency was significant as early as the first fixation, effects of cross-linguistic phonological and semantic similarity arose later in time. Increased phonological similarity slowed responses in L2 but speeded them in L1, while greater semantic overlap was facilitatory in both languages. Results are discussed from the perspective of the BIA+ model of visual word recognition.
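As a rough illustration of the kind of mixed-effects regression analysis described above, the sketch below fits a model of log response latency with cross-linguistic predictors and a by-participant random intercept. The data, column names, and fixed-effects structure are assumptions made for the example, not the authors’ actual specification.

# Illustrative sketch only: mixed-effects model of log response latency
# with hypothetical cross-linguistic predictors and by-participant intercepts.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 400  # hypothetical trials
df = pd.DataFrame({
    "participant": rng.integers(1, 33, size=n),       # 32 hypothetical participants
    "language": rng.choice(["L1", "L2"], size=n),      # language of the target word
    "cognate_frequency": rng.normal(size=n),           # standardized predictor
    "phon_similarity": rng.uniform(0, 1, size=n),      # cross-language phonological similarity
    "sem_similarity": rng.uniform(0, 1, size=n),       # cross-language semantic similarity
})
# Synthetic outcome so the sketch runs end to end.
df["log_rt"] = 6.5 - 0.05 * df["cognate_frequency"] + rng.normal(scale=0.2, size=n)

# Fixed effects for frequency, similarity, and language; random intercepts by participant.
model = smf.mixedlm(
    "log_rt ~ cognate_frequency * language + phon_similarity + sem_similarity",
    data=df,
    groups=df["participant"],
)
print(model.fit().summary())

In practice, analyses of this kind usually also treat items as a random factor; fully crossed random effects are more conveniently fitted with lme4 in R, a common choice in the eye-movement literature.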
Sentence reading involves constant competition between lexical candidates. Previous research with monolinguals has shown that the neighbours of a read word are inhibited, making their retrieval as a subsequent target more difficult, but the duration of this interference may depend on reading skills. In this study, we examined neighbour priming effects in sentence reading among proficient Norwegian–English bilinguals reading in their L2. We investigated the effects of the distance between prime and target (short vs. long) and the nature of the overlap between the two words (beginning or end), and related these to differences in individual cognitive skills. Our results replicated the inhibition effects found in monolinguals, albeit slightly delayed. Interference between form-related words was affected by the L2 reading skills and, crucially, by the phonological decoding abilities of the bilingual reader. We discuss the results in light of competition models of bilingual reading as well as episodic memory accounts.
The aim of the study was to investigate the coordination of source text comprehension and translation in a sight translation task. The study also sought to determine whether translation strategies influence sight translation performance. Two groups of conference interpreters—professionals and trainees—sight translated English sentences into Polish while their eye movements and performance were monitored. Translation demands were manipulated by the use of either high- or low-frequency critical words in the sentences. Translation experience had no effect on first-pass viewing durations, but experts used shorter re-view durations than trainees (especially in the low-frequency condition). Professionals translated more accurately and with less pausing than trainees. Translation in the high-frequency condition was more accurate and had shorter pauses than in the low-frequency condition. Critical word translation accuracy increased with the translation onset latency (TOL) for individual sentences, and pause durations were relatively short when TOLs were either relatively short or long. Together, these findings indicate that, in sight translation, the initial phase of normal reading for comprehension is followed by phases in which reading and translation co-occur, and that translation strategy and translation performance are linked.
This study examined the processing and acquisition of novel words and their collocates (i.e., words that frequently co-occur with other words) from reading and the effect of frequency of exposure on this process. First and second language speakers of English read a story with 1) eight exposures to adjective-pseudoword collocations, 2) four exposures to the same collocations, or 3) eight exposures to control collocations. Results of recall and recognition tests showed that participants acquired knowledge not only of the form and meaning of the pseudowords but also of their collocates. The analysis of eye movements showed a significant effect of exposure on the processing of novel collocations for both first and second language readers, with reading times decreasing as a function of exposure. Eight exposures to novel adjective-pseudoword collocations were enough to develop processing speed comparable to that of known collocations. However, when the processing of the individual components of the collocations was analyzed, results showed that eight exposures to the pseudowords were not enough for second language readers to develop processing speed comparable to that for known words. The frequency manipulation in the present study (four vs. eight exposures) did not lead to differences in the learning or processing of collocations. Finally, reading times were not a significant predictor of vocabulary gains.
The comprehension of the Spanish verbal future and past tense by children with developmental language disorder (DLD) was evaluated in an eye-tracking experiment with 96 Spanish- and Catalan-speaking participants divided into four groups: 24 children with DLD (mean age 7.8 years), 24 chronological-age-matched children (mean age 7.8 years), 24 linguistic-level-matched children (mean age 6.8 years), and 24 adults (mean age 22.5 years). The empirical data revealed that children with DLD can comprehend verbal tense, at least under the present experimental conditions. Based on these results, and despite some minor differences between the DLD group and the chronological control group, we suggest that tense morphology comprehension in DLD might be more typical than is generally assumed. Additionally, we propose that verbal comprehension difficulties in children with DLD might be less related to a lack of understanding of specific morphological markers and more to an accumulation of difficulty that leads to a slowdown in linguistic processing.
High subtitle speed undoubtedly affects the viewer experience. However, little is known about how fast subtitles might affect the reading of individual words. This article presents new findings on the effect of subtitle speed on viewers’ reading behavior, using word-based eye-tracking measures with specific attention to word skipping and rereading. In multimodal reading situations such as reading subtitles in video, rereading allows people to correct for oculomotor error or comprehension failure during linguistic processing, or to integrate words with elements of the image to build a situation model of the video. However, the opportunity to reread words, to read the majority of the words in a subtitle, and to read subtitles to completion is likely to be compromised when subtitles are too fast. Participants watched videos with subtitles at 12, 20, and 28 characters per second (cps) while their eye movements were recorded. Comprehension declined as speed increased. Eye-movement records also showed that faster subtitles resulted in more incomplete reading of subtitles. Furthermore, increased speed caused fewer words to be reread following both horizontal eye movements (likely reducing lexical processing) and vertical eye movements (likely reducing higher-level comprehension and integration).
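To make the presentation rates above concrete, the short sketch below computes how long a subtitle of a given length stays on screen at each speed. The example subtitle text is invented; only the 12, 20, and 28 cps rates come from the study.

# Illustrative only: display time available for a subtitle at a given speed.
def display_time_seconds(subtitle: str, chars_per_second: float) -> float:
    """Return how long the subtitle stays on screen at the given cps rate."""
    return len(subtitle) / chars_per_second

line = "An example two-line subtitle of roughly seventy characters in total."
for cps in (12, 20, 28):
    print(f"{cps} cps -> {display_time_seconds(line, cps):.1f} s on screen")

At 28 cps the same line is available for less than half the time it gets at 12 cps, which illustrates why rereading and complete reading become harder as speed increases.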
This study investigated how semantically relevant auditory information might affect the reading of subtitles, and whether such effects might be modulated by the concurrent video content. Thirty-four native Chinese speakers with English as their second language viewed English subtitles in six conditions defined by the nature of the audio (Chinese/L1 audio vs. English/L2 audio vs. no audio) and the presence versus absence of video content. Global eye-movement analyses showed that participants tended to rely less on subtitles with Chinese or English audio than without audio, and the effects of audio were more pronounced when video content was present. Lexical processing of subtitles was not modulated by the audio. However, Chinese audio, which presumably obviated the need to read the subtitles, resulted in more superficial post-lexical processing of the subtitles relative to either English or no audio. By contrast, English audio accentuated post-lexical processing of the subtitles compared with Chinese audio or no audio, indicating that participants might have used the English audio to support subtitle reading (or vice versa) and thus engaged in deeper processing of the subtitles. These findings suggest that, in multimodal reading situations, eye movements are not only controlled by processing difficulties associated with properties of words (e.g., their frequency and length) but are also guided by metacognitive strategies involved in monitoring comprehension and its online modulation by different information sources.
Very little is known about the processes underlying second language (L2) speakers’ understanding of written metaphors and similes. Moreover, most theories of figurative language comprehension do not consider reader-related factors. In this study, we used eye-tracking to examine how native Finnish speakers (N = 63) read written English nominal metaphors (“education is a stairway”) and similes (“education is like a stairway”). Identical topic–vehicle pairs were used in both conditions. After reading, participants evaluated the familiarity of each pair. English proficiency was measured using the Bilingual Language Profile questionnaire and the Lexical Test for Advanced Learners of English. The results showed that readers were more likely to regress within metaphors than within similes, indicating that metaphors require more processing effort than similes. The familiarity of a metaphor and L2 English proficiency modulated this effect. The results are discussed in the light of current theories of figurative language processing.
Visual cognitive processes have traditionally been examined with simplified stimuli, but generalization of these processes to the real world is not always straightforward. Using images, computer-generated images, and virtual environments, researchers have examined the processing of visual information in the real world. Although referred to as scene perception, this research field encompasses many aspects of scene processing. Beyond the perception of visual features, scene processing is fundamentally influenced and constrained by semantic information as well as by spatial layout and spatial associations with objects. In this review, we present recent advances in how scene processing occurs within a few seconds of exposure, how scene information is retained in the long term, and how different tasks affect attention in scene processing. By considering the characteristics of real-world scenes, as well as different time windows of processing, we can develop a fuller appreciation of the research that falls under the wider umbrella of scene processing.
The difficulties in social interaction present in individuals with autism spectrum conditions may be related to abnormal attentional processing of emotional information. Specifically, it has been hypothesized that the hypersensitivity to threat shown by individuals with autism may explain avoidance behaviour. However, this hypothesis is not supported by research, and the psychological mechanisms underlying social interaction in autism remain unclear.
Objectives
The aim of the present study was to examine attentional processing biases by administering a computer-based attentional task to a sample of 27 children with autism spectrum conditions and 25 typically developing participants (aged 11-15 years).
Methods
The initial orienting of attention, attentional engagement, and attentional maintenance to different emotional scenes in competition (i.e., happy, neutral, threatening, and sad) were measured by recording eye movements during a 20-second free-viewing task.
Results
The main findings were: (i) children with autism spectrum conditions showed an initial orienting bias towards threatening stimuli; and (ii) while typically developing children showed an attentional engagement and attentional maintenance bias towards threatening stimuli, children with autism spectrum conditions did not.
Conclusions
The findings of the present study are consistent with affective information-processing theories and shed light on the mechanisms underlying social disturbances in autism spectrum conditions.
Most empirical studies of chess have adopted the cognitive or experimental paradigm within psychology. In this chapter, the main past research findings from this approach are reviewed, together with their contribution to psychological science. The chapter is structured into three main subsections: perception, memory, and thinking. Each of these sections describes more specific themes, such as information-processing models, eye movements, theories of memory in chess, and thinking methods such as pattern recognition and search. The main conclusions from this extensive body of research are summarized through the prism of the individual-differences approach.
The present study investigated whether 4- and 5-year-old Mandarin-speaking children are able to process garden-path constructions in real time when the working-memory burden associated with revision and reanalysis is kept to a minimum. In total, 25 4-year-olds, 25 5-year-olds, and 30 adults were tested using the visual-world paradigm of eye tracking. The eye-gaze patterns indicate that the 4- and 5-year-olds, like the adults, committed to an initial misinterpretation and later successfully revised it. The findings show that preschool children are able to revise and reanalyze their initial commitment and arrive at the correct interpretation using later-encountered linguistic information when processing the garden-path constructions in the current study. The findings also suggest that although the 4-year-olds successfully processed the garden-path constructions in real time, they were not as effective as the 5-year-olds and the adults in revising and reanalyzing their initial mistaken interpretation upon encountering the critical linguistic cue. Taken together, our findings call for a fine-grained model of child sentence processing.
Depression is challenging to diagnose reliably, and the current gold standard for trials of DSM-5 has been agreement between two or more medical specialists. Research studies aiming to objectively predict depression have typically used brain scanning. Less expensive methods from cognitive neuroscience may allow quicker and more reliable diagnoses and contribute to reducing the costs of managing the condition. In the current study, we aimed to develop a novel, inexpensive system for detecting elevated symptoms of depression based on tracking face and eye movements during the performance of cognitive tasks.
Methods
In total, 75 participants performed two novel cognitive tasks with verbal affective distraction elements while their face and eye movements were recorded using inexpensive cameras. Data from 48 participants (mean age 25.5 years, standard deviation of 6.1 years, 25 with elevated symptoms of depression) passed quality control and were included in a case-control classification analysis with machine learning.
Results
Classification accuracy using cross-validation (within-study replication) reached 79% (sensitivity 76%, specificity 82%), when face and eye movement measures were combined. Symptomatic participants were characterised by less intense mouth and eyelid movements during different stages of the two tasks, and by differences in frequencies and durations of fixations on affectively salient distraction words.
Conclusions
Elevated symptoms of depression can be detected from face and eye-movement tracking during cognitive task performance, with close to clinically relevant accuracy (~80%). Future studies should validate these results in larger samples and in clinical populations.
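For readers unfamiliar with the metrics reported above, the sketch below shows one way a cross-validated case-control classification with accuracy, sensitivity, and specificity can be computed. The classifier, features, and labels are placeholders, not the authors’ pipeline or data.

# Illustrative sketch (not the authors' pipeline): cross-validated
# case-control classification with sensitivity and specificity.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.metrics import confusion_matrix

# Hypothetical inputs: X holds per-participant face/eye-movement features,
# y marks elevated depressive symptoms (1) vs. controls (0).
rng = np.random.default_rng(0)
X = rng.normal(size=(48, 20))      # 48 participants, 20 placeholder features
y = rng.integers(0, 2, size=48)    # placeholder labels

clf = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=cv)   # within-study replication analogue

tn, fp, fn, tp = confusion_matrix(y, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)       # true positive rate
specificity = tn / (tn + fp)       # true negative rate
print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")

With random placeholder data the metrics hover around chance; the point of the sketch is only the structure of the evaluation, namely out-of-fold predictions pooled into a single confusion matrix.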
The processing advantage for multiword expressions over novel language has long been attested in the literature. However, the evidence pertains almost exclusively to multiword expression processing in adults. Whether or not other populations are sensitive to phrase frequency effects is largely unknown. Here, we sought to address this gap by recording the eye movements of third and fourth graders, as well as adults (first-language Mandarin) as they read phrases varying in frequency embedded in sentence context. We were interested in how phrase frequency, operationalized as phrase type (collocation vs. control) or (continuous) phrase frequency, and age might influence participants’ reading. Adults read collocations and higher frequency phrases consistently faster than control and lower frequency phrases, respectively. Critically, fourth, but not third, graders read collocations and higher frequency phrases faster than control and lower frequency sequences, respectively, although this effect was largely confined to a late measure. Our results reaffirm phrase frequency effects in adults and point to emerging phrase frequency effects in primary school children. The use of eye tracking has further allowed us to tap into early versus late stages of phrasal processing, to explore different areas of interest, and to probe possible differences between phrase frequency conceptualized as a dichotomy versus a continuum.
Does a concurrent verbal working memory (WM) load constrain cross-linguistic activation? In a visual world study, participants listened to Hindi (L1) or English (L2) spoken words and viewed a display containing the phonological cohort of the translation equivalent (TE cohort) of the spoken word and three distractors. Experiment 1 was administered without a load. Participants then maintained two or four letters (Experiment 2) or two, six, or eight letters (Experiment 3) in WM and were tested on backward sequence recognition after the visual world display. More looks towards TE cohorts were observed in both language directions in Experiment 1. Under load, TE cohort activation was inhibited in the L2–L1 direction and observed only in the early stages after word onset in the L1–L2 direction, suggesting a critical role of language direction. These results indicate that cross-linguistic activation, as seen through eye movements, depends on cognitive resources such as WM.