Bilingual infants rely differently from monolinguals on facial information, such as lip patterns, to differentiate their native languages. This may explain, at least in part, why young monolinguals and bilinguals show differences in social attention. For example, in the first year, bilinguals attend faster and more often to static faces over non-faces than do monolinguals (Mercure et al., 2018). However, the developmental trajectories of these differences are unknown. In this pre-registered study, data were collected from 15- to 18-month-old monolinguals (English) and bilinguals (English and another language) to test whether group differences in face-looking behaviour persist into the second year. We predicted that bilinguals would orient more rapidly and more often to static faces than monolinguals. Results supported the first but not the second hypothesis. This suggests that, even into the second year of life, toddlers’ rapid visual orientation to static social stimuli is sensitive to early language experience.
Several decision-making models predict that it should be possible to affect real binary choices by manipulating the relative amount of visual attention that decision-makers pay to the two alternatives. We present the results of three behavioral experiments testing this prediction. Visual attention is controlled by manipulating the amount of time subjects fixate on the two items. The manipulation has a differential impact on appetitive and aversive items. Appetitive items are 6 to 11% more likely to be chosen in the long fixation condition. In contrast, aversive items are 7% less likely to be chosen in the long fixation condition. The effect is present for primary goods, such as foods, and for higher-order durable goods, such as posters.
We investigated cross-modal influences between speech and sign in hearing bimodal bilinguals, proficient in a spoken and a sign language, and their consequences for visual attention during message preparation, using eye-tracking. We focused on spatial expressions in which sign languages, unlike spoken languages, have a modality-driven preference to mention grounds (big objects) prior to figures (smaller objects). We compared hearing bimodal bilinguals’ spatial expressions and visual attention in Dutch and Dutch Sign Language (N = 18) to those of their hearing non-signing (N = 20) and deaf signing peers (N = 18). In speech, hearing bimodal bilinguals produced more ground-first descriptions and fixated grounds more than hearing non-signers, showing influence from sign. In sign, they used as many ground-first descriptions as deaf signers and fixated grounds equally often, demonstrating no influence from speech. Cross-linguistic influence on word order preference and visual attention in hearing bimodal bilinguals thus appears to be one-directional, modulated by modality-driven differences.
Visual cognitive processes have traditionally been examined with simplified stimuli, but generalization of these processes to the real world is not always straightforward. Using images, computer-generated images, and virtual environments, researchers have examined processing of visual information in the real world. Although referred to as scene perception, this research field encompasses many aspects of scene processing. Beyond the perception of visual features, scene processing is fundamentally influenced and constrained by semantic information as well as spatial layout and spatial associations with objects. In this review, we will present recent advances in how scene processing occurs within a few seconds of exposure, how scene information is retained over the long term, and how different tasks affect attention in scene processing. By considering the characteristics of real-world scenes, as well as different time windows of processing, we can develop a fuller appreciation for the research that falls under the wider umbrella of scene processing.
Williams syndrome (WS) is a rare genetic disorder caused by a deletion at chromosome 7q11.23. WS is associated with high empathy, relatively good face memory and low social anxiety. Despite these strengths, WS individuals typically have an intellectual disability, difficulties with visuospatial perception, non-social anxiety and complex social cognition. Attention to others’ eyes is crucial for adaptive social understanding. Consequently, eyes trigger quick and automatic gaze shifts in typically developing individuals. It is not known whether this process is atypical in WS.
Objectives
To examine visual attention to others’ eyes in Williams syndrome.
Methods
Individuals with WS (n = 35; mean age 23.5 years) were compared to controls (n = 167) in stratified age groups (7 months, 8-12 years, 13-17 years, adults). Participants were primed to look at either the eyes or the mouth of human faces. The latency and likelihood of a first gaze shift from, or to, the eyes was measured with eye tracking.
Results
WS individuals were less likely, and slower, to orient to the eyes than typically developing controls in all age groups from eight years of age (all p < .001), but did not differ from 7-month-old infants. In contrast to healthy individuals from eight years and above, WS individuals did not show a preference to orient towards the eyes relative to the mouth.
Conclusions
Despite the hyper-social behavioral phenotype, WS is associated with reduced attention to others’ eyes during early stages of processing. This could contribute to the difficulties with complex social cognition observed in this group.
Humans can focus their attention narrowly (e.g., to read this text) or broadly (e.g., to determine which way a large crowd of people are moving). This Element comprehensively considers attentional breadth. Section 1 introduces the concept of attentional breadth, while Section 2 considers measures of attentional breadth. In particular, this section provides a critical discussion of the types of psychometric evidence which should be sought to establish the validity of measures of attentional breadth and reviews the available evidence through this lens. Section 3 considers the visual task performance consequences of attentional breadth, including prescribing several key methodological criteria that studies that manipulate attentional breadth need to meet, as well as a discussion of relevant theories and avenues for future theoretical development. Section 4 discusses the utility of the exogenous–endogenous distinction from covert shifts of attention for understanding the performance consequences of attentional breadth. Finally, Section 5 provides concluding remarks.
Social-communication skills emerge within the context of rich social interactions, facilitated by an infant's capacity to attend to people and objects in the environment. Disruption in this early neurobehavioral process may decrease the frequency and quality of social interactions and learning opportunities, potentially leading to downstream deleterious effects on social development. This study examined early attention in infant siblings of children with autism spectrum disorder (ASD) who are at risk for social and communication delays. Visual and auditory attention was mapped from age 1 week to 5 months in infants at familial risk for ASD (high risk; N = 41) and low-risk typically developing infants (low risk; N = 39). At 12 months, a subset of participants (N = 40) was administered assessments of social communication and nonverbal cognitive skills. Results revealed that high-risk infants performed lower on attention tasks at 2 and 3 months of age compared to low-risk infants. A significant association between overall attention at 3 months and developmental outcome at 12 months was observed for both groups. These results provide evidence for early vulnerabilities in visual attention for infants at risk for ASD during a period of important neurodevelopmental transition (between 2 and 3 months) when attention has significant implications for social communication and cognitive development.
Attention-deficit/hyperactivity disorder (ADHD) is a neurodevelopmental disorder which frequently persists into adulthood. The primary goal of the current study was to (a) investigate attentional functions of stimulant medication-naïve adults with ADHD, and (b) investigate the effects of 6 weeks of methylphenidate treatment on these functions.
Methods
The study was a prospective, non-randomized, non-blinded, 6-week follow-up design with 42 stimulant medication-naïve adult patients with ADHD, and 42 age and parental education-matched healthy controls. Assessments included measures of visual attention, based on Bundesen's Theory of Visual Attention (TVA), which yields five precise measures of aspects of visual attention; general psychopathology; ADHD symptoms; dyslexia screening; and estimates of IQ.
Results
At baseline, significant differences were found between patients and controls on three attentional parameters: visual short-term memory capacity, threshold of conscious perception, and to a lesser extent visual processing speed. Secondary analyses revealed no significant correlations between TVA parameter estimates and severity of ADHD symptomatology. At follow-up, significant improvements were found specifically for visual processing speed; this improvement had a large effect size, and remained when controlling for re-test effects, IQ, and dyslexia screen performance. There were no significant correlations between changes in visual processing speed and changes in ADHD symptomatology.
Conclusions
ADHD in adults may be associated with deficits in three distinct aspects of visual attention. Improvements after 6 weeks of medication are seen specifically in visual processing speed, which could represent an improvement in alertness. Clinical symptoms and visual attentional deficits may represent separate aspects of ADHD in adults.
Objectives: Healthy individuals often have a leftward and upward attentional spatial bias; however, there is a reduction of this leftward bias with aging. The right hemisphere mediates leftward spatial attention and age-related reduction of right hemispheric activity may account for this reduced leftward bias. The right hemisphere also appears to be responsible for upward bias, and this upward bias might reduce with aging. Alternatively, whereas the dorsal visual stream allocates attention downward, the ventral stream allocates attention upward. Since with aging there is a greater atrophy of the dorsal than ventral stream, older participants may reveal a greater upward bias. The main purpose of this study was to learn if aging influences the vertical allocation of spatial attention. Methods: Twenty-six young (17 males; mean age 44.62±2.57 years) and 25 healthy elderly (13 males; mean age 72.04±.98 years), right-handed adults performed line bisections using 24 vertical lines (24 cm long and 2 mm thick) aligned with their midsagittal plane. Results: Older adults had a significantly greater upward bias than did younger adults. Conclusions: Normal upward attentional bias increases with aging, possibly due to an age-related reduction of the dorsal attentional stream that is responsible for the allocation of downward attention. (JINS, 2018, 24, 1121–1124)
Embodied theories of language posit that the human brain’s adaptations for language exploit pre-existing perceptual and motor mechanisms for interacting with the world. In this paper we propose an embodied account of the linguistic distinction between singular and plural, encoded in the system of grammatical number in many of the world’s languages. We introduce a neural network model of visual object classification and spatial attention, informed by a collection of findings in psychology and neuroscience. The classification component of the model computes the type associated with a visual stimulus without identifying the number of objects present. The distinction between singular and plural is made by a separate mechanism in the attentional system, which directs the classifier towards the local or global features of the stimulus. The classifier can directly deliver the semantics of uninflected concrete noun stems, while the attentional mechanism can directly deliver the semantics of singular and plural number features.
In this paper we examine how vague quantifiers, such as few, several, lots of, map onto non-linguistic number systems. In particular, our focus is to examine how judgements about vague quantifiers are affected by the presence of objects in visual scenes other than those being referred to. An experiment is presented that manipulated the number of objects in a visual scene (men playing golf; the ‘focus’ objects) together with the number of other objects in those scenes and their similarity—in terms of form (women or crocodiles) and function (playing golf, not playing golf)—to the focus objects. We show that the number of other objects in a scene impacts upon quantifier judgements even when those objects are in a different category to the focus objects. We discuss the results in terms of the mapping between the large approximate number (estimation) system and language.
Using eye-tracking as a window on cognitive processing, this study investigates language effects on attention to motion events in a non-verbal task. We compare gaze allocation patterns by native speakers of German and Modern Standard Arabic (MSA), two languages that differ with regard to the grammaticalization of temporal concepts. Findings of the non-verbal task, in which speakers watch dynamic event scenes while performing an auditory distracter task, are compared to gaze allocation patterns which were obtained in an event description task, using the same stimuli. We investigate whether differences in the grammatical aspectual systems of German and MSA affect the extent to which endpoints of motion events are linguistically encoded and visually processed in the two tasks. In the linguistic task, we find clear language differences in endpoint encoding and in the eye-tracking data (attention to event endpoints) as well: German speakers attend to and linguistically encode endpoints more frequently than speakers of MSA. The fixation data in the non-verbal task show similar language effects, providing relevant insights with regard to the language-and-thought debate. The present study is one of the few studies that focus explicitly on language effects related to grammatical concepts, as opposed to lexical concepts.
Nutrition information aims to reduce information asymmetries between manufacturers and consumers. To date, however, it remains unclear how nutrition information that is shown on the front of the packaging should be designed in order to increase both visual attention and the tendency to make healthful food choices. The present study aimed to address this gap in research.
Design
An experimental laboratory study applying mobile eye-tracking technology manipulated the presence of two directive cues, i.e. health marks and traffic light colour-coding, as part of front-of-package nutrition information on actual food packages.
Setting
Participants wore mobile eye-tracking glasses during a simulated shopping trip. After the ostensible study had finished, they chose one snack (from an assortment of fifteen snacks) as a thank you for participation. All products were labelled with nutrition information according to the experimental condition.
Subjects
Consumers (n 160) who were mainly responsible for grocery shopping in their household participated in the study.
Results
The results showed that, in the absence of traffic light colouring, health marks reduced attention to the snack food packaging. This effect did not occur when the colouring was present. The combination of the two directive cues (v. presenting traffic light colours only) made consumers choose more healthful snacks, according to the nutrient profile.
Conclusions
Public policy makers may recommend that retailers and manufacturers implement consistent front-of-pack nutrition labelling that contains both health marks and traffic light colouring as directive cues. The combination of the cues may increase the likelihood of healthful decision making.
The psychometric properties of a Binocular Rivalry (BR)-based test on a group of 159 participants (57 with attention deficit hyperactivity disorder, ADHD) aged between 6 and 15 years are presented. Two factors, which explained 56.82% of the variance, were obtained by exploratory factor analysis: (a) Alternations and Duration of exclusive dominances, and (b) Decision time. Reliability was excellent (Cronbach’s α = .834 and .884). The ADHD group showed fewer alternations and longer duration of dominances and decision time than the control group. Correlations between measures of BR, IQ, working memory, and processing speed of the WISC-IV, and ADHD symptoms, assessed by parents and teachers, ranged between low and medium.
This chapter outlines behavioral measures related to the control of attention and functional theories of attention based on such measures. It focuses on the control of visual attention in both normal and neurologically impaired individuals. The major types of attention are: spatial attention, in which stimuli are selected based on their position in space; object-based attention, in which stimuli are selected based on their identity; attentional selection in visual working memory, in which attention selects items that will be remembered; and executive attention, in which attention is involved in choosing which task or behavior an observer will perform. The chapter provides evidence for a number of cerebral sites that appear to be involved in the overall control of attention. Understanding how these sites interact and how they relate to functional theories of attentional control increases understanding of normal and disordered attentional control processes.
Functional specialization in the lower and upper visual fields in humans is analyzed in relation to the origins of the primate visual system. Processing differences between the vertical hemifields are related to the distinction between near (peripersonal) and far (extrapersonal) space, which are biased toward the lower and upper visual fields, respectively. Nonlinear/global processing is required in the lower visual field in order to perceive the optically degraded and diplopic images in near vision, whereas objects in far vision are searched for and recognized primarily using linear/local perceptual mechanisms. The functional differences between near and far visual space are correlated with their disproportionate representations in the dorsal and ventral divisions of visual association cortex, respectively, and in the magnocellular and parvocellular pathways that project to them. Advances in far visual capabilities and forelimb manipulatory skills may have led to a significant enhancement of these functional specializations.
Patients with visual neglect (VN) tend to start cancellation tasks from the right. This exceptional initial rightward bias is also seen in some right hemisphere (RH) stroke patients who do not meet the criteria of VN in conventional tests. The present study compared RH infarct patients’ (examined on average 4 days post-stroke) and healthy controls’ starting points (SPs) in three cancellation tasks of the Behavioural Inattention Test (BIT). Furthermore, task-specific guideline values were defined for a normal SP to differentiate the performance of healthy subjects from that of patients with subclinical inattention. Conventional tests indicated that 15 of the 70 RH infarct patients had VN. The control group comprised 44 healthy volunteers. In each task, the VN group started the cancellations mainly from the right. The non-neglect and healthy groups initiated most cancellations from the left, more so in the healthy group. Starting more than one BIT task outside the guideline value indicated pathological inattention, as this was typical among the VN patients, but exceptional among the healthy subjects. One-third of the non-neglect patients showed pathological inattention by starting more than one task outside the guideline value. Clinical assessment of VN should, therefore, include an evaluation of the SPs to detect this subtle form of neglect. (JINS, 2010, 16, 902–909.)
In the present study we investigated consumers’ visual attention to nutrition information on food products using an indirect instrument, an eye tracker. In addition, we looked at whether people with a health motivation focus on nutrition information on food products more than people with a taste motivation.
Design
Respondents were instructed to choose one of five cereals for either the kindergarten (health motivation) or the student cafeteria (taste motivation). The eye tracker measured their visual attention during this task. Then respondents completed a short questionnaire.
Setting
Laboratory of the ETH Zurich, Switzerland.
Subjects
Videos and questionnaires from thirty-two students (seventeen males; mean age 24·91 years) were analysed.
Results
Respondents with a health motivation viewed the nutrition information on the food products for longer and more often than respondents with a taste motivation. Health motivation also seemed to stimulate deeper processing of the nutrition information. The student cafeteria group focused primarily on the other information and did this for longer and more often than the health motivation group. Additionally, the package design affected participants’ nutrition information search.
Conclusions
Two factors appear to influence whether people pay attention to nutrition information on food products: their motivation and the product’s design. If the package design does not sufficiently facilitate the localization of nutrition information, health motivation can stimulate consumers to look for nutrition information so that they may make a more deliberate food choice.
As a step toward understanding the mechanism by which targets are selected for smooth-pursuit eye movements, we examined the behavior of the pursuit system when monkeys were presented with two discrete moving visual targets. Two rhesus monkeys were trained to select a small moving target identified by its color in the presence of a moving distractor of another color. Smooth-pursuit eye movements were quantified in terms of the latency of the eye movement and the initial eye acceleration profile. We have previously shown that the latency of smooth pursuit, which is normally around 100 ms, can be extended to 150 ms or shortened to 85 ms depending on whether there is a distractor moving in the opposite or same direction, respectively, relative to the direction of the target. We have now measured this effect for a 360 deg range of distractor directions, and distractor speeds of 5–45 deg/s. We have also examined the effect of varying the spatial separation and temporal asynchrony between target and distractor. The results indicate that the effect of the distractor on the latency of pursuit depends on its direction of motion, and its spatial and temporal proximity to the target, but depends very little on the speed of the distractor. Furthermore, under the conditions of these experiments, the direction of the eye movement that is emitted in response to two competing moving stimuli is not a vectorial combination of the stimulus motions, but is solely determined by the direction of the target. The results are consistent with a competitive model for smooth-pursuit target selection and suggest that the competition takes place at a stage of the pursuit pathway that is between visual-motion processing and motor-response preparation.
These studies provide evidence for slowed spatial orienting of attention in autism. A group of well-defined adult autistic subjects and age-matched normal controls performed a traditional spatial cueing task in which attention-related response facilitation is indexed by speed of target detection. To address the concern that motor impairment may interfere with interpretation of response time measures in those with neurologic abnormality, we also used a new adaptation of the traditional task that depended on accuracy of response (target discrimination) rather than speed of response. This design allowed separation of time to process and respond to target information from the time to move and engage (orient) attention. Results from both tasks were strikingly similar. Normal subjects oriented attention very quickly, and showed maximal performance facilitation at a cued location within 100 ms. Autistic subjects oriented attention much more slowly and showed increasing benefits of a spatial cue with increasing cue-to-target delays. These results are consistent with previous reports that patients with autism, the majority of whom have developmental abnormalities of the cerebellum, as well as those with acquired damage to the cerebellum, are slow to shift attention between and within modalities. This paper also addresses the variability in behavioral findings in autism, and suggests that many of the apparently contradictory findings may actually reflect sampling differences in patterns of brain pathology. (JINS, 1996, 2, 541–550.)