
How do adults and children process referentially ambiguous pronouns?

Published online by Cambridge University Press:  23 February 2004

IRINA A. SEKERINA
Affiliation:
Department of Psychology, College of Staten Island of the City University of New York
KARIN STROMSWOLD
Affiliation:
Department of Psychology and the Center for Cognitive Science, Rutgers University
ARILD HESTVIK
Affiliation:
University of Bergen, Norway, and the CUNY Graduate School and University Center

Abstract

In two eye-tracking experiments, we investigate adults' and children's on-line processing of referentially ambiguous English pronouns. Sixteen adults and sixteen four- to seven-year-olds listened to sentences containing either an unambiguous reflexive (himself) or an ambiguous pronoun (him) and chose a picture with two characters corresponding to those in the sentence. For adults, the behavioural data (responses and reaction times) indicate that pronouns are referentially ambiguous. Adults' eye movements show competition between looks to sentence-internal and sentence-external referents for pronouns, but not for reflexives. Children overwhelmingly prefer the sentence-internal referent in the off-line picture selection task. However, their eye movements reveal an implicit awareness of referential ambiguity that develops earlier than the explicit knowledge tapped by the picture selection task. This discrepancy between performance on a looking measure and a pointing measure in children's processing is explained by a general dissociation between implicit and explicit knowledge proposed in the recent literature on cognitive development.

Type
Research Article
Copyright
2004 Cambridge University Press

Footnotes

This work was partially supported by National Science Foundation grants BCS-4875168 and BCS-0042561 to the second author. We thank the Rutgers University Center for Cognitive Science for use of their eye-tracking facilities provided by the NSF grant. We thank Margaret Sudhakar and Ned Norland for assisting us in preparing and conducting the experiments, Jennifer Venditti for providing her expertise in analysing the materials acoustically, and Gary Winkel for help with the statistical analyses. We also wish to thank the students from Irwing Elementary School (Speech coordinator Ellyn Atherton) and the children from the Yellow Brick Road Preschool (Director Grace Puleo) in Highland Park, NJ, for their help in running the study. We are also grateful to Eva Fernández, Patricia Brooks, Julien Musolino, Jill Razdan, and two anonymous reviewers for very helpful commentary on earlier drafts of this article.