Book contents
- Frontmatter
- Contents
- Figures
- Contributors
- Acknowledgments
- 1 On representing events – an introduction
- 2 Event representation in serial verb constructions
- 3 The macro-event property
- 4 Event representation, time event relations, and clause structure
- 5 Event representations in signed languages
- 6 Linguistic and non-linguistic categorization of complex motion events
- 7 Putting things in places
- 8 Language-specific encoding of placement events in gestures
- 9 Visual encoding of coherent and non-coherent scenes
- 10 Talking about events
- 11 Absent causes, present effects
- References
- Index
5 - Event representations in signed languages
Published online by Cambridge University Press: 01 March 2011
Summary
Introduction
Signed languages are the natural visual languages of the Deaf, and rely mainly on spatial and body-anchored devices (that is, the body, head, facial expression, eye gaze, and the physical space around the body) for linguistic expression. The affordances of the visual-spatial modality allow signers to give detailed information about the relative location, orientation, motion, and activity of the characters in an event, and to encode this information from particular visual perspectives. In spoken languages, devices such as spatial verbs, locatives, and spatial prepositions also help speakers to situate referents in a discourse context and describe relations among them from certain perspectives (e.g. Taylor and Tversky 1992; Berman and Slobin 1994; Gernsbacher 1997). However, due to modality differences, spatial details about an event can be conveyed in a richer way in signed than in spoken languages. Furthermore, much spatial information, including visual perspective, is often obligatorily encoded in predicates of location, motion, and activity in signed languages as a consequence of the modality.
The purpose of this chapter is to give an account of the way in which a signer's choice of visual perspective interacts with, and determines, the choice of different types of event predicates in narrative descriptions of complex spatial events. We also ask whether certain types of events (e.g. events differing in transitivity) are more or less likely to be expressed by certain perspectives and/or types of predicates.
- Event Representation in Language and Cognition, pp. 84–107. Publisher: Cambridge University Press. Print publication year: 2010.