Part 3 - Modeling gesture performance
Published online by Cambridge University Press: 07 January 2010
Summary
The chapters in this part agree that modeling sets a standard for theories of gesture performance. If a process is well understood, it should be possible to design a model of it. If the model is computational, there is the further possibility of actually running the model and comparing its output to nature – in our case, observed gestures and speech. Two models are described in this part. Neither is yet a running computational model, though both have been conceptualized with this goal in mind. The Krauss et al. and de Ruiter models propose ways to add gesture to the Speaking model presented by Levelt in 1989, a computer-like information-processing model of speech production that made no provision for gesture performance (see Gigerenzer & Goldstein 1996 for analysis of the Speaking model as part of the tradition of computer-like models in psychology). The Krauss et al. and de Ruiter models, while agreeing on the general framework, differ in a number of details that affect both the scope of the models and their internal organization. The chapters themselves point out these differences, and each can be read in part as a presentation of its own model and in part as a critical discussion of the other. The third chapter, by McNeill, raises a question for both of the other chapters and for information-processing models in general: how is the context of speaking to be handled? Gestures show that every utterance, though seemingly a self-contained grammatical unit, incorporates content from outside its own structure, termed a ‘catchment’; an extended example of a catchment is described in the chapter.
Language and Gesture, pp. 259-260. Publisher: Cambridge University Press. Print publication year: 2000.