
Finite-state multimodal integration and understanding

Published online by Cambridge University Press:  19 May 2005

MICHAEL JOHNSTON
Affiliation:
AT&T Labs – Research, 180 Park Ave, Florham Park, NJ 07932, USA e-mail: johnston@research.att.com
SRINIVAS BANGALORE
Affiliation:
AT&T Labs – Research, 180 Park Ave, Florham Park, NJ 07932, USA e-mail: srini@research.att.com

Abstract

Multimodal interfaces are systems that allow input and/or output to be conveyed over multiple channels such as speech, graphics, and gesture. In addition to parsing and understanding separate utterances from individual modes such as speech or gesture, multimodal interfaces also need to parse and understand composite multimodal utterances that are distributed over multiple input modes. We present an approach in which multimodal parsing and understanding are achieved using a weighted finite-state device that takes speech and gesture streams as inputs and outputs their joint interpretation. In comparison to previous approaches, this approach is significantly more efficient and provides a more general probabilistic framework for multimodal ambiguity resolution. The approach also enables tight coupling of multimodal understanding with speech recognition. Since the finite-state approach makes lighter computational demands, it can be deployed on a broader range of mobile platforms. We provide speech recognition results that demonstrate the compensation effects of exploiting gesture information in a directory assistance and messaging task using a multimodal interface.
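To make the core idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: a toy transducer over paired speech and gesture streams, where each transition consumes a (speech token, gesture token) pair (either of which may be epsilon) and emits a fragment of the joint meaning with an additive weight. The states, vocabulary ("Gp" for a deictic gesture on a person), and meaning representation here are hypothetical stand-ins for the paper's multimodal grammar.

```python
# Toy weighted finite-state device over two input tapes (speech, gesture).
# Transition table: (state, speech_token, gesture_token) -> (next_state, meaning_out, weight)
# "eps" marks an epsilon (no token consumed on that tape). All symbols are illustrative.
TRANSITIONS = {
    (0, "phone", "eps"): (1, "phone(", 0.1),
    (1, "this", "Gp"): (2, "person1", 0.2),  # spoken deictic + pointing gesture
    (2, "eps", "eps"): (3, ")", 0.0),        # final epsilon move closes the predicate
}
FINAL_STATE = 3

def interpret(speech, gesture):
    """Walk the transducer over both streams; return (joint meaning, total weight).

    Lower weight means a better-scoring interpretation, as with costs in a
    weighted finite-state transducer.
    """
    state, si, gi = 0, 0, 0
    meaning, weight = [], 0.0
    while state != FINAL_STATE:
        # Try consuming a real token on each tape before falling back to epsilon.
        for s_tok in ([speech[si]] if si < len(speech) else []) + ["eps"]:
            for g_tok in ([gesture[gi]] if gi < len(gesture) else []) + ["eps"]:
                key = (state, s_tok, g_tok)
                if key in TRANSITIONS:
                    state, out, w = TRANSITIONS[key]
                    meaning.append(out)
                    weight += w
                    if s_tok != "eps":
                        si += 1
                    if g_tok != "eps":
                        gi += 1
                    break
            else:
                continue
            break
        else:
            raise ValueError("no valid multimodal parse")
    return "".join(meaning), weight

meaning, weight = interpret(["phone", "this"], ["Gp"])
# "phone this" + a pointing gesture yields the joint meaning "phone(person1)"
```

In the full approach, such a device would be composed with a speech recognition lattice and a gesture lattice, so that each stream's scores constrain the other's interpretation.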

Type
Papers
Copyright
2005 Cambridge University Press
