Managing multimodal data in virtual world research for language learning
Published online by Cambridge University Press: 24 January 2018
Abstract
The study of multimodality in communication has attracted the attention of researchers studying online multimodal environments such as virtual worlds. 3D virtual worlds in particular have drawn the interest of educators and academics because of their multiple verbal channels, which often comprise text and voice, and their 3D graphical interface, which allows for the study of non-verbal modes. This study offers a multilayered transcription method called the Multi-Modal MUVE Method, or 3M Method (Palomeque, 2016; Pujolà & Palomeque, 2010), to account for the different modes present in the 3D virtual world of Second Life. The method works at two levels: the macro level and the micro level. The macro level is a bird’s-eye view representation of the whole session that fits on one page, enabling the researcher to grasp the essence of the class and to identify interesting sequences for analysis. The micro level consists of three transcripts that account for the different communication modes as well as the interface activity that occurs in the virtual world of Second Life. This paper reviews the challenges of multimodal analysis in virtual worlds and shows how the multimodal data were analyzed and interpreted using a multilayered multimodal method of analysis (3M transcription). Examples are provided to show how participants used different modes of communication in the virtual world of Second Life to create meaning or to avoid communication breakdowns.
- Type: Regular papers
- Information: ReCALL, Volume 30, Special Issue 2: Interactions for language learning in and around virtual worlds, May 2018, pp. 177–195
- Copyright: © European Association for Computer Assisted Language Learning 2018