26 - Gestural Interfaces in Human–Computer Interaction

from Part V - Gestures in Relation to Interaction

Published online by Cambridge University Press: 01 May 2024

Alan Cienki
Affiliation: Vrije Universiteit, Amsterdam

Summary

This chapter concerns the use of manual gestures in human–computer interaction (HCI) and user experience (UX) research. Our goal is to empower gesture researchers to conduct meaningful research in these fields. We therefore focus on the similarities and differences between HCI research, UX research, and gesture studies with respect to theoretical frameworks, relevant research questions, empirical methods, and use cases, i.e., the contexts in which gesture control can be used. As part of this, we touch on the role of various gesture-detecting technologies in conducting this kind of research. The chapter ends with our suggestions for how gesture researchers can extend this body of knowledge and add value to the implementation and instantiation of gesture-controlled systems.

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2024
