
Crafting the Language of Robotic Agents: A vision for electroacoustic music in human–robot interaction

Published online by Cambridge University Press: 19 September 2022

Frederic Anthony Robinson
Affiliation:
Creative Robotics Lab & Interactive Media Lab, University of New South Wales, Australia. Email: frederic.robinson@unsw.edu.au
Mari Velonaki
Affiliation:
Creative Robotics Lab, University of New South Wales, Australia. Email: mari.velonaki@unsw.edu.au
Oliver Bown
Affiliation:
Interactive Media Lab, University of New South Wales, Australia. Email: o.bown@unsw.edu.au

Abstract

This article discusses the role of electroacoustic music practice in the context of human–robot interaction (HRI), illustrated by the first author’s work creating the sonic language of the interactive robotic artwork Diamandini. It begins with a discussion of the role of sound in social robotics and surveys notable conceptual approaches to robot sound. The central thesis of the article is that electroacoustic music can provide a valuable source of aesthetic considerations and creative practice for the creation of richer and more refined sonic HRIs by giving practitioners new ways to create believable sounding objects, to convey fiction, agency and animacy, and to communicate causality in auditory feedback. To demonstrate this, the authors describe the rationale for treating robot sound design as a compositional process and discuss the implications of the endeavour’s non-linear and site-specific nature. These considerations are illustrated with sound examples and design decisions made throughout the creation of the robotic artwork. The authors conclude with observations on how the compositional process is affected by this particular application context.

Type: Article
Copyright: © The Author(s), 2022. Published by Cambridge University Press



Supplementary Material

Robinson et al. supplementary material 1 (Audio, 61.6 KB)
Robinson et al. supplementary material 2 (Video, 115.9 MB)
Robinson et al. supplementary material 3 (Video, 10.4 MB)