Human performance of computational sound models for immersive environments


Publisher: Edinburgh University Press
Copyright: © Edinburgh University Press and the Contributors
Subject: Articles; Film, Media and Cultural Studies
ISSN: 2042-8855
eISSN: 2042-8863
DOI: 10.3366/sound.2014.0059

Abstract

This paper presents a method for incorporating the expressivity of human performance into real-time computational audio generation for games and other immersive environments. In film, Foley artistry is widely recognised to enrich the viewer's experience, but the creativity of the Foley artist cannot be easily transferred to interactive environments where sound cannot be recorded in advance. We present new methods for human performers to control computational audio models, using a model of a squeaky door as a case study. We focus on the process of selecting control parameters and on the mapping layer between gesture and sound, referring to results from a separate user evaluation study. By recording high-level control parameters rather than audio samples, performances can be later varied to suit the details of the interactive environment.

Keywords: computational audio; Foley; performable sound model; interaction; immersive environments; evaluation

The New Soundtrack 4.2 (2014): 139–155. DOI: 10.3366/sound.2014.0059. © Edinburgh University Press and the Contributors. www.euppublishing.com/SOUND

INTRODUCTION

Synthesised sound offers us a new way of thinking about sound design, and promises a much-needed solution to the complex problem of designing sound for dynamic interactive environments. However, the lack of human performance in the currently proposed design processes of computational

Journal

The New Soundtrack, Edinburgh University Press

Published: Sep 1, 2014
