
Interactive Sonification of the Spatial Behavior of Human and Synthetic Characters in a Mixed-Reality Environment

Year: 2007 | Citations: 11 | References: 15

Abstract

It is generally acknowledged that music is a powerful carrier of emotions [4, 21], and that audition can play an important role in enhancing the sensation of presence in Virtual Environments [5, 22]. In mixed-reality environments and interactive multimedia systems such as Massively Multiplayer Online Role-Playing Games (MMORPGs), improving the user's perception of immersion is crucial. Nonetheless, the sonification of these environments is often reduced to its simplest expression, namely a set of prerecorded soundtracks. Background music often relies on repetitive, predetermined, and somewhat predictable musical material. Hence, there is a need for a sonification scheme that can generate context-sensitive, adaptive, rich, and consistent music in real time. In this paper we introduce a framework for the sonification of the spatial behavior of multiple human and synthetic characters in a mixed-reality environment. Previously we have used RoBoser [1] to sonify several interactive installations, including the interaction between humans and a large-scale accessible space called Ada [2]. Here we investigate the applicability of the RoBoser framework to the sonification of the continuous and dynamic interaction between individuals populating a mixed-reality space. We propose a semantic layer that maps sensor data into intuitive parameters for the control of music generation, and show that the musical events are directly influenced by the spatial behavior of human and synthetic characters in the space, thus creating a behavior-dependent sonification that enhances the user's perception of immersion.
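The abstract's semantic layer — mapping raw sensor data (character positions and velocities) into intuitive music-control parameters — could be sketched as follows. This is a minimal illustration, not the paper's actual method: the function name, the dispersion/activity features, and the mappings to tempo and note density are all hypothetical assumptions chosen for the example.

```python
import math

def sonification_params(positions, velocities, room_diag):
    """Hypothetical semantic layer: map character positions/velocities
    to intuitive music-control parameters.

    positions:  list of (x, y) tuples, one per character
    velocities: list of (vx, vy) tuples, one per character
    room_diag:  diagonal length of the space, used to normalize distances
    """
    n = len(positions)
    # Mean pairwise distance, normalized to [0, 1]: a proxy for how
    # dispersed the characters are across the space.
    dists = [math.dist(positions[i], positions[j])
             for i in range(n) for j in range(i + 1, n)]
    dispersion = (sum(dists) / len(dists)) / room_diag if dists else 0.0
    # Mean speed: a proxy for overall activity in the space.
    activity = sum(math.hypot(vx, vy) for vx, vy in velocities) / n
    # Assumed mappings: higher activity -> faster tempo (60-150 BPM),
    # tighter grouping -> denser musical texture.
    tempo_bpm = 60 + min(activity, 2.0) * 45
    note_density = max(0.0, min(1.0, 1.0 - dispersion))
    return {"tempo_bpm": tempo_bpm, "note_density": note_density}
```

For example, two characters 5 m apart in a room with a 10 m diagonal, each moving at 1 m/s, would yield a tempo of 105 BPM and a note density of 0.5 — the point being that the generated music tracks spatial behavior continuously rather than switching between prerecorded tracks.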
