Browsing by Author "Joo, Woohun"
Now showing 1 - 4 of 4
- Experiential Graphic Sonification for Visual and Auditory Communication Design and Musical Expression
  Joo, Woohun (Virginia Tech, 2022-06-28)
  This dissertation describes two sonification platforms designed for image-sound association studies and for artistic work, and presents the study results. A platform for user studies was developed first, and an artistic audiovisual platform was then derived from it. Five image-sound association studies were conducted to test whether people can reliably associate sounds with fundamental shapes (a circle, a triangle, a square, lines, curves, and other custom shapes); the rate of correct associations was high. The same sonification platform was then extended with color into an audiovisual platform for artistic and musical expression. A line-by-line sonification method and an object-oriented method were newly developed to sonify the background and the shapes separately. To enhance the user experience, the sound of each shape was spatialized in a multi-layer speaker environment or a virtual listening environment.
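  The abstract names a line-by-line sonification method but does not specify it, so the following is only a minimal sketch of the general technique: scanning a grayscale image column by column and letting each bright pixel contribute a sine partial whose frequency depends on its row and whose amplitude depends on its brightness. The function name, thresholds, and frequency range are illustrative assumptions, not the dissertation's implementation.

  ```python
  import numpy as np

  def sonify_line_by_line(image, sr=44100, col_dur=0.05,
                          f_min=200.0, f_max=2000.0):
      """Sketch of line-by-line sonification (illustrative, not the
      dissertation's method): scan a grayscale image column by column;
      each bright pixel adds a sine partial whose frequency is set by
      its row (top = high pitch) and whose amplitude by its brightness."""
      n_rows, n_cols = image.shape
      n = int(sr * col_dur)                      # samples per column
      t = np.arange(n) / sr
      freqs = np.linspace(f_max, f_min, n_rows)  # row index -> frequency
      out = np.zeros(n_cols * n)
      for c in range(n_cols):
          col = image[:, c].astype(float)
          active = col > 0.1                     # skip dark background pixels
          if not active.any():
              continue
          tone = (col[active, None] *
                  np.sin(2 * np.pi * freqs[active, None] * t)).sum(axis=0)
          out[c * n:(c + 1) * n] = tone / active.sum()  # normalize mix
      return out
  ```

  A background scan like this can run independently of per-shape synthesis, which matches the abstract's separation of background (line-by-line) and shapes (object-oriented).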
- Introducing Locus: a NIME for Immersive Exocentric Aural Environments
  Sardana, Disha; Joo, Woohun; Bukvic, Ivica Ico; Earle, Gregory D. (ACM, 2019-06)
  Locus is a NIME designed specifically for an interactive, immersive high-density loudspeaker array environment. The system is based on a pointing mechanism for interacting with a sound scene comprising 128 loudspeakers. Users can point anywhere to interact with the system; because the spatial interaction relies on motion capture, no screen is required. Instead, the system is controlled entirely via hand gestures using a glove populated with motion-tracking markers. The main purpose of the system is to offer intuitive physical interaction with perimeter-based spatial sound sources. A further goal is to minimize user-worn technology, and thereby enhance freedom of motion, by relying on environmental sensing devices such as motion-capture cameras or infrared sensors. The resulting creativity-enabling technology is applicable to a broad range of scenarios, from probing the limits of human spatial hearing perception to facilitating learning and artistic performances, including dance. Below we describe our NIME design and implementation, its preliminary assessment, and offer a Unity-based toolkit to facilitate its broader deployment and adoption.
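  The abstract describes pointing at perimeter speakers via motion-tracked hand gestures but gives no algorithm; one plausible core step is selecting the loudspeaker whose direction best matches the pointing ray. The sketch below is a hypothetical illustration of that step only (the function name and vector conventions are assumptions, not the Locus implementation).

  ```python
  import math

  def nearest_speaker(origin, direction, speakers):
      """Illustrative sketch: pick the loudspeaker closest in angle to a
      pointing ray. origin/direction are 3-vectors for the hand position
      and pointing direction (e.g., derived from glove markers);
      speakers is a list of 3-D speaker positions on the array perimeter."""
      def norm(v):
          m = math.sqrt(sum(x * x for x in v))
          return tuple(x / m for x in v)
      d = norm(direction)
      best, best_cos = None, -2.0
      for i, s in enumerate(speakers):
          to_s = norm(tuple(s[k] - origin[k] for k in range(3)))
          c = sum(d[k] * to_s[k] for k in range(3))  # cosine of angle to speaker
          if c > best_cos:
              best, best_cos = i, c
      return best
  ```

  Maximizing the cosine of the angle avoids any explicit `acos` call and is robust to unnormalized input vectors.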
- New Interfaces for Spatial Musical Expression
  Bukvic, Ivica Ico; Sardana, Disha; Joo, Woohun (ACM, 2020-07)
  With the proliferation of venues equipped with high-density loudspeaker arrays, there is growing interest in developing new interfaces for spatial musical expression (NISME). Of particular interest are interfaces that focus on the emancipation of the spatial domain as the primary dimension of musical expression. Here we present Monet, a NISME that leverages a multitouch pressure-sensitive surface and the D⁴ library's spatial mask, thereby allowing for a unique approach to interactive spatialization. Further, we present a study with 22 participants designed to assess its usefulness and to compare it with Locus, a NISME introduced in 2019 as part of a localization study and built on the same design principle of natural gestural interaction with spatial content. Lastly, we briefly discuss the use of both NISMEs in two artistic performances and propose a set of guidelines for further exploration in the NISME domain.
- Studies In Spatial Aural Perception: Establishing Foundations For Immersive Sonification
  Bukvic, Ivica Ico; Earle, Gregory D.; Sardana, Disha; Joo, Woohun (Georgia Institute of Technology, 2019-06)
  The Spatial Audio Data Immersive Experience (SADIE) project aims to identify new foundational relationships in human spatial aural perception and to validate existing ones. Our infrastructure consists of an intuitive interaction interface, an immersive exocentric sonification environment, and a layer-based amplitude-panning algorithm. Here we highlight the system's unique capabilities and report findings from an initial externally funded study assessing human aural spatial perception capacity. Compared with the existing literature on egocentric spatial perception, our data show that an immersive exocentric environment enhances spatial perception, and that a physical implementation using high-density loudspeaker arrays enables significantly better spatial perception accuracy than egocentric and virtual binaural approaches. The preliminary observations suggest that human spatial aural perception capacity in real-world-like immersive exocentric environments that allow head and body movement is significantly greater than in egocentric scenarios where such movement is restricted; the use of immersive exocentric environments is therefore advised in the design of immersive auditory displays. Further, our data identify a significant gap between physical and virtual spatial aural perception accuracy, suggesting that virtual aural immersion may require further development before it can be considered a viable alternative.
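  The abstract names a layer-based amplitude-panning algorithm without specifying it. The following is a generic sketch of that family of techniques, assuming constant-power panning between adjacent speakers within a ring and a constant-power crossfade between the two rings bracketing the source elevation; the function names, speaker layout, and gain law are illustrative assumptions, not the SADIE implementation.

  ```python
  import math

  def pair_gains(az, spk_a, spk_b):
      """Constant-power panning between two adjacent speakers (azimuths
      in degrees); returns (gain_a, gain_b) with gain_a**2 + gain_b**2 == 1."""
      span = (spk_b - spk_a) % 360 or 360
      pos = ((az - spk_a) % 360) / span          # 0..1 between the pair
      theta = pos * math.pi / 2
      return math.cos(theta), math.sin(theta)

  def layer_pan(az, el, layers):
      """Sketch of layer-based amplitude panning (illustrative, not SADIE's
      algorithm). layers: list of (elevation, [speaker azimuths]) sorted by
      elevation. Returns {(layer_index, speaker_index): gain}."""
      els = [e for e, _ in layers]
      if el <= els[0]:                           # clamp below lowest ring
          li = lj = 0; f = 0.0
      elif el >= els[-1]:                        # clamp above highest ring
          li = lj = len(layers) - 1; f = 0.0
      else:
          lj = next(i for i, e in enumerate(els) if e >= el)
          li = lj - 1
          f = (el - els[li]) / (els[lj] - els[li])
      gains = {}
      # constant-power crossfade between the two bracketing rings
      for idx, gl in ((li, math.cos(f * math.pi / 2)),
                      (lj, math.sin(f * math.pi / 2))):
          if gl <= 0.0:
              continue
          spks = sorted(layers[idx][1])
          n = len(spks)
          for k in range(n):                     # find the pair around az
              a, b = spks[k], spks[(k + 1) % n]
              if ((az - a) % 360) <= ((b - a) % 360 or 360):
                  break
          ga, gb = pair_gains(az, a, b)
          gains[(idx, k)] = gains.get((idx, k), 0.0) + gl * ga
          key = (idx, (k + 1) % n)
          gains[key] = gains.get(key, 0.0) + gl * gb
      return gains
  ```

  Because both the in-ring pair law and the between-ring crossfade are constant-power, the squared gains always sum to one, so perceived loudness stays stable as a source moves across the array.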