Human Perception of Aural and Visual Disparity in Virtual Environments

dc.contributor.author: Wiker, Erik Daniel
dc.contributor.committeechair: Roan, Michael J.
dc.contributor.committeemember: Wicks, Alfred L.
dc.contributor.committeemember: Tarazaga, Pablo Alberto
dc.contributor.department: Mechanical Engineering
dc.date.accessioned: 2018-09-14T08:00:40Z
dc.date.available: 2018-09-14T08:00:40Z
dc.date.issued: 2018-09-13
dc.description.abstract: With the development of technology over the past decade and the former challenges of virtual environments mitigated, the need for further study of human interaction with these environments has grown apparent. The visual and interaction components of virtual reality applications have been studied comprehensively, but a lack of spatial audio fidelity leaves a need to understand how well humans can localize aural cues and discern audio-visual disparity in these virtual environments. To inform the development of accurate and efficient levels of audio fidelity, a human study was conducted with 18 participants to determine how far a bimodal audio-visual cue must separate before someone notices. As suspected, pairing a visual component with an auditory one biased localization toward the visual component. On average, participants noticed a disparity when the audio component was 33.7° in azimuth from the visual one. There was no significant evidence that the speed or direction of the audio component's disparity improved participants' localization performance. Presence and prior experience did not affect localization performance; however, a larger participant base may be needed to draw further conclusions. Localization ability improved within a few practice rounds. Overall, performance in virtual reality paralleled performance in augmented reality when a visual source biased sound localization, and this biasing can be both a tool and a design constraint for virtual environment developers.
dc.description.abstractgeneral: Virtual reality has overcome a large technological gap over the past decade, making it a strong tool in many applications, from training to entertainment. The need to study audio fidelity in virtual environments emerged from a gap in virtual reality research. Therefore, a human participant study was conducted to see how well people could localize sound in a virtual environment. Participants signaled when they noticed a visual object and a sound split apart. The average across 72 trials with 18 participants was 33.7° of separation on the horizontal plane. This can be both a tool and a design constraint for virtual reality developers, who can use this visual biasing as a guideline for future applications.
dc.description.degree: Master of Science
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:17086
dc.identifier.uri: http://hdl.handle.net/10919/85015
dc.publisher: Virginia Tech
dc.rights: In Copyright
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Virtual Reality
dc.subject: Spatial Audio
dc.subject: Localization
dc.subject: Ambisonics
dc.title: Human Perception of Aural and Visual Disparity in Virtual Environments
dc.type: Thesis
thesis.degree.discipline: Mechanical Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: masters
thesis.degree.name: Master of Science
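
The abstract reports an average detectable audio-visual azimuth disparity of 33.7°. As a purely illustrative sketch of how a developer might apply that figure as a design check, the following Python snippet flags audio sources rendered too far from their visual object; the function names, coordinate convention (+z forward, azimuth measured in the horizontal plane), and scene values are hypothetical, not code from the thesis.

import math

# Illustrative only: the 33.7-degree threshold comes from the abstract above;
# everything else here is a hypothetical example, not the thesis's implementation.
DISPARITY_THRESHOLD_DEG = 33.7  # mean detectable audio-visual separation in azimuth

def azimuth_deg(x: float, z: float) -> float:
    """Azimuth of a horizontal-plane point, in degrees, relative to +z (straight ahead)."""
    return math.degrees(math.atan2(x, z))

def disparity_noticeable(audio_pos, visual_pos) -> bool:
    """Return True if the audio-visual azimuth gap exceeds the reported threshold."""
    gap = abs(azimuth_deg(*audio_pos) - azimuth_deg(*visual_pos))
    gap = min(gap, 360.0 - gap)  # wrap to the shorter arc around the circle
    return gap > DISPARITY_THRESHOLD_DEG

# Example: an audio source ~45 degrees to the right of a visual object straight ahead
audio = (math.sin(math.radians(45.0)), math.cos(math.radians(45.0)))
visual = (0.0, 1.0)
print(disparity_noticeable(audio, visual))  # True: 45 degrees exceeds 33.7

Under this reading of the result, offsets under roughly 33.7° of azimuth would likely go unnoticed because the visual source biases localization, which is what makes the figure usable both as a tolerance (a tool) and as a limit (a design constraint).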

Files

Original bundle (2 files)

Name: Wiker_ED_T_2018.pdf
Size: 2.26 MB
Format: Adobe Portable Document Format

Name: Wiker_ED_T_2018_support_3.pdf
Size: 98.92 KB
Format: Adobe Portable Document Format
Description: Supporting documents