Browsing by Author "Bukvic, Ivica Ico"
- 3D Time-Based Aural Data Representation Using D⁴ Library’s Layer Based Amplitude Panning Algorithm
  Bukvic, Ivica Ico (Georgia Institute of Technology, 2016-07)
  The following paper introduces a new Layer Based Amplitude Panning algorithm and the supporting D⁴ library of rapid prototyping tools for 3D time-based data representation using sound. The algorithm is designed to scale and support a broad array of configurations, with particular focus on High Density Loudspeaker Arrays (HDLAs). The supporting rapid prototyping tools are designed to leverage oculocentric strategies for importing, editing, and rendering data, offering an array of innovative approaches to spatial data editing and representation through the use of sound in HDLA scenarios. The ensuing D⁴ ecosystem aims to address the shortcomings of existing approaches to spatial aural representation of data and offers unique opportunities for furthering research in spatial data audification and sonification, as well as in transportable and scalable spatial media creation and production.
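As a rough illustration of the general idea behind layer-based panning (not the actual D⁴/LBAP implementation), the sketch below splits a source between the two loudspeaker layers nearest to its elevation using an equal-power crossfade; the function name and the specific crossfade law are assumptions made for the example.

```python
import math

# Hypothetical layer-gain split: speakers are grouped into horizontal layers by
# elevation, and a source is crossfaded (equal-power) between the two layers
# nearest to its elevation. Not the D4 library's algorithm, just the general idea.
def layer_gains(layer_elevations, src_elevation):
    """Return {layer_index: gain} splitting the source across adjacent layers."""
    layers = sorted(enumerate(layer_elevations), key=lambda p: p[1])
    if src_elevation <= layers[0][1]:      # at or below the lowest layer
        return {layers[0][0]: 1.0}
    if src_elevation >= layers[-1][1]:     # at or above the highest layer
        return {layers[-1][0]: 1.0}
    for (i_lo, e_lo), (i_hi, e_hi) in zip(layers, layers[1:]):
        if e_lo <= src_elevation <= e_hi:
            frac = (src_elevation - e_lo) / (e_hi - e_lo)
            return {i_lo: math.cos(frac * math.pi / 2),   # equal-power pair
                    i_hi: math.sin(frac * math.pi / 2)}

# Example: layers at 0, 30, and 60 degrees; source at 15 degrees elevation
print(layer_gains([0.0, 30.0, 60.0], 15.0))   # ~{0: 0.707, 1: 0.707}
```

Within each selected layer, the source would then be amplitude-panned between its nearest speakers by azimuth; how D⁴ actually does this is described in the paper itself.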
- Action-Inspired Approach to Design of Navigation Techniques for Effective Spatial Learning in 3-D Virtual Environments
  Kim, Ji Sun (Virginia Tech, 2013-05-07)
  Navigation in large spaces is essential in any environment (both the real world and the virtual world) because one of the fundamental human needs is to know the surrounding environment and to freely navigate within it. For successful navigation in large-scale virtual environments (VEs), accurate spatial knowledge is required, especially in training and learning application domains. By acquiring accurate spatial knowledge, people can effectively understand the spatial layout and objects in an environment. In addition, spatial knowledge acquired in a large-scale VE can be effectively transferred to real-world activities. Numerous navigation techniques have been proposed to support successful navigation and effective spatial knowledge acquisition in large-scale VEs. Among them, walking-like navigation techniques have been shown to support spatial knowledge acquisition more effectively in large-scale VEs than non-body-based and non-walking-based navigation techniques. However, walking-like navigation techniques in large-scale VEs still have issues, such as whole-body fatigue, the need for a large controlled space, and specialized system configurations, that make them less convenient and consequently less commonly used. Due to these issues, convenient non-walking-like navigation techniques are preferred even though they are less effective for spatial learning. While most research and development efforts are centered around walking-like navigation techniques, a fresh approach is needed to effectively and conveniently support human spatial learning. We propose an action-inspired approach to designing convenient and effective navigation techniques that help people acquire accurate spatial knowledge and improve spatial learning. The action-inspired approach is based on our insights from learning, neuropsychological, and neurophysiological theories. The theories suggest that action and perception are closely related and are core elements of learning, and our observations indicated that specific body parts are not necessarily tied to learning. We identified two types of action-inspired approaches: body-turn-based and action-transferred. The body-turn-based approach keeps the body turn but replaces the cyclic leg movements of the original walking action with a more convenient control, resolving the issues presented by walking-like navigation techniques. The action-transferred approach addresses the design trade-offs between effectiveness and convenience; its core concept is grounded in motor equivalence theory. We provided two navigation techniques, a body-turn-based one and an action-transferred one, and demonstrated the benefits of our approach by evaluating them for spatial knowledge acquisition in several empirical studies. We also developed our own walking-like navigation technique, Sensor-Fusion Walking-in-Place (SF-WIP), as a reference for comparing the effect of the action-transferred technique on spatial knowledge acquisition against that of a walking-like technique.
  We performed empirical user studies, and the results showed that the body-turn-based navigation technique was more effective for survey knowledge acquisition in a large-scale virtual maze than a common wand-joystick-based navigation technique (JS, i.e., a non-body-based and non-walking-like technique). However, no significant difference was found for route knowledge acquisition, whereas the SF-WIP was more effective than the JS for both route and survey knowledge acquisition. The SF-WIP results were comparable to results from other studies using walking-like navigation techniques. The action-transferred navigation technique, named Finger-Walking-in-Place (FWIP), was more effective than the JS for both route and survey knowledge acquisition in the same large-scale, large-extent, and visually impoverished virtual maze. In addition, our empirical studies showed that the SF-WIP and the FWIP are similarly effective for route and survey knowledge acquisition, suggesting that human spatial learning is supported by the transferred action (FWIP) as much as by the original action (SF-WIP). Since there was no significant difference between the FWIP and the SF-WIP, yet the FWIP had a greater effect than the JS on spatial knowledge acquisition, we infer that our action-transferred approach is useful for designing convenient and effective navigation techniques for spatial learning. Some design implications are discussed, suggesting that the action-transferred approach is not limited to navigation techniques and can be used more broadly to design general interaction techniques. In particular, action-transferred design can be especially useful for users with disabilities (who are unable to use a part of the body) or for fatigue and convenience reasons. Related to our theoretical reasoning, we conducted another user study to explore whether the transferred action remains coupled with the perception known to be coupled with the original action. Our results supported a close connection between distance perception and the transferred action, as the literature suggests. Thus, through our empirical studies, this dissertation supports our theoretical observations and our action-inspired approach to designing convenient and effective navigation techniques for spatial learning. Although our conclusion is drawn from empirical studies using only two navigation techniques (body-turn and FWIP), and is therefore not direct evidence at the neural level, it is notable that our action-inspired design approach for effective spatial learning is strongly supported by theories that have been demonstrated by numerous studies over time.
- Aegis Audio Engine: Integrating Real-Time Analog Signal Processing, Pattern Recognition, and a Procedural Soundtrack in a Live Twelve-Performer Spectacle With Crowd Participation
  Bukvic, Ivica Ico; Matthews, Michael (Georgia Institute of Technology, 2015-07)
  In the following paper we present Aegis: a procedural networked soundtrack engine driven by real-time analog signal analysis and pattern recognition. Aegis was originally conceived as part of Drummer Game, a game-performance-spectacle hybrid research project focusing on the depiction of a battle portrayed using terracotta soldiers. In it, each of the twelve cohorts, divided into two armies of six, is led by a drummer-performer who issues commands by accurately drumming precomposed rhythmic patterns on an original Chinese war drum. The ensuing spectacle is envisioned to also accommodate large audience participation, whose input determines the morale of the two armies. An analog signal analyzer utilizes efficient pattern recognition to decipher the desired action and feed it both into the game and the soundtrack engine. The soundtrack engine then uses this action, as well as messages from the gaming simulation, to determine the most appropriate soundtrack parameters while ensuring minimal repetition and seamless transitions between various clips that account for tempo, meter, and key changes. The ensuing simulation offers a comprehensive system for pattern-driven input, holistic situation assessment, and a soundtrack engine that aims to generate a seamless musical experience without having to resort to cross-fades and other simplistic transitions that tend to disrupt a soundtrack’s continuity.
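As a rough, hypothetical illustration of how precomposed rhythmic commands might be recognized from drum onsets (this is not the Aegis analyzer), the sketch below compares normalized inter-onset intervals against a small pattern library and returns the closest match within a tolerance; the pattern names and tolerance value are invented for the example.

```python
# Toy rhythmic-pattern matcher: classify a drummed phrase by comparing its
# normalized inter-onset intervals (IOIs) against a library of command patterns.
def normalize(onsets):
    """Convert onset times (seconds) into IOIs scaled to sum to 1."""
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    total = sum(iois)
    return [x / total for x in iois] if total > 0 else []

def classify(onsets, patterns, tolerance=0.08):
    """Return the name of the closest pattern, or None if nothing matches well."""
    observed = normalize(onsets)
    best, best_err = None, float("inf")
    for name, ref_onsets in patterns.items():
        ref = normalize(ref_onsets)
        if len(ref) != len(observed):
            continue                      # different number of strokes
        err = sum(abs(a - b) for a, b in zip(observed, ref)) / len(ref)
        if err < best_err:
            best, best_err = name, err
    return best if best_err <= tolerance else None

# Hypothetical command patterns given as onset times of each drum stroke
PATTERNS = {"advance": [0.0, 0.5, 1.0, 1.5], "retreat": [0.0, 0.25, 0.5, 1.5]}
print(classify([0.0, 0.52, 1.01, 1.49], PATTERNS))   # -> "advance"
```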
- Cinemacraft: Exploring Fidelity Cues in Collaborative Virtual World Interactions
  Narayanan, Siddharth (Virginia Tech, 2018-02-15)
  The research presented in this thesis concerns the contribution of virtual human (or avatar) fidelity to social interaction in virtual environments (VEs) and how sensory fusion can improve these interactions. VEs present new possibilities for mediated communication by placing people in a shared 3D context. However, there are technical constraints in creating photorealistic and behaviorally realistic avatars capable of mimicking a person's actions or intentions in real time. At the same time, previous research findings indicate that virtual humans can elicit social responses even with minimal cues, suggesting that full realism may not be essential for effective social interaction. This research explores the impact of avatar behavioral realism on people's experience of interacting with virtual humans by varying the interaction fidelity. This is accomplished through the creation of Cinemacraft, a technology-mediated immersive platform for collaborative human-computer interaction in a virtual 3D world, and the incorporation of sensory fusion to improve the fidelity of interactions and real-time collaboration. It investigates interaction techniques within the context of a multiplayer sandbox voxel game engine and proposes how interaction qualities of the shared virtual 3D space can be used to further involve a user while simultaneously offering a stimulating experience. The primary hypothesis of the study is that embodied interactions result in a higher degree of presence and co-presence, and that sensory fusion can improve the quality of presence and co-presence. The argument is developed through research justification, followed by a user study to demonstrate the qualitative results and quantitative metrics. This research comprises an experiment involving 24 participants. Experiment tasks focus on distinct but interrelated questions as higher levels of interaction fidelity are introduced. The outcome of this research is an interactive and accessible sensory fusion platform capable of delivering compelling live collaborative performances and empathetic musical storytelling that uses low-fidelity avatars to successfully sidestep the 'uncanny valley'. This research contributes to the field of immersive collaborative interaction by making the methodology, instruments, and code transparent. Further, it is presented in non-technical terminology, making it accessible to developers aspiring to use interactive 3D media to promote further experimentation and conceptual discussions, as well as to team members with less technological expertise.
- Cinemacraft: Immersive Live Machinima as an Empathetic Musical Storytelling Platform
  Narayanan, Siddharth; Bukvic, Ivica Ico (University of Michigan, 2016)
  In the following paper we present Cinemacraft, a technology-mediated immersive machinima platform for collaborative performance and musical human-computer interaction. To achieve this, Cinemacraft innovates upon a reverse-engineered version of Minecraft, offering a unique collection of live machinima production tools and a newly introduced Kinect HD module that allows for embodied interaction, including posture, arm movement, facial expressions, and lip syncing based on captured voice input. The result is a malleable and accessible sensory fusion platform capable of delivering compelling live immersive and empathetic musical storytelling that, through the use of low-fidelity avatars, also successfully sidesteps the uncanny valley.
- Controlling Scalability in Distributed Virtual Environments
  Singh, Hermanpreet (Virginia Tech, 2013-05-01)
  A Distributed Virtual Environment (DVE) system provides a shared virtual environment where physically separated users can interact and collaborate over a computer network. More simultaneous DVE users can result in intolerable system performance degradation. We address the three major challenges to improving DVE scalability: effective DVE system performance measurement, understanding the factors that control system performance and quality, and determining the consequences of DVE system changes. We propose a DVE Scalability Engineering (DSE) process that addresses these three major challenges in DVE design. DSE allows us to identify, evaluate, and leverage trade-offs among DVE resources, the DVE software, and the virtual environment. DSE has three stages. First, we show how to simulate different numbers and types of users on DVE resources. Collected user study data is used to identify representative user types. Second, we describe a modeling method to discover the major trade-offs between quality of service and DVE resource usage. The method makes use of a new instrumentation tool called ppt, which collects atomic blocks of developer-selected instrumentation at high rates and saves them for offline analysis. Finally, we integrate our load simulation and modeling method into a single process to explore the effects of changes in DVE resources. We use the simple Asteroids DVE as a minimal case study to describe the DSE process. The larger, commercial Torque and Quake III DVE systems provide realistic case studies and demonstrate DSE usage. The Torque case study shows the impact of many users on a DVE system; we apply the DSE process to significantly enhance the Quality of Experience given the available DVE resources. The Quake III case study shows how to identify DVE network needs and evaluate network characteristics when using a mobile phone platform; here we analyze the trade-offs between power consumption and quality of service. The case studies demonstrate the applicability of DSE for discovering and leveraging trade-offs between Quality of Experience and DVE resource usage. Each of the three stages can be used individually to improve DVE performance, and the DSE process as a whole enables fast and effective DVE performance improvement.
- Design of a Wearable Two-Dimensional Joystick as a Muscle-Machine Interface Using Mechanomyographic Signals
  Saha, Deba Pratim (Virginia Tech, 2013-11-12)
  Finger-gesture recognition using glove-like interfaces is very accurate at sensing individual finger positions because such interfaces employ a gamut of sensors. However, for the same reason, they are also costly, cumbersome, and unaesthetic for use in artistic scenarios such as gesture-based music composition platforms like Virginia Tech's Linux Laptop Orchestra. Wearable computing has shown promising results in increasing portability as well as enhancing proprioceptive perception of the wearer's body. In this thesis, we present a proof of concept for designing a novel muscle-machine interface that interprets human thumb motion as a 2-dimensional joystick using mechanomyographic signals. Infrared camera based systems such as Microsoft Digits and ultrasound sensor based systems such as Chirp Microsystems' gesture recognizers are elegant solutions, but they have line-of-sight sensing limitations. Here, we present a low-cost, wearable joystick designed as a wristband that captures muscle sounds, also called mechanomyographic signals. The interface learns from the user's thumb gestures and interprets these motions as one of four kinds of thumb movements. We obtained an overall classification accuracy of 81.5% for all motions and 90.5% on a modified metric. Results obtained from the user study indicate that a mechanomyography-based wearable thumb joystick is a feasible design idea worthy of further study.
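A minimal, hypothetical sketch of the kind of classification pipeline such an interface might use (not the thesis's actual classifier or its reported pipeline): compute a root-mean-square feature per MMG channel over a short window, then assign new windows to the nearest class centroid learned from labeled examples. Function names and the nearest-centroid approach are assumptions made for illustration.

```python
import math

def rms_features(window):
    """window: list of per-channel sample lists -> one RMS value per channel."""
    return [math.sqrt(sum(s * s for s in ch) / len(ch)) for ch in window]

def train_centroids(labeled_windows):
    """labeled_windows: iterable of (label, window) -> {label: mean feature vector}."""
    sums, counts = {}, {}
    for label, window in labeled_windows:
        feats = rms_features(window)
        acc = sums.setdefault(label, [0.0] * len(feats))
        sums[label] = [a + f for a, f in zip(acc, feats)]
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in vec] for lbl, vec in sums.items()}

def predict(window, centroids):
    """Label a new window by the closest centroid (squared Euclidean distance)."""
    feats = rms_features(window)
    return min(centroids,
               key=lambda lbl: sum((a - b) ** 2 for a, b in zip(feats, centroids[lbl])))

# Toy usage with two 2-channel "gestures" (real MMG windows would be much longer)
training = [("up", [[0.9, 1.1], [0.1, 0.2]]), ("down", [[0.1, 0.2], [0.9, 1.1]])]
centroids = train_centroids(training)
print(predict([[1.0, 0.8], [0.2, 0.1]], centroids))   # -> "up"
```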
- Genesis of the Cube: The Design and Deployment of an HDLA-Based Performance and Research Facility
  Lyon, Eric; Caulkins, Terence; Blount, Denis; Bukvic, Ivica Ico; Nichols, Charles; Roan, Michael J.; Upthegrove, John Tanner (MIT, 2017)
  The Cube is a recently built facility that features a high-density loudspeaker array. The Cube is designed to support spatial computer music research and performance, art installations, immersive environments, scientific research, and all manner of experimental formats and projects. We recount here the design process, implementation, and initial projects undertaken in the Cube during the years 2013–2015.
- Introducing a K-12 Mechatronic NIME Kit
  Tsoukalas, Kyriakos D.; Bukvic, Ivica Ico (ACM, 2018-06)
  The following paper introduces a new mechatronic NIME kit that uses new additions to the Pd-L2Ork visual programming environment and its K-12 learning module. It is designed to facilitate the creation of simple mechatronic systems for physical sound production in K-12 and production scenarios. The new set of objects builds on the existing support for the Raspberry Pi platform to also include the use of electric actuators via the microcomputer’s GPIO system. Moreover, we discuss implications of the newly introduced kit in creative and K-12 education scenarios by sharing observations from a series of pilot workshops, with particular focus on using mechatronic NIMEs as a catalyst for the development of programming skills.
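For readers unfamiliar with GPIO-driven actuation, the sketch below illustrates the underlying idea in Python on a Raspberry Pi: briefly energizing a pin that drives an actuator (through a suitable driver circuit) produces a single physical strike. This is only an illustration of the concept, not the Pd-L2Ork objects the paper introduces; the pin number and timings are hypothetical.

```python
import time
import RPi.GPIO as GPIO   # standard Raspberry Pi GPIO library; requires a Pi

ACTUATOR_PIN = 18          # hypothetical wiring; any free GPIO pin works

GPIO.setmode(GPIO.BCM)
GPIO.setup(ACTUATOR_PIN, GPIO.OUT, initial=GPIO.LOW)

def strike(duration_s=0.02):
    """Energize the actuator briefly so it strikes once."""
    GPIO.output(ACTUATOR_PIN, GPIO.HIGH)
    time.sleep(duration_s)
    GPIO.output(ACTUATOR_PIN, GPIO.LOW)

try:
    for _ in range(4):     # four strikes, half a second apart
        strike()
        time.sleep(0.5)
finally:
    GPIO.cleanup()         # release the pin on exit
```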
- Introducing D⁴: An Interactive 3D Audio Rapid Prototyping and Transportable Rendering Environment Using High Density Loudspeaker Arrays
  Bukvic, Ivica Ico (University of Michigan, 2016)
  With a growing number of multimedia venues and research spaces equipped with High Density Loudspeaker Arrays, there is a need for an integrative 3D audio spatialization system that offers both a scalable spatialization algorithm and a battery of supporting rapid prototyping tools for time-based editing, rendering, and interactive low-latency manipulation. The D⁴ library aims to fill this gap by introducing a Layer Based Amplitude Panning algorithm and a collection of rapid prototyping tools for 3D time-based audio spatialization and data sonification. The ensuing ecosystem is designed to be transportable and scalable: it supports a broad array of configurations, from monophonic to as many channels as the hardware can handle. D⁴’s rapid prototyping tools leverage oculocentric strategies for importing and spatially rendering multidimensional data and offer an array of new approaches to time-based spatial parameter manipulation and representation. The following paper presents the unique affordances of D⁴’s rapid prototyping tools.
- Introducing Locus: a NIME for Immersive Exocentric Aural Environments
  Sardana, Disha; Joo, Woohun; Bukvic, Ivica Ico; Earle, Gregory D. (ACM, 2019-06)
  Locus is a NIME designed specifically for an interactive, immersive high density loudspeaker array environment. The system is based on a pointing mechanism for interacting with a sound scene comprising 128 speakers. Users can point anywhere to interact with the system, and because the spatial interaction utilizes motion capture, it does not require a screen; instead, it is controlled entirely via hand gestures using a glove populated with motion-tracking markers. The main purpose of the system is to offer intuitive physical interaction with perimeter-based spatial sound sources. A further goal is to minimize user-worn technology, and thereby enhance freedom of motion, by utilizing environmental sensing devices such as motion capture cameras or infrared sensors. The ensuing creativity-enabling technology is applicable to a broad array of scenarios, from researching the limits of human spatial hearing perception to facilitating learning and artistic performances, including dance. Below we describe our NIME design and implementation, its preliminary assessment, and offer a Unity-based toolkit to facilitate its broader deployment and adoption.
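To give a sense of how a pointing gesture might be resolved to a loudspeaker in a perimeter-based array (an illustrative sketch only, not the Locus implementation), the code below picks the speaker whose direction from the hand most closely matches the motion-captured pointing ray; the toy speaker layout and function names are assumptions.

```python
import math

def _unit(v):
    """Normalize a 3D vector."""
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def pointed_speaker(hand_pos, pointing_dir, speaker_positions):
    """Return the index of the speaker with the smallest angle to the pointing ray."""
    d = _unit(pointing_dir)
    best, best_cos = None, -2.0
    for i, pos in enumerate(speaker_positions):
        to_spk = _unit([p - h for p, h in zip(pos, hand_pos)])
        cos_angle = sum(a * b for a, b in zip(d, to_spk))   # cosine of the angle
        if cos_angle > best_cos:
            best, best_cos = i, cos_angle
    return best

# Toy 4-speaker ring 2 m above the floor; hand at shoulder height pointing along +x
speakers = [[3, 0, 2], [0, 3, 2], [-3, 0, 2], [0, -3, 2]]
print(pointed_speaker([0, 0, 1.5], [1, 0, 0.1], speakers))   # -> 0
```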
- Introduction to Sonification
  Bukvic, Ivica Ico (Routledge, 2019-06-01)
  This chapter provides an introduction to the world of sonification: why it exists and how we may benefit from it. It presents the material in a way that requires minimal prior knowledge and defines the key terms necessary for comprehending the content. A series of common real-world scenarios are presented and revisited throughout the chapter to illustrate key traits and the underutilized potential of sonification. They are followed by some of the common-sense strategies that have emerged from this nascent field of research. Further, the chapter explores the potential of sonification to help us quantify the limits of the human ability to perceive and interpret spatial audio streams, while also sidestepping some of the key limitations of current virtual approaches to spatializing sound.
- L2OrkMote: Reimagining a Low-Cost Wearable Controller for a Live Gesture-Centric Music Performance
  Tsoukalas, Kyriakos D.; Kubalak, Joseph R.; Bukvic, Ivica Ico (ACM, 2018-06)
  Laptop orchestras create music, albeit digitally produced, in a collaborative live performance not unlike that of a traditional orchestra. The recent increase in interest and investment in this style of music creation has paved the way for novel methods for musicians to create and interact with music. To this end, a number of nontraditional instruments have been constructed that enable musicians to control sound production beyond pitch and volume, integrating filtering, musical effects, and more. Wii Remotes (WiiMotes) have seen heavy use in maker communities, including laptop orchestras, for their robust sensor array and low cost. However, the placement of sensors and the form factor of the device itself are suited to video games, not necessarily live music creation. In this paper, the authors present a new controller design, based on the WiiMote hardware platform, that addresses usability in gesture-centric music performance. Based on pilot-study data, the new controller offers unrestricted two-hand gesture production, a smaller footprint, and lower muscle strain.
- Lantern Field: Exploring Participatory Design of a Communal, Spatially Responsive Installation
  Bortz, Brennon; Ishida, Aki; Bukvic, Ivica Ico; Knapp, R. Benjamin (NIME, 2013-05)
  Lantern Field is a communal, site-specific installation that takes shape as a spatially responsive audio-visual field. The public participates in the creation of the installation, resulting in shared ownership of the work between the artists and participants. Furthermore, the installation takes new shape in each realization, both to incorporate the constraints and affordances of each specific site and to address the lessons learned from the previous iteration. This paper describes the development and execution of Lantern Field through its most recent version, with an eye toward the next iteration at the Smithsonian's Freer Gallery during the 2013 National Cherry Blossom Festival in Washington, D.C.
- The Making of L2Ork: The Virginia Tech Linux Laptop Orchestra
  Bukvic, Ivica Ico; Matthews, Michael; Renfro, Maya; Wood, Andrew (2009)
  This poster describes the making of L2Ork, the Virginia Tech Linux Laptop Orchestra. The challenge of the project was to assemble a laptop orchestra using open-source software, self-constructed speakers, and netbooks obtained at minimal cost. The project goals were to design and build a website, build hemispherical speakers, and construct software patches in Pure Data. At the project's conclusion, 16 hemispherical speakers had been built, a website had been developed and made available at l2ork.music.vt.edu, and all software patches were completed.
- New Interfaces for Spatial Musical Expression
  Bukvic, Ivica Ico; Sardana, Disha; Joo, Woohun (ACM, 2020-07)
  With the proliferation of venues equipped with high density loudspeaker arrays, there is a growing interest in developing new interfaces for spatial musical expression (NISMEs). Of particular interest are interfaces that focus on the emancipation of the spatial domain as the primary dimension for musical expression. Here we present the Monet NISME, which leverages a multitouch pressure-sensitive surface and the D⁴ library’s spatial mask, thereby allowing for a unique approach to interactive spatialization. Further, we present a study with 22 participants designed to assess its usefulness and to compare it to Locus, a NISME introduced in 2019 as part of a localization study, which is built on the same design principle of natural gestural interaction with spatial content. Lastly, we briefly discuss the utilization of both NISMEs in two artistic performances and propose a set of guidelines for further exploration in the NISME domain.
- NIMEhub: Toward a Repository for Sharing and Archiving Instrument Designs
  McPherson, Andrew P.; Berdahl, Edgar; Lyons, Michael J.; Jensenius, Alexander Refsum; Bukvic, Ivica Ico; Knudson, Arve (ACM, 2016-07)
  This workshop will explore the potential creation of a community database of digital musical instrument (DMI) designs. In other research communities, reproducible research practices are common, including open-source software, open datasets, established evaluation methods, and community standards for research practice. NIME could benefit from similar practices, both to share ideas among geographically distant researchers and to maintain instrument designs after their first performances. However, the needs of NIME differ from those of other communities on account of NIME's reliance on custom hardware designs and the interdependence of technology and arts practice. This half-day workshop will promote a community discussion of the potential benefits and challenges of a DMI repository and plan concrete steps toward its implementation.
- On Affective States in Computational Cognitive Practice through Visual and Musical Modalities
  Tsoukalas, Kyriakos (Virginia Tech, 2021-06-29)
  Learners' affective states correlate with learning outcomes. A key aspect of instructional design is the choice of modalities by which learners interact with instructional content. The existing literature focuses on quantifying learning outcomes without quantifying learners' affective states during instructional activities. An investigation of how learners feel during instructional activities can inform instructional systems design methodology with a method for quantifying the effects of individually available modalities on learners' affect. The objective of this dissertation is to investigate the relationship between affective states and learning modalities of instructional computing. During an instructional activity, learners' enjoyment, excitement, and motivation are measured before and after a computing activity offered in three distinct modalities. The modalities concentrate on visual and musical computing for the practice of computational thinking. An affective model for the practice of computational thinking through musical expression was developed and validated. This dissertation begins with a literature review of relevant theories on embodied cognition, learning, and affective states. It continues with the design and fabrication of a prototype instructional apparatus and its virtual simulation as a web service, both for the practice of computational thinking through musical expression, and concludes with a study investigating participants' affective states before and after four distinct online computing activities. This dissertation builds on and contributes to the extant literature by validating an affective model for computational thinking practice through self-expression. It also proposes a nomological network for the construct of computational thinking for future exploration of the construct, and develops a method for assessing instructional activities based on predefined levels of skill and knowledge.
- OPEN (at the) SOURCE: Luminescent Forest and Cloud
  Knapp, Benjamin; Zacharias, Kari (Virginia Tech. Moss Arts Center, 2015-04-23)
  Two interactive explorations of light created by four ICAT faculty fellows that transcend discipline boundaries to inspire new kinds of experiences. These ideas of communication and revealing are wonderful emergent properties created through the integral coupling of aesthetic and technological innovation at the nexus of science, engineering, art, and design.
- OPERAcraft: Blurring the Lines between Real and Virtual
  Bukvic, Ivica Ico; Cahoon, Cody; Wyatt, Ariana; Cowden, Tracy; Dredger, Katie (University of Michigan, 2014-09)
  In the following paper we present an innovative approach to coupling gaming, telematics, machinima, and opera to produce a hybrid performance art form and an arts+technology education platform. To achieve this, we leverage a custom mod of the Minecraft sandbox video game and the pd-l2ork real-time digital signal processing environment. The result is a malleable, telematic-ready platform capable of supporting a broad array of artistic forms beyond its original intent, including theatre and cinema, as well as machinima and other experimental genres.