Scholarly Works, Center for Human-Computer Interaction (CHCI)
Research articles, presentations, and other scholarship
Browsing Scholarly Works, Center for Human-Computer Interaction (CHCI) by Title
Now showing 1 - 20 of 31
- 3D Sketching and Flexible Input for Surface Design: A Case Study. Leal, Anamary; Bowman, Douglas A. (Brazilian Computing Society (SBC), 2014). Designing three-dimensional (3D) surfaces is difficult in both the physical world and in 3D modeling software, requiring background knowledge and skill. The goal of this work is to make 3D surface design easier and more accessible through natural and tangible 3D interaction, taking advantage of users' proprioceptive senses to help them understand 3D position, orientation, size, and shape. We hypothesize that flexible input based on fabric may be suitable for 3D surface design, because it can be molded and folded into a desired shape, and because it can be used as a dynamic flexible brush for 3D sketching. Fabric3D, an interactive surface design system based on 3D sketching with flexible input, explored this hypothesis. Through a longitudinal five-part study in which three domain experts used Fabric3D, we gained insight into the use of flexible input and 3D sketching for surface design in various domains.
- 3D Time-Based Aural Data Representation Using D⁴ Library’s Layer Based Amplitude Panning Algorithm. Bukvic, Ivica Ico (Georgia Institute of Technology, 2016-07). The following paper introduces a new Layer Based Amplitude Panning algorithm and the supporting D⁴ library of rapid prototyping tools for 3D time-based data representation using sound. The algorithm is designed to scale and support a broad array of configurations, with particular focus on High Density Loudspeaker Arrays (HDLAs). The supporting rapid prototyping tools are designed to leverage oculocentric strategies for importing, editing, and rendering data, offering an array of innovative approaches to spatial data editing and representation through the use of sound in HDLA scenarios. The ensuing D⁴ ecosystem aims to address the shortcomings of existing approaches to spatial aural representation of data and offers unique opportunities for furthering research in spatial data audification and sonification, as well as in transportable and scalable spatial media creation and production.
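The entry above names the algorithm but not its mechanics. As a rough, hypothetical illustration of what layer-based amplitude panning generally involves (not the D⁴ implementation; the two-ring layout, equal-power crossfades, and all names below are assumptions), a source's gain can be split between the two elevation layers that bracket it and then panned between the nearest speakers within each layer:

```python
import math

def lbap_gains(src_azi, src_ele, layers):
    """Toy layer-based amplitude panning sketch (not the D4 implementation).

    layers: list of (elevation_deg, [speaker_azimuth_deg, ...]) sorted by elevation.
    Returns {(layer_index, speaker_index): gain} with rough energy preservation.
    """
    # 1. Split energy between the two elevation layers that bracket the source.
    elevations = [e for e, _ in layers]
    if src_ele <= elevations[0]:
        layer_weights = {0: 1.0}
    elif src_ele >= elevations[-1]:
        layer_weights = {len(layers) - 1: 1.0}
    else:
        hi = next(i for i, e in enumerate(elevations) if e >= src_ele)
        lo = hi - 1
        t = (src_ele - elevations[lo]) / (elevations[hi] - elevations[lo])
        # Equal-power crossfade between the two layers.
        layer_weights = {lo: math.cos(t * math.pi / 2), hi: math.sin(t * math.pi / 2)}

    # 2. Within each contributing layer, pan between the two nearest speakers by azimuth.
    gains = {}
    for li, w in layer_weights.items():
        azis = layers[li][1]
        diffs = [(abs((a - src_azi + 180) % 360 - 180), si) for si, a in enumerate(azis)]
        (d1, s1), (d2, s2) = sorted(diffs)[:2]
        span = (d1 + d2) if (d1 + d2) > 0 else 1.0
        p = d1 / span  # 0 when the source sits exactly on speaker s1
        gains[(li, s1)] = w * math.cos(p * math.pi / 2)
        gains[(li, s2)] = w * math.sin(p * math.pi / 2)
    return gains

# Example: an 8-speaker ear-level ring plus a 4-speaker upper ring.
layers = [(0, [0, 45, 90, 135, 180, 225, 270, 315]), (45, [0, 90, 180, 270])]
print(lbap_gains(src_azi=30, src_ele=20, layers=layers))
```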
- Aegis Audio Engine: Integrating Real-Time Analog Signal Processing, Pattern Recognition, and a Procedural Soundtrack in a Live Twelve-Performer Spectacle With Crowd Participation. Bukvic, Ivica Ico; Matthews, Michael (Georgia Institute of Technology, 2015-07). In the following paper we present Aegis: a procedural networked soundtrack engine driven by real-time analog signal analysis and pattern recognition. Aegis was originally conceived as part of Drummer Game, a game-performance-spectacle hybrid research project focusing on the depiction of a battle portrayed using terracotta soldiers. In it, each of the twelve cohorts—divided into two armies of six—is led by a drummer-performer who issues commands by accurately drumming precomposed rhythmic patterns on an original Chinese war drum. The ensuing spectacle is envisioned to also accommodate large-scale audience participation whose input determines the morale of the two armies. An analog signal analyzer uses efficient pattern recognition to decipher the desired action and feed it into both the game and the soundtrack engine. The soundtrack engine then uses this action, as well as messages from the gaming simulation, to determine the most appropriate soundtrack parameters while ensuring minimal repetition and seamless transitions between various clips that account for tempo, meter, and key changes. The ensuing simulation offers a comprehensive system for pattern-driven input, holistic situation assessment, and a soundtrack engine that aims to generate a seamless musical experience without resorting to cross-fades and other simplistic transitions that tend to disrupt a soundtrack’s continuity.
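The abstract describes deciphering drummed commands by matching them against precomposed rhythmic patterns. The toy matcher below illustrates one way such matching could work in principle (tempo-normalized inter-onset intervals compared against reference profiles); it is not Aegis's analyzer, and all names and thresholds are assumptions:

```python
def match_rhythm(onsets, patterns, tolerance=0.15):
    """Toy rhythm matcher (illustrative only, not Aegis's analyzer).

    onsets:   detected drum-hit times in seconds.
    patterns: {name: [relative inter-onset intervals summing to 1.0]}.
    Returns the best-matching pattern name, or None if nothing is close enough.
    """
    if len(onsets) < 2:
        return None
    iois = [b - a for a, b in zip(onsets, onsets[1:])]
    total = sum(iois)
    norm = [i / total for i in iois]  # tempo-invariant profile

    best_name, best_err = None, float("inf")
    for name, ref in patterns.items():
        if len(ref) != len(norm):
            continue
        err = sum(abs(r - n) for r, n in zip(ref, norm)) / len(ref)
        if err < best_err:
            best_name, best_err = name, err
    return best_name if best_err <= tolerance else None

patterns = {
    "advance": [0.25, 0.25, 0.25, 0.25],  # four even hits
    "retreat": [0.50, 0.25, 0.25],        # long-short-short
}
print(match_rhythm([0.0, 0.5, 0.75, 1.0], patterns))  # -> "retreat"
```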
- Bare-hand volume cracker for raw volume data analysis. Socha, John J.; Laha, Bireswar; Bowman, Douglas A. (2016-09-28). Analysis of raw volume data generated from different scanning technologies faces a variety of challenges, related to search, pattern recognition, spatial understanding, quantitative estimation, and shape description. In a previous study, we found that the volume cracker (VC) 3D interaction (3DI) technique mitigated some of these problems, but this result was from a tethered glove-based system with users analyzing simulated data. Here, we redesigned the VC by using untethered bare-hand interaction with real volume datasets, with the broader aim of enabling adoption of this technique in research labs. We developed symmetric and asymmetric interfaces for the bare-hand VC (BHVC) through design iterations with a biomechanics scientist. We evaluated our asymmetric BHVC technique against standard 2D and widely used 3DI techniques with experts analyzing scanned beetle datasets. We found that our BHVC design significantly outperformed the other two techniques. This study contributes a practical 3DI design for scientists, documents lessons learned while redesigning for bare-hand trackers, and provides evidence suggesting that 3DI could improve volume data analysis for a variety of visual analysis tasks. Our contribution is in the realm of 3D user interfaces tightly integrated with visualization for improving the effectiveness of visual analysis of volume datasets. Based on our experience, we also provide some insights into hardware-agnostic principles for the design of effective interaction techniques.
- Cinemacraft: Immersive Live Machinima as an Empathetic Musical Storytelling Platform. Narayanan, Siddhart; Bukvic, Ivica Ico (University of Michigan, 2016). In the following paper we present Cinemacraft, a technology-mediated immersive machinima platform for collaborative performance and musical human-computer interaction. To achieve this, Cinemacraft innovates upon a reverse-engineered version of Minecraft, offering a unique collection of live machinima production tools and a newly introduced Kinect HD module that allows for embodied interaction, including posture, arm movement, facial expressions, and lip syncing based on captured voice input. The result is a malleable and accessible sensory fusion platform capable of delivering compelling live, immersive, and empathetic musical storytelling that, through the use of low-fidelity avatars, also successfully sidesteps the uncanny valley.
- Consistency of Sedentary Behavior Patterns among Office Workers with Long-Term Access to Sit-Stand Workstations. Huysmans, Maaike A.; Srinivasan, Divya; Mathiassen, Svend Erik (Oxford University Press, 2019-04-22). Introduction: Sit-stand workstations are a popular intervention to reduce sedentary behavior (SB) in office settings. However, the extent and distribution of SB in office workers long accustomed to using sit-stand workstations as a natural part of their work environment are largely unknown. In the present study, we aimed to describe patterns of SB in office workers with long-term access to sit-stand workstations and to determine the extent to which these patterns vary between days and workers. Methods: SB was objectively monitored using thigh-worn accelerometers for a full week in 24 office workers who had been equipped with a sit-stand workstation for at least 10 months. A comprehensive set of variables describing SB was calculated for each workday and worker, and distributions of these variables between days and workers were examined. Results: On average, workers spent 68% of work time sitting [standard deviation (SD) between workers and between days (within worker): 10.4 and 18.2%]; workers changed from sitting to standing/walking 3.2 times per hour (SDs 0.6 and 1.2 h⁻¹), with bouts of sitting being 14.9 min long (SDs 4.2 and 8.5 min). About one-third of the workers spent >75% of their workday sitting. Between-workers variability was significantly different from zero only for percent work time sitting, while between-days (within-worker) variability was substantial for all SB variables. Conclusions: Office workers accustomed to using sit-stand workstations showed homogeneous patterns of SB when averaged across several days, except for percent work time seated. However, SB differed substantially between days for any individual worker. The finding that many workers were extensively sedentary suggests that access to sit-stand workstations alone may not be a sufficient remedy against SB; additional personalized interventions reinforcing use may be needed. To this end, differences in SB between days should be acknowledged as a potentially valuable source of variation.
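For readers unfamiliar with how such SB variables are derived, the sketch below shows one plausible way to compute percent time sitting, sit-to-upright transitions per hour, and mean sitting-bout length from per-epoch posture labels; the epoch length and variable definitions are assumptions, not the study's exact pipeline:

```python
import numpy as np

def sb_summary(posture, epoch_s=15):
    """Illustrative sedentary-behavior summaries for one workday (assumed
    definitions, not the study's exact pipeline).

    posture: per-epoch labels, 1 = sitting, 0 = standing/walking.
    """
    posture = np.asarray(posture)
    work_hours = len(posture) * epoch_s / 3600.0
    pct_sitting = 100.0 * posture.mean()

    # Sit-to-upright transitions per work hour.
    transitions = int(np.sum((posture[:-1] == 1) & (posture[1:] == 0)))
    transitions_per_h = transitions / work_hours

    # Mean length of uninterrupted sitting bouts, in minutes.
    bouts, run = [], 0
    for p in posture:
        if p == 1:
            run += 1
        elif run:
            bouts.append(run)
            run = 0
    if run:
        bouts.append(run)
    mean_bout_min = np.mean(bouts) * epoch_s / 60.0 if bouts else 0.0

    return pct_sitting, transitions_per_h, mean_bout_min

# Example: a fake 8-block stretch alternating 20 min sitting / 10 min upright.
day = ([1] * 80 + [0] * 40) * 4
print(sb_summary(day))  # -> (66.7% sitting, 2 transitions/h, 20-min bouts)
```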
- The Effects of Incorrect Occlusion Cues on the Understanding of Barehanded Referencing in Collaborative Augmented Reality. Li, Yuan; Hu, Donghan; Wang, Boyuan; Bowman, Douglas A.; Lee, Sang Won (Frontiers, 2021-07-01). In many collaborative tasks, the need for joint attention arises when one of the users wants to guide others to a specific location or target in space. If the collaborators are co-located and the target position is in close range, it is almost instinctual for users to refer to the target location by pointing with their bare hands. While such pointing gestures can be efficient and effective in real life, performance will be impacted if the target is in augmented reality (AR), where depth cues like occlusion may be missing if the pointer’s hand is not tracked and modeled in 3D. In this paper, we present a study utilizing head-worn AR displays to examine the effects of incorrect occlusion cues on spatial target identification in a collaborative barehanded referencing task. We found that participants’ performance in AR was reduced compared to a real-world condition, but also that they developed new strategies to cope with the limitations of AR. Our work also identified mixed results regarding the effect of spatial relationships between users.
- Exploring Effect of Level of Storytelling Richness on Science Learning in Interactive and Immersive Virtual Reality. Zhang, Lei; Bowman, Douglas A. (ACM, 2022-06-21). Immersive and interactive storytelling in virtual reality (VR) is an emerging creative practice that has been thriving in recent years. Educational applications using immersive VR storytelling to explain complex science concepts have very promising pedagogical benefits because, on the one hand, storytelling breaks down the complexity of science concepts by bridging them to people’s everyday experiences and familiar cognitive models, and on the other hand, the learning process is further reinforced through the rich interactivity afforded by the VR experiences. However, it is unclear how different amounts of storytelling in an interactive VR storytelling experience may affect learning outcomes, due to a paucity of literature on educational VR storytelling research. This preliminary study aims to add to the literature through an exploration of variations in the designs of essential storytelling elements in educational VR storytelling experiences and their impact on the learning of complex immunology concepts.
- Exploring Spatial UI Transition Mechanisms with Head-Worn Augmented Reality. Lu, Feiyu; Xu, Yan (ACM, 2022-04-29). Imagine a future in which people comfortably wear augmented reality (AR) displays all day: how do we design interfaces that adapt to the contextual changes as people move around? In current operating systems, the majority of AR content defaults to staying at a fixed location until manually moved by the user. However, this approach puts the burden of user interface (UI) transition solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. We then addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, fully automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results provide valuable lessons about the trade-offs between UI automation levels, controllability, user agency, and the impact of prediction errors.
- Force Push: Exploring Expressive Gesture-to-Force Mappings for Remote Object Manipulation in Virtual Reality. Yu, Run; Bowman, Douglas A. (Frontiers Media, 2018-09-28). This paper presents Force Push, a novel gesture-based interaction technique for remote object manipulation in virtual reality (VR). Inspired by the design of magic powers in popular culture, Force Push uses intuitive hand gestures to drive physics-based movement of the object. Using a novel algorithm that dynamically maps rich features of hand gestures to the properties of the physics simulation, both coarse-grained ballistic movements and fine-grained refinement movements can be achieved seamlessly and naturally. An initial user study of a limited translation task showed that, although its gesture-to-force mapping is inherently harder to control than traditional position-to-position mappings, Force Push is usable even for extremely difficult tasks. Direct position-to-position control outperformed Force Push when the initial distance between the object and the target was close relative to the required accuracy; however, the gesture-based method began to show promising results when they were far away from each other. As for subjective user experience, Force Push was perceived as more natural and fun to use, even though its controllability and accuracy were thought to be inferior to direct control. This paper expands the design space of object manipulation beyond mimicking reality, and provides hints on using magical gestures and physics-based techniques for higher usability and hedonic qualities in user experience.
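As a hypothetical sketch of the general gesture-to-force idea (not the published Force Push algorithm; the nonlinear gain and damping constants are assumptions), a hand gesture's speed and direction can be mapped to a force that is then integrated by a simple physics step:

```python
def force_push_step(obj_pos, obj_vel, hand_speed, hand_dir,
                    dt=1/90, mass=1.0, drag=2.0, k=4.0, power=1.5):
    """One toy physics step of a gesture-to-force mapping (illustrative only,
    not the published Force Push algorithm). The nonlinear gain k * speed**power
    makes slow hand motions produce gentle refinements and fast pushes produce
    large, ballistic impulses."""
    force = tuple(k * (hand_speed ** power) * d for d in hand_dir)
    new_vel = tuple(v + (f / mass - drag * v) * dt for v, f in zip(obj_vel, force))
    new_pos = tuple(p + v * dt for p, v in zip(obj_pos, new_vel))
    return new_pos, new_vel

# Example: a slow refinement push vs. a fast ballistic push along +x.
pos, vel = (0.0, 0.0, 0.0), (0.0, 0.0, 0.0)
for speed in (0.2, 2.0):
    print(speed, *force_push_step(pos, vel, hand_speed=speed, hand_dir=(1.0, 0.0, 0.0)))
```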
- Here’s What I’ve Learned: Asking Questions that Reveal Reward Learning. Habibian, Soheil; Jonnavittula, Ananth; Losey, Dylan P. (Virginia Tech, 2021-07-02). Robots can learn from humans by asking questions. In these questions the robot demonstrates a few different behaviors and asks the human for their favorite. But how should robots choose which questions to ask? Today’s robots optimize for informative questions that actively probe the human’s preferences as efficiently as possible. But while informative questions make sense from the robot’s perspective, human onlookers often find them arbitrary and misleading. For example, consider an assistive robot learning to put away the dishes. Based on your answers to previous questions this robot knows where it should stack each dish; however, the robot is unsure about the right height at which to carry these dishes. A robot optimizing only for informative questions focuses purely on this height: it shows trajectories that carry the plates near or far from the table, regardless of whether or not they stack the dishes correctly. As a result, when we see this question, we mistakenly think that the robot is still confused about where to stack the dishes! In this paper we formalize active preference-based learning from the human’s perspective. We hypothesize that — from the human’s point of view — the robot’s questions reveal what the robot has and has not learned. Our insight enables robots to use questions to make their learning process transparent to the human operator. We develop and test a model that robots can leverage to relate the questions they ask to the information these questions reveal. We then introduce a trade-off between informative and revealing questions that considers both human and robot perspectives: a robot that optimizes for this trade-off actively gathers information from the human while simultaneously keeping the human up to date with what it has learned. We evaluate our approach across simulations, online surveys, and in-person user studies. We find that robots which consider the human’s point of view learn just as quickly as state-of-the-art baselines while also communicating what they have learned to the human operator. Videos of our user studies and results are available here: https://youtu.be/tC6y_jHN7Vw.
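To make the informative-versus-revealing trade-off concrete, the toy selector below scores candidate questions by a weighted sum of expected information gain and a crude "revealing" proxy (how strongly the currently most likely reward separates the shown options). This is an illustrative stand-in, not the paper's formulation; all names and parameters are assumptions:

```python
import numpy as np

def choose_question(belief, thetas, questions, lam=0.5, beta=5.0):
    """Toy question selection balancing informativeness and revealingness.

    belief:    probabilities over candidate reward vectors `thetas` (n,).
    thetas:    (n, d) array of reward hypotheses.
    questions: list of (k, d) arrays; each row is one shown option's features.
    """
    def answer_probs(theta, opts):
        # Boltzmann-rational human: prefers options with higher reward.
        logits = beta * opts @ theta
        e = np.exp(logits - logits.max())
        return e / e.sum()

    best_q, best_score = None, -np.inf
    for qi, opts in enumerate(questions):
        # Expected information gain about theta from the human's answer.
        p_ans = sum(b * answer_probs(th, opts) for b, th in zip(belief, thetas))
        info = 0.0
        for b, th in zip(belief, thetas):
            pa = answer_probs(th, opts)
            info += b * np.sum(pa * np.log((pa + 1e-12) / (p_ans + 1e-12)))
        # "Revealing" proxy: margin of the currently most likely reward's favorite option.
        map_theta = thetas[np.argmax(belief)]
        vals = np.sort(opts @ map_theta)
        reveal = vals[-1] - vals[-2]
        score = lam * info + (1 - lam) * reveal
        if score > best_score:
            best_q, best_score = qi, score
    return best_q

thetas = np.array([[1.0, 0.0], [0.0, 1.0]])
belief = np.array([0.7, 0.3])
questions = [np.array([[1.0, 0.0], [0.0, 1.0]]),   # separates the hypotheses
             np.array([[0.5, 0.5], [0.5, 0.5]])]   # uninformative
print(choose_question(belief, thetas, questions))  # -> 0
```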
- I Know What You Meant: Learning Human Objectives by (Under)estimating Their Choice Set. Jonnavittula, Ananth; Losey, Dylan P. (Virginia Tech, 2021-04-05). Assistive robots have the potential to help people perform everyday tasks. However, these robots first need to learn what it is their user wants them to do. Teaching assistive robots is hard for inexperienced users, elderly users, and users living with physical disabilities, since often these individuals are unable to show the robot their desired behavior. We know that inclusive learners should give human teachers credit for what they cannot demonstrate. But today’s robots do the opposite: they assume every user is capable of providing any demonstration. As a result, these robots learn to mimic the demonstrated behavior, even when that behavior is not what the human really meant! Here we propose a different approach to reward learning: robots that reason about the user’s demonstrations in the context of similar or simpler alternatives. Unlike prior works — which err towards overestimating the human’s capabilities — here we err towards underestimating what the human can input (i.e., their choice set). Our theoretical analysis proves that underestimating the human’s choice set is risk-averse, with better worst-case performance than overestimating. We formalize three properties to generate similar and simpler alternatives. Across simulations and a user study, our resulting algorithm better extrapolates the human’s objective. See the user study here: https://youtu.be/RgbH2YULVRo.
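A minimal sketch of the underlying inference idea: model the human as noisily picking the best option from an assumed choice set, and compare what the robot concludes when that set overestimates versus underestimates the human's capabilities. The Boltzmann choice model, feature values, and discrete hypothesis grid are assumptions, not the paper's algorithm:

```python
import numpy as np

def reward_posterior(demo_features, choice_set_features, thetas, prior, beta=5.0):
    """Belief over reward weights given a demonstration, assuming the human
    (noisily) picked the best option from the given choice set.

    demo_features:       feature vector of the demonstrated behavior (d,).
    choice_set_features: (m, d) options the human is assumed able to provide,
                         including the demonstration itself.
    thetas:              (n, d) candidate reward weight vectors.
    prior:               prior probabilities over thetas (n,).
    """
    posterior = np.array(prior, dtype=float)
    demo_idx = int(np.argmin(np.linalg.norm(choice_set_features - demo_features, axis=1)))
    for i, theta in enumerate(thetas):
        vals = beta * choice_set_features @ theta
        vals -= vals.max()
        probs = np.exp(vals) / np.exp(vals).sum()
        posterior[i] *= probs[demo_idx]  # likelihood of choosing the demo from this set
    return posterior / posterior.sum()

# A user with limited input can only produce slow, imprecise motions.
demo = np.array([0.4, 0.2])                   # what they showed: [speed, precision] features
thetas = np.array([[1.0, 0.0], [0.0, 1.0]])   # "cares about speed" vs. "cares about precision"
prior = np.array([0.5, 0.5])

overestimate = np.array([demo, [1.0, 0.0], [0.0, 1.0]])   # assumes expert-level alternatives
underestimate = np.array([demo, [0.3, 0.3], [0.2, 0.1]])  # similar-or-simpler alternatives
print(reward_posterior(demo, overestimate, thetas, prior))   # stronger, riskier conclusion
print(reward_posterior(demo, underestimate, thetas, prior))  # more cautious conclusion
```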
- Immersive Analytics: Theory and Research Agenda. Skarbez, Richard; Polys, Nicholas F.; Ogle, J. Todd; North, Christopher L.; Bowman, Douglas A. (Frontiers, 2019-09-10). Advances in a variety of computing fields, including "big data," machine learning, visualization, and augmented/mixed/virtual reality, have combined to give rise to the emerging field of immersive analytics, which investigates how these new technologies support analysis and decision making. Thus far, we feel that immersive analytics research has been somewhat ad hoc, possibly owing to the fact that there is not yet an organizing framework for immersive analytics research. In this paper, we address this lack by proposing a definition for immersive analytics and identifying some general research areas and specific research questions that will be important for the development of this field. We also present three case studies that, while all being examples of what we would consider immersive analytics, present different challenges and opportunities. These serve to demonstrate the breadth of immersive analytics and illustrate how the framework proposed in this paper applies to practical research.
- Introducing a K-12 Mechatronic NIME Kit. Tsoukalas, Kyriakos D.; Bukvic, Ivica Ico (ACM, 2018-06). The following paper introduces a new mechatronic NIME kit that uses new additions to the Pd-L2Ork visual programming environment and its K-12 learning module. It is designed to facilitate the creation of simple mechatronic systems for physical sound production in K-12 and production scenarios. The new set of objects builds on the existing support for the Raspberry Pi platform to also include the use of electric actuators via the microcomputer’s GPIO system. Moreover, we discuss implications of the newly introduced kit in creative and K-12 education scenarios by sharing observations from a series of pilot workshops, with particular focus on using mechatronic NIMEs as a catalyst for the development of programming skills.
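For readers curious how an electric actuator is driven from a Raspberry Pi at the lowest level, the snippet below pulses a single solenoid via RPi.GPIO. The real kit exposes this kind of control through Pd-L2Ork objects rather than Python, and the pin number, wiring, and timing here are assumptions:

```python
import time
import RPi.GPIO as GPIO  # only available on a Raspberry Pi

SOLENOID_PIN = 17  # assumed wiring: any free GPIO pin driving a transistor/driver board

def strike(duration_s=0.02):
    """Pulse a solenoid beater once (illustrative GPIO idea only; the kit
    itself wraps actuator control in Pd-L2Ork objects)."""
    GPIO.output(SOLENOID_PIN, GPIO.HIGH)
    time.sleep(duration_s)
    GPIO.output(SOLENOID_PIN, GPIO.LOW)

GPIO.setmode(GPIO.BCM)
GPIO.setup(SOLENOID_PIN, GPIO.OUT, initial=GPIO.LOW)
try:
    for _ in range(4):      # four strikes, one per second
        strike()
        time.sleep(1.0)
finally:
    GPIO.cleanup()
```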
- Introducing D⁴: An Interactive 3D Audio Rapid Prototyping and Transportable Rendering Environment Using High Density Loudspeaker Arrays. Bukvic, Ivica Ico (University of Michigan, 2016). With a growing number of multimedia venues and research spaces equipped with High Density Loudspeaker Arrays, there is a need for an integrative 3D audio spatialization system that offers both a scalable spatialization algorithm and a battery of supporting rapid prototyping tools for time-based editing, rendering, and interactive low-latency manipulation. The D⁴ library aims to fill this newfound whitespace by introducing a Layer Based Amplitude Panning algorithm and a collection of rapid prototyping tools for 3D time-based audio spatialization and data sonification. The ensuing ecosystem is designed to be transportable and scalable, supporting a broad array of configurations, from monophonic to as many loudspeakers as the hardware can handle. D⁴’s rapid prototyping tools leverage oculocentric strategies for importing and spatially rendering multidimensional data and offer an array of new approaches to time-based spatial parameter manipulation and representation. The following paper presents the unique affordances of D⁴’s rapid prototyping tools.
- Introducing Locus: a NIME for Immersive Exocentric Aural Environments. Sardana, Disha; Joo, Woohun; Bukvic, Ivica Ico; Earle, Gregory D. (ACM, 2019-06). Locus is a NIME designed specifically for an interactive, immersive high density loudspeaker array environment. The system is based on a pointing mechanism to interact with a sound scene comprising 128 speakers. Users can point anywhere to interact with the system, and the spatial interaction utilizes motion capture, so it does not require a screen. Instead, it is completely controlled via hand gestures using a glove that is populated with motion-tracking markers. The main purpose of this system is to offer intuitive physical interaction with the perimeter-based spatial sound sources. Further, its goal is to minimize user-worn technology and thereby enhance freedom of motion by utilizing environmental sensing devices, such as motion capture cameras or infrared sensors. The ensuing creativity-enabling technology is applicable to a broad array of possible scenarios, from researching the limits of human spatial hearing perception to facilitating learning and artistic performances, including dance. Below we describe our NIME design and implementation, its preliminary assessment, and offer a Unity-based toolkit to facilitate its broader deployment and adoption.
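One way to realize the pointing mechanism the abstract describes is to cast a ray from the tracked hand, intersect it with the speaker perimeter, and select the nearest loudspeaker. The sketch below illustrates that idea only; the spherical layout, radius, and function names are assumptions, not the Locus implementation:

```python
import numpy as np

def pointed_speaker(hand_pos, hand_dir, speakers, radius=5.0):
    """Map a pointing gesture to a loudspeaker on a spherical perimeter
    (illustrative ray-cast sketch, not the Locus implementation).

    hand_pos, hand_dir: 3D hand position and pointing direction.
    speakers:           (n, 3) loudspeaker positions on the perimeter.
    """
    p = np.asarray(hand_pos, float)
    d = np.asarray(hand_dir, float)
    d /= np.linalg.norm(d)

    # Ray-sphere intersection: |p + t*d|^2 = radius^2, take the forward hit.
    b = 2 * (p @ d)
    c = p @ p - radius ** 2
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b + np.sqrt(disc)) / 2
    hit = p + t * d

    dists = np.linalg.norm(np.asarray(speakers, float) - hit, axis=1)
    return int(np.argmin(dists))

# Example: a hand near the listener pointing up and forward.
speakers = [[5, 0, 0], [0, 5, 0], [0, 0, 5], [-5, 0, 0]]
print(pointed_speaker([0.3, 0.0, 1.2], [0.7, 0.0, 0.7], speakers))  # -> overhead speaker
```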
- L2OrkMote: Reimagining a Low-Cost Wearable Controller for a Live Gesture-Centric Music Performance. Tsoukalas, Kyriakos D.; Kubalak, Joseph R.; Bukvic, Ivica Ico (ACM, 2018-06). Laptop orchestras create music that, although digitally produced, is performed live and collaboratively, not unlike in a traditional orchestra. The recent increase in interest and investment in this style of music creation has paved the way for novel methods for musicians to create and interact with music. To this end, a number of nontraditional instruments have been constructed that enable musicians to control sound production beyond pitch and volume, integrating filtering, musical effects, etc. Wii Remotes (WiiMotes) have seen heavy use in maker communities, including laptop orchestras, for their robust sensor array and low cost. However, the placement of sensors and the form factor of the device itself are suited for video games, not necessarily live music creation. In this paper, the authors present a new controller design, based on the WiiMote hardware platform, that addresses usability in gesture-centric music performance. Based on the pilot-study data, the new controller offers unrestricted two-hand gesture production, a smaller footprint, and lower muscle strain.
- Learning to Share Autonomy Across Repeated Interaction. Jonnavittula, Ananth; Losey, Dylan P. (Virginia Tech, 2021-07-20). Wheelchair-mounted robotic arms (and other assistive robots) should help their users perform everyday tasks. One way robots can provide this assistance is shared autonomy. Within shared autonomy, both the human and robot maintain control over the robot’s motion: as the robot becomes confident it understands what the human wants, it increasingly intervenes to automate the task. But how does the robot know what tasks the human may want to perform in the first place? Today’s shared autonomy approaches often rely on prior knowledge: for example, the robot must know the set of possible human goals a priori. In the long-term, however, this prior knowledge will inevitably break down — sooner or later the human will reach for a goal that the robot did not expect. In this paper we propose a learning approach to shared autonomy that takes advantage of repeated interactions. Learning to assist humans would be impossible if they performed completely different tasks at every interaction: but our insight is that users living with physical disabilities repeat important tasks on a daily basis (e.g., opening the fridge, making coffee, and having dinner). We introduce an algorithm that exploits these repeated interactions to recognize the human’s task, replicate similar demonstrations, and return control when unsure. As the human repeatedly works with this robot, our approach continually learns to assist tasks that were never specified beforehand: these tasks include both discrete goals (e.g., reaching a cup) and continuous skills (e.g., opening a drawer). Across simulations and an in-person user study, we demonstrate that robots leveraging our approach match existing shared autonomy methods for known goals, and outperform imitation learning baselines on new tasks. See videos here: https://youtu.be/NazeLVbQ2og.
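The abstract's core loop (infer the task, assist when confident, return control when unsure) can be illustrated with a generic confidence-based blending scheme. This is a minimal sketch of that broad idea, not the paper's algorithm; the cosine-similarity likelihood and linear blend are assumptions:

```python
import numpy as np

def shared_autonomy_step(robot_pos, human_cmd, goals, belief, beta=3.0):
    """One step of a generic confidence-based shared-autonomy scheme
    (illustrative sketch only, not the paper's method).

    robot_pos: current end-effector position (d,).
    human_cmd: human's commanded velocity (d,).
    goals:     (n, d) candidate goal positions, e.g. learned from past interactions.
    belief:    current probabilities over goals (n,).
    Returns (blended command, updated belief).
    """
    robot_pos = np.asarray(robot_pos, float)
    human_cmd = np.asarray(human_cmd, float)
    goals = np.asarray(goals, float)

    # Belief update: goals the human's command points toward become more likely.
    new_belief = np.array(belief, float)
    for i, g in enumerate(goals):
        to_goal = g - robot_pos
        cos = (to_goal @ human_cmd) / (np.linalg.norm(to_goal) * np.linalg.norm(human_cmd) + 1e-9)
        new_belief[i] *= np.exp(beta * cos)
    new_belief /= new_belief.sum()

    # Confidence = probability of the most likely goal; the robot heads there.
    conf = new_belief.max()
    robot_cmd = goals[new_belief.argmax()] - robot_pos
    robot_cmd /= np.linalg.norm(robot_cmd) + 1e-9

    blended = conf * robot_cmd + (1 - conf) * human_cmd  # return control when unsure
    return blended, new_belief

goals = [[1.0, 0.0], [0.0, 1.0]]  # e.g., fridge handle vs. coffee mug
cmd, belief = shared_autonomy_step([0.0, 0.0], [0.8, 0.1], goals, [0.5, 0.5])
print(cmd, belief)
```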
- Learning When Less is More: "Bootstrapping" Undergraduate Programmers as Coordination Designers. Lin, Strong; Tatar, Deborah; Harrison, Steve; Roschelle, Jeremy; Patton, Charles (Computer Professionals for Social Responsibility, 2006). In this paper, we describe an undergraduate computer science class in the United States that we started with the intention of creating a participatory design experience for building distributed mobile collaborative technologies for education. The case highlights the ways in which programmer understanding of an innovative new technology can depend on understanding the context of use. The students were to use Tuple-spaces, a language for coordination. However, it soon became clear that while the coordination of machines may be thought of as a computer science problem, the students could not understand the technical system without richer models of how, why, or when coordination is desirable. We were in the ironic position of teaching human coordination at the same time as describing the technical properties of a system to support it. To "bootstrap" the learning process, we asked the students to draw on their own coordination expertise by implementing familiar coordinative games. We propose games as an addition to the PD toolkit when implementers need help in stepping outside their everyday mindset.
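Since the coordination language the abstract names is a tuple space, a minimal Linda-style sketch may help readers picture what the students were programming against: processes coordinate by writing tuples and by blocking, pattern-matched takes. This is a teaching toy, not the system used in the class:

```python
import threading

class TupleSpace:
    """A minimal Linda-style tuple space (teaching sketch only)."""

    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def out(self, *tup):
        """Write a tuple into the space and wake any waiting readers."""
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def _match(self, pattern, tup):
        # None acts as a wildcard field in the pattern.
        return len(pattern) == len(tup) and all(
            p is None or p == t for p, t in zip(pattern, tup))

    def take(self, *pattern):
        """Blocking, destructive read (Linda's 'in'): wait for a matching tuple."""
        with self._cond:
            while True:
                for tup in self._tuples:
                    if self._match(pattern, tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

# Example: coordinating turns in a simple game.
space = TupleSpace()
space.out("turn", "player1")
print(space.take("turn", None))  # -> ("turn", "player1")
```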
- Move the Object or Move Myself? Walking vs. Manipulation for the Examination of 3D Scientific Data. Lages, Wallace S.; Bowman, Douglas A. (Frontiers, 2018-07-10). Physical walking is consistently considered a natural and intuitive way to acquire viewpoints in a virtual environment. However, research findings also show that walking requires cognitive resources. To understand how this tradeoff affects interaction design for virtual environments, we evaluated the performance of 32 participants, ranging from 18 to 44 years old, in a demanding visual and spatial task. Participants wearing a virtual reality (VR) headset counted features in a complex 3D structure while walking or while using a 3D interaction technique for manipulation. Our results indicate that the relative performance of the interfaces depends on the spatial ability and game experience of the participants. Participants with previous game experience but low spatial ability performed better using the manipulation technique. However, walking enabled higher performance for participants with low spatial ability and without significant game experience. These findings suggest that the optimal design choices for demanding visual tasks in VR should consider both the controller experience and the spatial ability of the target users.