Institute for Creativity, Arts, and Technology (ICAT)
The Institute for Creativity, Arts, and Technology is uniquely partnered with the Center for the Arts at Virginia Tech. By forging a pathway between trans-disciplinary research and art, educational innovation, and scientific and commercial discovery, the institute fosters the creative process and creates new possibilities for exploration and expression through learning, discovery, and engagement.
Browsing Institute for Creativity, Arts, and Technology (ICAT) by Content Type "Article - Refereed"
Now showing 1 - 20 of 22
- 3D Sketching and Flexible Input for Surface Design: A Case Study. Leal, Anamary; Bowman, Douglas A. (Brazilian Computing Society (SBC), 2014). Designing three-dimensional (3D) surfaces is difficult in both the physical world and in 3D modeling software, requiring background knowledge and skill. The goal of this work is to make 3D surface design easier and more accessible through natural and tangible 3D interaction, taking advantage of users' proprioceptive senses to help them understand 3D position, orientation, size, and shape. We hypothesize that flexible input based on fabric may be suitable for 3D surface design, because it can be molded and folded into a desired shape, and because it can be used as a dynamic flexible brush for 3D sketching. Fabric3D, an interactive surface design system based on 3D sketching with flexible input, explored this hypothesis. Through a longitudinal five-part study in which three domain experts used Fabric3D, we gained insight into the use of flexible input and 3D sketching for surface design in various domains.
- Bare-hand volume cracker for raw volume data analysis. Socha, John J.; Laha, Bireswar; Bowman, Douglas A. (2016-09-28). Analysis of raw volume data generated from different scanning technologies faces a variety of challenges, related to search, pattern recognition, spatial understanding, quantitative estimation, and shape description. In a previous study, we found that the volume cracker (VC) 3D interaction (3DI) technique mitigated some of these problems, but this result was from a tethered glove-based system with users analyzing simulated data. Here, we redesigned the VC by using untethered bare-hand interaction with real volume datasets, with a broader aim of adoption of this technique in research labs. We developed symmetric and asymmetric interfaces for the bare-hand VC (BHVC) through design iterations with a biomechanics scientist. We evaluated our asymmetric BHVC technique against standard 2D and widely used 3DI techniques with experts analyzing scanned beetle datasets. We found that our BHVC design significantly outperformed the other two techniques. This study contributes a practical 3DI design for scientists, documents lessons learned while redesigning for bare-hand trackers, and provides evidence suggesting that 3DI could improve volume data analysis for a variety of visual analysis tasks. Our contribution is in the realm of 3D user interfaces tightly integrated with visualization for improving the effectiveness of visual analysis of volume datasets. Based on our experience, we also provide some insights into hardware-agnostic principles for design of effective interaction techniques.
- Building STEM Career Interest Through Curriculum Treatments. Peterson, Bryanne (Springer Open, 2020). Watson and McMahon's (2005) work identified a need for research to examine the what and how of children's career development learning; this research is a start to answering that call, specifically focusing on STEM career interest as a precursor to development due to the current national need to increase the STEM pipeline. This study examined the impacts of design-based learning and scientific inquiry curriculum treatments with embedded career content on the career interest of fifth-grade students as compared to traditional classroom methods. Findings show an upward trend in interest with the use of these curriculum treatments, though the change is not significant in most career areas, likely due to the short time period of the unit and/or small n.
- A child-robot musical theater afterschool program for promoting STEAM education: A case study and guidelines. Dong, Jia; Choi, Koeun; Yu, Shuqi; Lee, Yeaji; Kim, Jisun; Vajir, Devanshu; Haines, Chelsea; Newbill, Phyllis; Wyatt, Ariana; Upthegrove, Tanner; Jeon, Myounghoon (Taylor & Francis, 2023-03-16). With the advancements of machine learning and AI technologies, robots have become more widely used in everyday life and in education. The present study introduces a 12-week child-robot theater afterschool program designed to promote science, technology, engineering, and mathematics (STEM) education with art elements (STEAM) for elementary students using social robots. Four modules were designed to introduce robot mechanisms as well as arts: Acting (anthropomorphism), Dance (robot movements), Music and Sounds (music composition), and Drawing (robot art). These modules provided children with basic knowledge about robotics and STEM and guided children to create a live robot theater play. A total of 16 students participated in the program, and 11 of them were involved in completing questionnaires and interviews regarding their perceptions towards robots, STEAM, and the afterschool program. Four afterschool program teachers participated in interviews, reflecting their perceptions of the program and observations of children's experiences during the program. Our findings suggest that the present program effectively maintained children's engagement and improved their interest in STEAM by connecting social robots and theater production. We conclude with design guidelines and recommendations for future research and programs.
- Consistency of Sedentary Behavior Patterns among Office Workers with Long-Term Access to Sit-Stand Workstations. Huysmans, Maaike A.; Srinivasan, Divya; Mathiassen, Svend Erik (Oxford University Press, 2019-04-22). Introduction: Sit-stand workstations are a popular intervention to reduce sedentary behavior (SB) in office settings. However, the extent and distribution of SB in office workers long-term accustomed to using sit-stand workstations as a natural part of their work environment are largely unknown. In the present study, we aimed to describe patterns of SB in office workers with long-term access to sit-stand workstations and to determine the extent to which these patterns vary between days and workers. Methods: SB was objectively monitored using thigh-worn accelerometers for a full week in 24 office workers who had been equipped with a sit-stand workstation for at least 10 months. A comprehensive set of variables describing SB was calculated for each workday and worker, and distributions of these variables between days and workers were examined. Results: On average, workers spent 68% of work time sitting [standard deviation (SD) between workers and between days (within worker): 10.4 and 18.2%]; workers changed from sitting to standing/walking 3.2 times per hour (SDs 0.6 and 1.2 h−1), with bouts of sitting being 14.9 min long (SDs 4.2 and 8.5 min). About one-third of the workers spent >75% of their workday sitting. Between-workers variability was significantly different from zero only for percent work time sitting, while between-days (within-worker) variability was substantial for all SB variables. Conclusions: Office workers accustomed to using sit-stand workstations showed homogeneous patterns of SB when averaged across several days, except for percent work time seated. However, SB differed substantially between days for any individual worker. The finding that many workers were extensively sedentary suggests that access to sit-stand workstations alone may not be a sufficient remedy against SB; additional personalized interventions reinforcing use may be needed. To this end, differences in SB between days should be acknowledged as a potentially valuable source of variation. (A minimal computational sketch of these sitting-pattern variables appears after this list.)
- The Effects of Incorrect Occlusion Cues on the Understanding of Barehanded Referencing in Collaborative Augmented Reality. Li, Yuan; Hu, Donghan; Wang, Boyuan; Bowman, Douglas A.; Lee, Sang Won (Frontiers, 2021-07-01). In many collaborative tasks, the need for joint attention arises when one of the users wants to guide others to a specific location or target in space. If the collaborators are co-located and the target position is in close range, it is almost instinctual for users to refer to the target location by pointing with their bare hands. While such pointing gestures can be efficient and effective in real life, performance will be impacted if the target is in augmented reality (AR), where depth cues like occlusion may be missing if the pointer's hand is not tracked and modeled in 3D. In this paper, we present a study utilizing head-worn AR displays to examine the effects of incorrect occlusion cues on spatial target identification in a collaborative barehanded referencing task. We found that participants' performance in AR was reduced compared to a real-world condition, but also that they developed new strategies to cope with the limitations of AR. Our work also identified mixed results regarding the effect of spatial relationships between users.
- Exploring Effect of Level of Storytelling Richness on Science Learning in Interactive and Immersive Virtual Reality. Zhang, Lei; Bowman, Douglas A. (ACM, 2022-06-21). Immersive and interactive storytelling in virtual reality (VR) is an emerging creative practice that has been thriving in recent years. Educational applications using immersive VR storytelling to explain complex science concepts have very promising pedagogical benefits because on the one hand, storytelling breaks down the complexity of science concepts by bridging them to people's everyday experiences and familiar cognitive models, and on the other hand, the learning process is further reinforced through rich interactivity afforded by the VR experiences. However, it is unclear how different amounts of storytelling in an interactive VR storytelling experience may affect learning outcomes due to a paucity of literature on educational VR storytelling research. This preliminary study aims to add to the literature through an exploration of variations in the designs of essential storytelling elements in educational VR storytelling experiences and their impact on the learning of complex immunology concepts.
- Exploring Spatial UI Transition Mechanisms with Head-Worn Augmented Reality. Lu, Feiyu; Xu, Yan (ACM, 2022-04-29). Imagine a future in which people comfortably wear augmented reality (AR) displays all day: how do we design interfaces that adapt to the contextual changes as people move around? In current operating systems, the majority of AR content defaults to staying at a fixed location until being manually moved by the users. However, this approach puts the burden of user interface (UI) transition solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. Then we addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, fully-automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results provide valuable lessons about the trade-offs between UI automation levels, controllability, user agency, and the impact of prediction errors.
- Force Push: Exploring Expressive Gesture-to-Force Mappings for Remote Object Manipulation in Virtual Reality. Yu, Run; Bowman, Douglas A. (Frontiers Media, 2018-09-28). This paper presents Force Push, a novel gesture-based interaction technique for remote object manipulation in virtual reality (VR). Inspired by the design of magic powers in popular culture, Force Push uses intuitive hand gestures to drive physics-based movement of the object. Using a novel algorithm that dynamically maps rich features of hand gestures to the properties of the physics simulation, both coarse-grained ballistic movements and fine-grained refinement movements can be achieved seamlessly and naturally. An initial user study of a limited translation task showed that, although its gesture-to-force mapping is inherently harder to control than traditional position-to-position mappings, Force Push is usable even for extremely difficult tasks. Direct position-to-position control outperformed Force Push when the initial distance between the object and the target was close relative to the required accuracy; however, the gesture-based method began to show promising results when they were far away from each other. As for subjective user experience, Force Push was perceived as more natural and fun to use, even though its controllability and accuracy were thought to be inferior to direct control. This paper expands the design space of object manipulation beyond mimicking reality, and provides hints on using magical gestures and physics-based techniques for higher usability and hedonic qualities in user experience. (An illustrative gesture-to-force sketch appears after this list.)
- Immersive Analytics: Theory and Research Agenda. Skarbez, Richard; Polys, Nicholas F.; Ogle, J. Todd; North, Christopher L.; Bowman, Douglas A. (Frontiers, 2019-09-10). Advances in a variety of computing fields, including "big data," machine learning, visualization, and augmented/mixed/virtual reality, have combined to give rise to the emerging field of immersive analytics, which investigates how these new technologies support analysis and decision making. Thus far, we feel that immersive analytics research has been somewhat ad hoc, possibly owing to the fact that there is not yet an organizing framework for immersive analytics research. In this paper, we address this lack by proposing a definition for immersive analytics and identifying some general research areas and specific research questions that will be important for the development of this field. We also present three case studies that, while all being examples of what we would consider immersive analytics, present different challenges and opportunities. These serve to demonstrate the breadth of immersive analytics and illustrate how the framework proposed in this paper applies to practical research.
- Inclusion of Clinicians in the Development and Evaluation of Clinical Artificial Intelligence Tools: A Systematic Literature Review. Jesso, Stephanie Tulk; Kelliher, Aisling; Sanghavi, Harsh; Martin, Thomas; Parker, Sarah H. (Frontiers, 2022-04-07). The application of machine learning (ML) and artificial intelligence (AI) in healthcare domains has received much attention in recent years, yet significant questions remain about how these new tools integrate into frontline user workflow, and how their design will impact implementation. Lack of acceptance among clinicians is a major barrier to the translation of healthcare innovations into clinical practice. In this systematic review, we examine when and how clinicians are consulted about their needs and desires for clinical AI tools. Forty-five articles met criteria for inclusion, of which 24 were considered design studies. The design studies used a variety of methods to solicit and gather user feedback, including interviews, surveys, and user evaluations. Our findings show that tool designers consult clinicians at various but inconsistent points during the design process, and most typically at later stages in the design cycle (82%, 19/24 design studies). We also observed a smaller number of studies that adopted a human-centered approach and solicited clinician input throughout the design process (22%, 5/24). A third (15/45) of all studies reported on clinician trust in clinical AI algorithms and tools. The surveyed articles did not universally report validation against the "gold standard" of clinical expertise or provide detailed descriptions of the algorithms or computational methods used in their work. To realize the full potential of AI tools within healthcare settings, our review suggests there are opportunities to more thoroughly integrate frontline users' needs and feedback in the design process.
- Move the Object or Move Myself? Walking vs. Manipulation for the Examination of 3D Scientific Data. Lages, Wallace S.; Bowman, Douglas A. (Frontiers, 2018-07-10). Physical walking is consistently considered a natural and intuitive way to acquire viewpoints in a virtual environment. However, research findings also show that walking requires cognitive resources. To understand how this tradeoff affects interaction design for virtual environments, we evaluated the performance of 32 participants, ranging from 18 to 44 years old, in a demanding visual and spatial task. Participants wearing a virtual reality (VR) headset counted features in a complex 3D structure while walking or while using a 3D interaction technique for manipulation. Our results indicate that the relative performance of the interfaces depends on the spatial ability and game experience of the participants. Participants with previous game experience but low spatial ability performed better using the manipulation technique. However, walking enabled higher performance for participants with low spatial ability and without significant game experience. These findings suggest that the optimal design choices for demanding visual tasks in VR should consider both controller experience and the spatial ability of the target users.
- Photo Steward: A Deliberative Collective Intelligence Workflow for Validating Historical Archives. Mohanty, Vikram; Luther, Kurt (ACM, 2023-11-06). Historical photographs of people generate significant cultural and economic value, but correctly identifying the subjects of photos can be a difficult task, requiring careful attention to detail while synthesizing large amounts of data from diverse sources. When photos are misidentified, the negative consequences can include financial losses and inaccuracies in the historical record, and even the spread of mis- and disinformation. To address this challenge, we introduce Photo Steward, an information stewardship architecture that leverages a deliberative workflow for validating historical photo IDs. We explored Photo Steward in the context of Civil War Photo Sleuth (CWPS), a popular online community dedicated to identifying photos from the American Civil War era (1861–65) using facial recognition and crowdsourcing. While the platform has been successful in identifying hundreds of unknown photographs, there have been concerns about unverified identifications and misidentifications. Our exploratory evaluation of Photo Steward on CWPS showed that its validation workflow encouraged users to deliberate while making photo ID decisions. Further, its stewardship visualizations helped users to assess photo ID information accurately, while fostering diverse forms of stigmergic collaboration.
- Read-Agree-Predict: A Crowdsourced Approach to Discovering Relevant Primary Sources for Historians. Wang, Nai-Ching; Hicks, David; Quigley, Paul; Luther, Kurt (Human Computation Institute, 2019). Historians spend significant time looking for relevant, high-quality primary sources in digitized archives and through web searches. One reason this task is time-consuming is that historians' research interests are often highly abstract and specialized. These topics are unlikely to be manually indexed and are difficult to identify with automated text analysis techniques. In this article, we investigate the potential of a new crowdsourcing model in which the historian delegates to a novice crowd the task of labeling the relevance of primary sources with respect to her unique research interests. The model employs a novel crowd workflow, Read-Agree-Predict (RAP), that allows novice crowd workers to label relevance as well as expert historians do. As a useful byproduct, RAP also reveals and prioritizes crowd confusions as targeted learning opportunities. We demonstrate the value of our model with two experiments with paid crowd workers (n=170), with the future goal of extending our work to classroom students and public history interventions. We also discuss broader implications for historical research and education.
- Reimagining medical workspaces through on-site observations and bodystorming. Ishida, Aki; Martin, Thomas; Gracanin, Denis; Franusich, David; Buck, Carl; Parker, Sarah H.; Knapp, R. Benjamin; Haley, Vince; Zagarese, Vivian; Tasooji, Reza (2023-01). Clinicians in acute care hospitals experience highly stressful situations daily. They work long, variable hours, complete complex technical tasks, and must also be emotionally engaged with patients and families to meet the caring demands of this profession, which can lead to burnout. In response to these challenges, a multi-disciplinary team from Virginia Tech collaborated with Steelcase to study the impact of medical workspaces on the clinician experience and how those workspaces could be improved to reduce some of the sources of burnout. The team sought to identify conditions that could either aid or hinder clinician workflow and affect burnout rate, and then, based on interviews and in-situ ethnographic studies, generated design concepts for nurse stations, both centralized and mobile. Using digital and physical full-scale prototypes, we enacted clinical care scenarios to seek feedback and reflect on the design.
- Relative Effects of Real-World and Virtual-World Latency on an Augmented Reality Training Task: An AR Simulation Experiment. Nabiyouni, Mahdi; Scirbo, Siroberto; Bowman, Douglas A.; Höllerer, Tobias (Frontiers Media, 2017-01-30). In augmented reality (AR), virtual objects and information are overlaid onto the user's view of the physical world and can appear to become part of the real world. Accurate registration of virtual objects is a key requirement for an effective and natural AR system, but misregistration can break the illusion of virtual objects being part of the real world and disrupt immersion. End-to-end system latency severely impacts the quality of AR registration. In this research, we present a controlled study that aims at a deeper understanding of the effects of latency on virtual and real-world imagery and its influences on task performance in an AR training task. We utilize an AR simulation approach, in which an outdoor AR training task is simulated in a high-fidelity virtual reality (VR) system. The real and augmented portions of the AR training scenarios are simulated in VR, affording us detailed control over a variety of immersion parameters and the ability to explore the effects of different types of simulated latency. We utilized a representative task inspired by outdoor AR military training systems to compare various AR system configurations, including optical see-through and video see-through setups with both matched and unmatched levels of real and virtual object latency. Our findings indicate that users are able to perform significantly better when virtual and real-world latencies are matched (as in the case of simulated video see-through AR with perfect augmentation-to-real-world registration). Unequal levels of latency led to reduced performance, even when overall latency levels were lower compared to the matched case. The relative results hold up with increased overall latency. (An illustrative latency-simulation sketch appears after this list.)
- Reluctant to Share: How Third Person Perceptions of Fake News Discourage News Readers From Sharing "Real News" on Social Media. Yang, Fan; Horning, Michael A. (Sage, 2020). Rampant fake news on social media has drawn significant attention. Yet much remains unknown as to how such imbalanced evaluations of self versus others could shape social media users' perceptions and their subsequent attitudes and behavioral intentions regarding social media news. An online survey (N = 335) was conducted to examine the third person effect (TPE) in fake news on social media; it suggested that users perceived a greater influence of fake news on others than on themselves. However, although users evaluated fake news as socially undesirable, they were still unsupportive of government censorship as a remedy. In addition, the perceived prevalence of fake news led audiences to report significantly less willingness to share all news on social media, either online or offline. (A minimal third-person-effect calculation sketch appears after this list.)
- A silent spring, or a new cacophony? Invasive plants as maestros of modern soundscapes. Barney, Jacob N.; O'Malley, Grace; Ripa, Gabrielle N.; Drake, Joseph; Franusich, David; Mims, Meryl C. (Wiley, 2024-04-01). Sound plays a key role in ecosystem function and is a defining part of how humans experience nature. In the seminal book Silent Spring (Carson 1962), Rachel Carson warned of the ecological and environmental harm of pesticide usage by envisioning a future without birdsong. Soundscapes, or the acoustic patterns of a landscape through space and time, encompass both biological and physical processes (Pijanowski et al. 2011). Yet, they are often an underappreciated element of the natural world and the ways in which it is perceived. Scientists are only beginning to quantify changes to soundscapes, largely in response to anthropogenic sounds, but soundscape alteration is likely linked to many dimensions of global change. For example, invasive non-native species (hereafter, invasive species) are near-ubiquitous members of ecosystems globally and threaten both natural and managed ecosystems at great expense. Their impacts to soundscapes may be an important, yet largely unknown, threat to ecosystems and the human and economic systems they support.
- TAGGAR: General-Purpose Task Guidance from Natural Language in Augmented Reality using Vision-Language Models. Stover, Daniel; Bowman, Douglas A. (ACM, 2024-10-07). Augmented reality (AR) task guidance systems provide assistance for procedural tasks by rendering virtual guidance visuals within the real-world environment. Current AR task guidance systems are limited in that they require AR system experts to manually place visuals, require models of real-world objects, or only function for limited tasks or environments. We propose a general-purpose AR task guidance approach for tasks defined by natural language. Our approach allows an operator to take pictures of relevant objects and write task instructions for an end user, which are used by the system to determine where to place guidance visuals. Then, an end user can receive and follow guidance even if objects change locations or environments. Our approach utilizes current vision-language machine learning models for text and image semantic understanding and object localization. We built a proof-of-concept system called TAGGAR using our approach and tested its accuracy and usability in a user study. We found that all operators were able to generate clear guidance for tasks and end users were able to follow the guidance visuals to complete the expected action 85.7% of the time without any knowledge of the tasks. (An illustrative vision-language matching sketch appears after this list.)
- Tapping into community expertise: stakeholder engagement in the design process. Morshedzadeh, Elham; Dunkenberger, Mary Beth; Nagle, Lara; Ghasemi, Shiva; York, Laura; Horn, Kimberly (Taylor & Francis, 2022-10). The Connection to Care (C2C) project, a transdisciplinary work-in-progress, employs community-engaged participatory research and design methods at the nexus of policy adaptation and product innovations. C2C aims to advance practices that identify and leverage the critical junctures at which people with substance use disorder (SUD) seek lifesaving services and treatment, utilizing stakeholder input in all stages of design and development. Beginning in the Fall of 2018, members of our research team engaged with those at the forefront of the addiction crisis, including first responders, harm reduction and peer specialists, treatment providers, and individuals in recovery and in active substance use in a community greatly impacted by SUD. Through this engagement, the concept for programs and products representing a connection to care emerged, including the design of a backpack to meet the needs of individuals with SUD and those experiencing homelessness. From 2020 to 2022, more than 1,200 backpacks with lifesaving and self-care supplies have been distributed in local communities, as one component of the overall C2C initiative. The backpack is a recognized symbol of the program and has served as an impetus for further program and policy explorations, including as a lens to better understand the role of ongoing stigma. Though addiction science has evolved significantly in the wake of the opioid epidemic, artifacts of policies and practices that criminalize and stigmatize SUD remain as key challenges. This paper explains the steps that C2C has taken to address these challenges, and to empower a community that cares.
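The sedentary-behavior entry above (Huysmans et al.) summarizes sitting patterns with three descriptive variables: percent work time sitting, sit-to-stand transitions per hour, and mean sitting-bout duration. As a hedged illustration only, the Python sketch below shows one way such variables could be derived from a posture-classified time series; the per-minute coding scheme and function names are assumptions, not the authors' actual processing pipeline.

```python
# Illustrative only: compute simple sedentary-behavior summary variables
# from a per-minute posture sequence (1 = sitting, 0 = standing/walking).
# The coding scheme is an assumption, not the study's pipeline.

def sedentary_summary(posture_per_minute):
    n = len(posture_per_minute)
    if n == 0:
        return {}

    sitting_minutes = sum(posture_per_minute)

    # Count sit-to-stand transitions.
    transitions = sum(
        1 for prev, curr in zip(posture_per_minute, posture_per_minute[1:])
        if prev == 1 and curr == 0
    )

    # Collect lengths of uninterrupted sitting bouts.
    bouts, current_bout = [], 0
    for sample in posture_per_minute:
        if sample == 1:
            current_bout += 1
        elif current_bout:
            bouts.append(current_bout)
            current_bout = 0
    if current_bout:
        bouts.append(current_bout)

    hours = n / 60.0
    return {
        "percent_time_sitting": 100.0 * sitting_minutes / n,
        "sit_to_stand_per_hour": transitions / hours,
        "mean_sitting_bout_min": sum(bouts) / len(bouts) if bouts else 0.0,
    }

# Example: a hypothetical 8-hour (480-minute) workday, mostly seated.
day = [1] * 50 + [0] * 10 + ([1] * 40 + [0] * 20) * 7
print(sedentary_summary(day))
```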
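The Force Push entry describes driving physics-based object motion from hand-gesture features rather than from direct position mapping. The paper's actual algorithm is not reproduced here; the following is only a minimal sketch of the general idea, with the nonlinear transfer function, gain constants, and simple Euler integration chosen for illustration.

```python
# Illustrative gesture-to-force sketch (not the published Force Push algorithm).
# A push gesture's speed is mapped nonlinearly to a force on the remote object:
# slow hand motion yields gentle refinement nudges, fast motion yields large,
# ballistic movements.

GAIN = 2.0        # assumed force gain
EXPONENT = 2.0    # assumed nonlinearity: emphasizes fast gestures
MASS = 1.0        # object mass (kg)
DAMPING = 0.8     # simple per-step velocity damping

def gesture_force(hand_speed, push_direction):
    """Map hand speed (m/s) along a push direction to a force vector."""
    magnitude = GAIN * (hand_speed ** EXPONENT)
    return [magnitude * d for d in push_direction]

def step(position, velocity, force, dt=1.0 / 90.0):
    """One Euler physics step at an assumed 90 Hz VR frame rate."""
    velocity = [DAMPING * (v + f / MASS * dt) for v, f in zip(velocity, force)]
    position = [p + v * dt for p, v in zip(position, velocity)]
    return position, velocity

# Example: two gentle refinement pushes, then one strong ballistic push along +x.
pos, vel = [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
for speed in (0.1, 0.1, 2.0):
    pos, vel = step(pos, vel, gesture_force(speed, [1.0, 0.0, 0.0]))
print(pos)
```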
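The AR simulation entry compares conditions in which real-world and virtual imagery carry matched or unmatched latency. One common way to add latency in such a testbed is to delay each stream's tracking samples through a fixed-length buffer; the sketch below shows that idea in outline only, with the frame rate and delay values chosen arbitrarily rather than taken from the paper.

```python
# Illustrative latency simulation: delay pose samples for the "real-world"
# and "virtual" image streams independently by buffering them for a fixed
# number of frames. Frame rate and delays are arbitrary example values.
from collections import deque

class DelayLine:
    """Returns the pose from `delay_frames` frames ago."""
    def __init__(self, delay_frames, initial_pose=(0.0, 0.0, 0.0)):
        self.buffer = deque([initial_pose] * (delay_frames + 1),
                            maxlen=delay_frames + 1)

    def push(self, pose):
        self.buffer.append(pose)   # newest sample in, oldest sample out
        return self.buffer[0]      # oldest remaining sample = delayed pose

FRAME_RATE = 90                          # assumed simulator frame rate (Hz)
real_delay = DelayLine(delay_frames=9)   # ~100 ms "real-world" latency
virtual_delay = DelayLine(delay_frames=9)  # matched condition: same delay

for frame in range(12):
    head_pose = (frame * 0.01, 1.6, 0.0)   # toy head trajectory
    real_pose = real_delay.push(head_pose)
    virtual_pose = virtual_delay.push(head_pose)
    # In a real testbed these delayed poses would drive rendering of the
    # simulated real environment and the simulated augmentations.
    print(frame, real_pose, virtual_pose)
```

Setting different `delay_frames` values for the two streams would correspond to the unmatched-latency conditions described in the entry.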
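The fake-news entry rests on the third-person effect: respondents rate the influence of fake news on others as greater than on themselves. A minimal way to quantify that gap from survey responses is a paired comparison of the two ratings; the sketch below uses hypothetical ratings generated for illustration and is not the authors' analysis.

```python
# Illustrative third-person-effect (TPE) check on hypothetical survey data:
# each respondent rates the perceived influence of fake news on themselves
# and on others (e.g., a 1-7 scale). TPE is the mean others-minus-self gap.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)
n = 335                                            # sample size echoing the study
self_ratings = rng.integers(1, 5, size=n)          # hypothetical data
other_ratings = self_ratings + rng.integers(0, 3, size=n)  # skewed upward

tpe = (other_ratings - self_ratings).mean()
t_stat, p_value = ttest_rel(other_ratings, self_ratings)
print(f"mean TPE = {tpe:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```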
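The TAGGAR entry relies on vision-language models to relate written task instructions to operator-supplied photos of objects. The authors' pipeline is not reproduced here; as a loose sketch of the underlying text-image matching step, the snippet below scores how well each reference photo matches an instruction using an off-the-shelf CLIP model from the Hugging Face transformers library. The instruction text and image filenames are hypothetical.

```python
# Illustrative text-image matching with an off-the-shelf CLIP model
# (not TAGGAR's actual pipeline): score operator-supplied reference photos
# against a natural-language task instruction.
from PIL import Image
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

instruction = "turn the red valve clockwise"             # example instruction
photo_paths = ["valve.jpg", "toolbox.jpg", "gauge.jpg"]  # hypothetical files
images = [Image.open(p) for p in photo_paths]

inputs = processor(text=[instruction], images=images,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_text has shape (num_texts, num_images); higher means better match.
scores = outputs.logits_per_text.softmax(dim=-1)[0]
best = photo_paths[int(scores.argmax())]
print(f"best-matching reference photo: {best}")
```

A full system would still need to localize the matched object in the end user's camera view before anchoring guidance visuals; that step is outside this sketch.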