Institute for Creativity, Arts, and Technology (ICAT)
The Institute for Creativity, Arts, and Technology is uniquely partnered with the Center for the Arts at Virginia Tech. By forging a pathway between trans-disciplinary research and art, educational innovation, and scientific and commercial discovery, the institute fosters the creative process and opens new possibilities for exploration and expression through learning, discovery, and engagement.
Browsing Institute for Creativity, Arts, and Technology (ICAT) by Department "Computer Science"
- Bare-hand volume cracker for raw volume data analysis
  Socha, John J.; Laha, Bireswar; Bowman, Douglas A. (2016-09-28)
  Analysis of raw volume data generated from different scanning technologies faces a variety of challenges related to search, pattern recognition, spatial understanding, quantitative estimation, and shape description. In a previous study, we found that the volume cracker (VC) 3D interaction (3DI) technique mitigated some of these problems, but this result was from a tethered glove-based system with users analyzing simulated data. Here, we redesigned the VC to use untethered bare-hand interaction with real volume datasets, with the broader aim of encouraging adoption of this technique in research labs. We developed symmetric and asymmetric interfaces for the bare-hand VC (BHVC) through design iterations with a biomechanics scientist. We evaluated our asymmetric BHVC technique against standard 2D and widely used 3DI techniques with experts analyzing scanned beetle datasets. We found that our BHVC design significantly outperformed the other two techniques. This study contributes a practical 3DI design for scientists, documents lessons learned while redesigning for bare-hand trackers, and provides evidence suggesting that 3DI could improve volume data analysis for a variety of visual analysis tasks. Our contribution is in the realm of 3D user interfaces tightly integrated with visualization for improving the effectiveness of visual analysis of volume datasets. Based on our experience, we also provide some insights into hardware-agnostic principles for the design of effective interaction techniques.
- The Effects of Incorrect Occlusion Cues on the Understanding of Barehanded Referencing in Collaborative Augmented Reality
  Li, Yuan; Hu, Donghan; Wang, Boyuan; Bowman, Douglas A.; Lee, Sang Won (Frontiers, 2021-07-01)
  In many collaborative tasks, the need for joint attention arises when one of the users wants to guide others to a specific location or target in space. If the collaborators are co-located and the target position is in close range, it is almost instinctual for users to refer to the target location by pointing with their bare hands. While such pointing gestures can be efficient and effective in real life, performance suffers if the target is in augmented reality (AR), where depth cues like occlusion may be missing if the pointer's hand is not tracked and modeled in 3D. In this paper, we present a study utilizing head-worn AR displays to examine the effects of incorrect occlusion cues on spatial target identification in a collaborative barehanded referencing task. We found that participants' performance in AR was reduced compared to a real-world condition, but also that they developed new strategies to cope with the limitations of AR. Our work also found mixed results regarding the effect of the spatial relationships between users.
- Read-Agree-Predict: A Crowdsourced Approach to Discovering Relevant Primary Sources for Historians
  Wang, Nai-Ching; Hicks, David; Quigley, Paul; Luther, Kurt (Human Computation Institute, 2019)
  Historians spend significant time looking for relevant, high-quality primary sources in digitized archives and through web searches. One reason this task is time-consuming is that historians' research interests are often highly abstract and specialized. Such topics are unlikely to be manually indexed and are difficult to identify with automated text analysis techniques. In this article, we investigate the potential of a new crowdsourcing model in which the historian delegates to a novice crowd the task of labeling the relevance of primary sources with respect to her unique research interests. The model employs a novel crowd workflow, Read-Agree-Predict (RAP), that allows novice crowd workers to label relevance as accurately as expert historians. As a useful byproduct, RAP also reveals and prioritizes crowd confusions as targeted learning opportunities. We demonstrate the value of our model with two experiments with paid crowd workers (n=170), with the future goal of extending our work to classroom students and public history interventions. We also discuss broader implications for historical research and education.