Browsing by Author "Lisle, Lee"
Now showing 1 - 3 of 3
- CrowdLayout: Crowdsourced Design and Evaluation of Biological Network Visualizations
  Singh, Divit P.; Lisle, Lee; Murali, T. M.; Luther, Kurt (ACM, 2018-04)
  Biologists often perform experiments whose results generate large quantities of data, such as interactions between molecules in a cell, that are best represented as networks (graphs). To visualize these networks and communicate them in publications, biologists must manually position the nodes and edges of each network to reflect their real-world physical structure. This process does not scale well, and graph layout algorithms lack the biological underpinnings to offer a viable alternative. In this paper, we present CrowdLayout, a crowdsourcing system that leverages human intelligence and creativity to design layouts of biological network visualizations. CrowdLayout provides design guidelines, abstractions, and editing tools to help novice workers perform like experts. We evaluated CrowdLayout in two experiments with paid crowd workers and real biological network data, finding that crowds could both create and evaluate meaningful, high-quality layouts. We also discuss implications for crowdsourced design and network visualizations in other domains.
- Effects of Volumetric Augmented Reality Displays on Human Depth Judgments: Implications for Heads-Up Displays in Transportation
  Lisle, Lee; Merenda, Coleman; Tanous, Kyle; Kim, Hyungil; Gabbard, Joseph L.; Bowman, Douglas A. (IGI Global, 2019)
  Many driving scenarios involve correctly perceiving road elements in depth and manually responding as appropriate. Of late, augmented reality (AR) head-up displays (HUDs) have been explored to assist drivers in identifying road elements, by using a myriad of AR interface designs that include world-fixed graphics perceptually placed in the forward driving scene. Volumetric AR HUDs purportedly offer increased accuracy of distance perception through natural presentation of oculomotor cues as compared to traditional HUDs. In this article, the authors quantify participant performance matching virtual objects to real-world counterparts at egocentric distances of 7-12 meters while using both volumetric and fixed-focal-plane AR HUDs. The authors found the volumetric HUD to be associated with faster and more accurate depth judgements at far distance, and that participants performed depth judgements more quickly as the experiment progressed. The authors observed no differences between the two displays in terms of reported simulator sickness or eye strain.
- A Longitudinal Exploration of Sensemaking Strategies in Immersive Space to Think
  Davidson, Kylie; Lisle, Lee; Whitley, Kirsten; Bowman, Douglas A.; North, Christopher L. (IEEE, 2022-09)
  Existing research on immersive analytics to support the sensemaking process focuses on single-session sensemaking tasks. However, in the wild, sensemaking can take days or months to complete. In order to understand the full benefits of immersive analytic systems, we need to understand how such systems provide flexibility for the dynamic nature of the sensemaking process. In our work, we build upon an existing immersive analytic system, Immersive Space to Think, to evaluate how immersive analytic systems can support sensemaking tasks over time. We conducted a user study with eight participants, each completing three separate analysis sessions. We found significant differences in analysis strategies between sessions one, two, and three, which suggest that Immersive Space to Think can benefit analysts during multiple stages of the sensemaking process.