Browsing by Author "Endert, Alex"
- Co-located Collaboration on a Large, High-Resolution Display
  Vogt, Katherine; North, Christopher L.; Andrews, Christopher; Endert, Alex (2010)
  Few have studied co-located collaboration, let alone co-located collaboration and the sensemaking process. Here, we define co-located collaboration as multiple users working on the same display. Intelligence analysts often must filter through massive amounts of data, which may contain large portions of text. As the benefits of collaboration [1] and large displays [2] have already separately proven themselves, we chose to examine the sensemaking process when these two aspects are combined. The environment we created also included multiple pen input devices to create a multiuser workspace. By observing the user roles adopted, the collaborative processes, the organization of the space, and the perceived ownership or sharing of territory on the display, we hope to contribute valuable insight into the design implications of such software.
- Large Display Interaction via Multiple Acceleration Curves and Multifinger Pointer Control
  Esakia, Andrey; Endert, Alex; North, Christopher L. (Hindawi, 2014-11-25)
  Large high-resolution displays combine high pixel density with ample physical dimensions. The combination of these factors creates a multiscale workspace where interactive targeting of on-screen objects requires both high speed for distant targets and high accuracy for small targets. Modern operating systems support implicit dynamic control-display gain adjustment (i.e., a pointer acceleration curve) that helps to maintain both speed and accuracy. However, large high-resolution displays require a broader range of control-display gains than a single acceleration curve can usably enable. Some interaction techniques attempt to solve the problem by utilizing multiple explicit modes of interaction, where different modes provide different levels of pointer precision. Here, we investigate the alternative hypothesis of using a single mode of interaction for continuous pointing that enables both (1) standard implicit granularity control via an acceleration curve and (2) explicit switching between multiple acceleration curves in an efficient and dynamic way. We evaluate a sample solution that augments standard touchpad accelerated pointer manipulation with multitouch capability, where the choice of acceleration curve dynamically changes depending on the number of fingers in contact with the touchpad. Specifically, users can dynamically switch among three different acceleration curves by using one, two, or three fingers on the touchpad.
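The finger-count curve switching described in this entry can be sketched in a few lines. The specific curve shapes and constants below are illustrative assumptions, not the parameters used in the study; the point is only that the control-display gain is a function of both pointer velocity and the number of fingers in contact.

```python
def gain(velocity, fingers):
    """Control-display gain as a function of input velocity, with the
    acceleration curve chosen by the number of fingers on the touchpad.
    Curve shapes here are invented for illustration."""
    curves = {
        1: lambda v: 1.0 + 0.5 * v,    # coarse curve: high gain, fast traversal
        2: lambda v: 0.5 + 0.25 * v,   # medium curve
        3: lambda v: 0.1 + 0.05 * v,   # fine curve: low gain, precise targeting
    }
    return curves[fingers](velocity)

def displacement(dx_device, velocity, fingers):
    """On-screen pointer displacement for a device-space movement dx_device."""
    return dx_device * gain(velocity, fingers)
```

Because the curve is selected implicitly by the same fingers doing the pointing, switching precision levels costs no explicit mode change.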
- Large High Resolution Displays for Co-Located Collaborative Intelligence Analysis
  Bradel, Lauren; Andrews, Christopher; Endert, Alex; Koch, Kristen; Vogt, Katherine; Hutchings, Duke; North, Christopher L. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2011-11-01)
  Large, high-resolution vertical displays carry the potential to increase the accuracy of collaborative sensemaking, given correctly designed visual analytics tools. From an exploratory user study using a fictional intelligence analysis task, we investigated how users interact with the display to construct spatial schemas and externalize information, as well as how they establish shared and private territories. We investigated the spatial strategies of users partitioned by tool type used (document- or entity-centric). We classified the types of territorial behavior exhibited in terms of how the users interacted with the display (integrated or independent workspaces). Next, we examined how territorial behavior impacted the common ground between the pairs of users. Finally, we recommend design guidelines for building co-located collaborative visual analytics tools specifically for use on large, high-resolution vertical displays.
- Semantic Interaction for Visual Analytics: Inferring Analytical Reasoning for Model Steering
  Endert, Alex (Virginia Tech, 2012-07-10)
  User interaction in visual analytic systems is critical to enabling visual data exploration. Through interacting with visualizations, users engage in sensemaking, a process of developing and understanding relationships within datasets through foraging and synthesis. For example, two-dimensional layouts of high-dimensional data can be generated by dimension reduction models, and provide users with an overview of the relationships between information. However, exploring such spatializations can require expertise with the internal mechanisms and parameters of these models. The core contribution of this work is semantic interaction, capable of steering such models without requiring expertise in dimension reduction models, but instead leveraging the domain expertise of the user. Semantic interaction infers the analytical reasoning of the user with model updates, steering the dimension reduction model for visual data exploration. As such, it is an approach to user interaction that leverages interactions designed for synthesis, and couples them with the underlying mathematical model to provide computational support for foraging. As a result, semantic interaction performs incremental model learning to enable synergy between the user's insights and the mathematical model. The contributions of this work are organized by providing a description of the principles of semantic interaction, providing design guidelines through the development of a visual analytic prototype, ForceSPIRE, and the evaluation of the impact of semantic interaction on the analytic process. The positive results of semantic interaction open a fundamentally new design space for designing user interactions in visual analytic systems.
This research was funded in part by the National Science Foundation, CCF-0937071 and CCF-0937133, the Institute for Critical Technology and Applied Science at Virginia Tech, and the National Geospatial-Intelligence Agency contract #HMI1582-05-1-2001.
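A minimal sketch of the kind of incremental model learning the abstract describes: when an analyst drags two documents closer together, the terms the documents share gain weight, steering the underlying layout model toward the analyst's implicit reasoning. The function name, update rule, and learning rate are illustrative assumptions, not ForceSPIRE's actual implementation.

```python
def update_term_weights(weights, doc_a, doc_b, rate=0.1):
    """Illustrative semantic-interaction update: the user has dragged
    doc_a and doc_b together, so upweight their shared terms.
    doc_a and doc_b are lists of terms; weights maps term -> weight."""
    shared = set(doc_a) & set(doc_b)
    for term in shared:
        weights[term] = weights.get(term, 1.0) * (1.0 + rate)
    total = sum(weights.values())
    # Renormalize so weights remain comparable across updates.
    return {t: w / total for t, w in weights.items()}
```

The key property is that the user never touches model parameters directly; the parameters are inferred from an interaction (moving documents) that is natural to the synthesis process itself.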
- Space for Two to Think: Large, High-Resolution Displays for Co-located Collaborative Sensemaking
  Bradel, Lauren; Andrews, Christopher; Endert, Alex; Vogt, Katherine; Hutchings, Duke; North, Christopher L. (Department of Computer Science, Virginia Polytechnic Institute & State University, 2011-06-01)
  Large, high-resolution displays carry the potential to enhance single display groupware collaborative sensemaking for intelligence analysis tasks by providing space for common ground to develop, but it is up to the visual analytics tools to utilize this space effectively. In an exploratory study, we compared two tools (Jigsaw and a document viewer), which were adapted to support multiple input devices, to observe how the large display space was used in establishing and maintaining common ground during an intelligence analysis scenario using 50 textual documents. We discuss the spatial strategies employed by the pairs of participants, which were largely dependent on tool type (data-centric or function-centric), as well as how different visual analytics tools used collaboratively on large, high-resolution displays impact common ground in both process and solution. Using these findings, we suggest design considerations to enable future co-located collaborative sensemaking tools to take advantage of the benefits of collaborating on large, high-resolution displays.
- Virginia Tech Student Presentations
  Endert, Alex; Sathre, Paul (2013-05-06)
  Sathre: In today's society, we often don't realize that parallel computing is everywhere. Parallel computing is no longer something found only in supercomputers; devices such as laptops, tablets, and even cell phones take advantage of parallel computing nowadays. In this presentation, Sathre gives an overview of how an accelerator (GPU) is used in comparison to a CPU. He also discusses translation from CUDA to OpenCL. Endert's website: http://people.cs.vt.edu/aendert/Alex_Endert/Home.html
- Visual to Parametric Interaction (V2PI)
  Leman, Scotland C.; House, Leanna L.; Maiti, Dipayan; Endert, Alex; North, Christopher L. (PLOS, 2013-03-20)
  Typical data visualizations result from linear pipelines that start by characterizing data using a model or algorithm to reduce the dimension and summarize structure, and end by displaying the data in a reduced dimensional form. Sensemaking may take place at the end of the pipeline when users have an opportunity to observe, digest, and internalize any information displayed. However, some visualizations mask meaningful data structures when model or algorithm constraints (e.g., parameter specifications) contradict information in the data. Yet, due to the linearity of the pipeline, users do not have a natural means to adjust the displays. In this paper, we present a framework for creating dynamic data displays that rely on both mechanistic data summaries and expert judgement. The key is that we develop both the theory and methods of a new human-data interaction, to which we refer as "Visual to Parametric Interaction" (V2PI). With V2PI, the pipeline becomes bidirectional in that users are embedded in the pipeline; users learn from visualizations and the visualizations adjust to expert judgement. We demonstrate the utility of V2PI and a bidirectional pipeline with two examples.
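A toy illustration of the bidirectional pipeline idea: in the forward direction, dimension weights determine on-screen distances; in the inverse direction, the user drags two points to a new distance and the weights are updated to honor that judgement. The update rule and function names below are assumptions for illustration, not the paper's actual V2PI formulation.

```python
import math

def weighted_dist(x, y, w):
    """Forward direction: weighted Euclidean distance the display would show."""
    return math.sqrt(sum(wi * (xi - yi) ** 2 for xi, yi, wi in zip(x, y, w)))

def steer_weights(x, y, w, target, rate=0.5):
    """Inverse direction (illustrative): the user drags points x and y to a new
    on-screen distance `target`; rescale the dimension weights so the weighted
    distance moves toward it. rate=1.0 matches the target exactly."""
    current = weighted_dist(x, y, w)
    scale = (target / current) ** 2  # weights sit under a square root
    return [wi * (1.0 - rate + rate * scale) for wi in w]
```

Embedding the user this way closes the loop: each drag becomes a parametric constraint, and the recomputed layout reflects expert judgement alongside the mechanistic summary.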