Browsing by Author "Lu, Feiyu"
- Collaborative behavior, performance and engagement with visual analytics tasks using mobile devices
  Chen, Lei; Liang, Hai-Ning; Lu, Feiyu; Papangelis, Konstantinos; Man, Ka L.; Yue, Yong (2020-11-22)
  Interactive visualizations are external tools that can support users’ exploratory activities, and collaboration can bring benefits to the exploration of visual representations or visualizations. This research investigates the use of co-located collaborative visualizations on mobile devices: how working with two different modes of interaction and view (Shared or Non-Shared), and being placed in various position arrangements (Corner-to-Corner, Face-to-Face, and Side-by-Side), affects users’ knowledge acquisition, engagement level, and learning efficiency. A user study was conducted with 60 participants divided into 6 groups (2 modes × 3 positions) using a tool we developed to support the collaborative exploration of 3D visual structures. Our results show that the shared control and view version in the Side-by-Side position is the most favorable and can improve task efficiency. In this paper, we present these results and a set of recommendations derived from them.
- Effect of Collaboration Mode and Position Arrangement on Immersive Analytics Tasks in Virtual Reality: A Pilot Study
  Chen, Lei; Liang, Hai-Ning; Lu, Feiyu; Wang, Jialin; Chen, Wenjun; Yue, Yong (MDPI, 2021-11-08)
  [Background] Virtual reality (VR) technology can provide unique immersive experiences for groups of users, especially for learning-oriented analytics tasks involving visual information. Providing a shared control/view may improve task performance and enhance the user experience during VR collaboration. [Objectives] This research therefore explores the effect of collaboration modes and user position arrangements on task performance, user engagement, and collaboration behaviors and patterns in a VR learning environment that supports immersive collaborative tasks. [Method] The study involved two collaboration modes (shared and non-shared view and control) and three position arrangements (side-by-side, corner-to-corner, and back-to-back). A user study was conducted with 30 participants divided into three groups (Single, Shared, and Non-Shared) using a VR application that allowed users to explore the structural and transformational properties of 3D geometric shapes. [Results] The results showed that the shared mode led to higher task performance than single use for learning analytics tasks in VR. In addition, the side-by-side position scored higher and was favored for enhancing the collaborative experience. [Conclusion] The shared view appears more suitable for improving task performance in collaborative VR, and the side-by-side position may provide a better user experience when collaborating in VR learning environments. From these results, a set of guidelines for the design of collaborative visualizations in VR environments is distilled and presented at the end of the paper. Although our experiment is based on a co-located setting with two users, the results are applicable to both co-located and distributed collaborative scenarios with two or more users.
- Exploring Spatial UI Transition Mechanisms with Head-Worn Augmented Reality
  Lu, Feiyu; Xu, Yan (ACM, 2022-04-29)
  Imagine a future in which people comfortably wear augmented reality (AR) displays all day: how do we design interfaces that adapt to contextual changes as people move around? In current operating systems, the majority of AR content defaults to staying at a fixed location until manually moved by the user. However, this approach puts the burden of user interface (UI) transition solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. We then addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, and fully automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results provide valuable lessons about the trade-offs between UI automation levels, controllability, and user agency, and about the impact of prediction errors.
- Glanceable AR: Towards a Pervasive and Always-On Augmented Reality Future
  Lu, Feiyu (Virginia Tech, 2023-07-06)
  Augmented reality head-worn displays (AR HWDs) have the potential to assist personal computing and the acquisition of everyday information. With advancements in hardware and tracking, these devices are becoming increasingly lightweight and powerful. They could eventually have the same form factor as a normal pair of eyeglasses, be worn all day, and overlay information pervasively on the real world anywhere and anytime to continuously assist people’s tasks. However, unlike traditional mobile devices, AR HWDs are worn on the head and always visible. If designed without care, the displayed virtual information could be distracting, overwhelming, and take the user’s attention away from important real-world tasks. In this dissertation, we research methods for appropriate information displays and interactions with future all-day AR HWDs by seeking answers to four questions: (1) how to mitigate the distraction AR content causes users; (2) how to prevent AR content from occluding the real-world environment; (3) how to support scalable, on-the-go access to AR content; and (4) how everyday users perceive using AR systems for daily information-acquisition tasks. Our work builds on a theory we developed called Glanceable AR, in which digital information is displayed outside the central field of view of the AR display to minimize distractions but can be accessed through a quick glance (a minimal code sketch of this glance-based reveal idea appears after this list). Through five projects covering seven studies, this work provides theoretical and empirical knowledge to prepare us for a pervasive yet unobtrusive everyday AR future in which overlaid AR information is easily accessible, non-invasive, responsive, and supportive.
- In-the-Wild Experiences with an Interactive Glanceable AR System for Everyday Use
  Lu, Feiyu; Pavanatto, Leonardo; Bowman, Douglas A. (ACM, 2023-10-13)
  Augmented reality head-worn displays (AR HWDs) of the near future will be worn all day, every day, delivering information to users anywhere and anytime. Recent research has explored how information can be presented on AR HWDs to facilitate easy acquisition without intruding on the user’s physical tasks. However, it remains unclear what users would like to do beyond passively viewing information, and what the best ways are to interact with everyday content displayed on AR HWDs. To address this gap, our research focuses on the implementation of a functional prototype that leverages the concept of Glanceable AR while incorporating interaction capabilities that let users take quick actions on their personal information. Rather than leaving users overwhelmed and continuously attentive to virtual information, our system centers on the idea that virtual information should stay invisible and unobtrusive when not needed but be quickly accessible and interactable when it is. Through an in-the-wild study involving three AR experts, our findings shed light on how to design interactions on AR HWDs that support everyday tasks, as well as how people perceive using feature-rich Glanceable AR interfaces during social encounters.
- User-elicited dual-hand interactions for manipulating 3D objects in virtual reality environments
  Nanjappan, Vijayakumar; Liang, Hai-Ning; Lu, Feiyu; Papangelis, Konstantinos; Yue, Yong; Man, Ka L. (2018-10-29)
  Virtual reality (VR) technologies have advanced rapidly in the last few years; prime examples include the Oculus Rift and HTC Vive, both head-worn/mounted displays (HMDs). VR HMDs enable a sense of immersion and allow enhanced, natural interaction experiences with 3D objects. In this research we explore suitable interactions for manipulating 3D objects while users are wearing a VR HMD. In particular, this research focuses on a user-elicitation study to identify natural interactions for 3D manipulation using dual-hand controllers, which have become the standard input devices for VR HMDs. A user-elicitation study asks potential users to provide interactions that feel natural and intuitive for given scenarios. The results of our study suggest that users prefer interactions based on shoulder motions (e.g., shoulder abduction and shoulder horizontal abduction) and elbow flexion movements. In addition, users seem to prefer one-handed interaction, and when two hands are required they prefer interactions that do not demand simultaneous hand movements but instead allow them to alternate between their hands. The results of our study are applicable to the design of dual-hand interactions with 3D objects in a variety of virtual reality environments.
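A note on the Glanceable AR mechanism summarized in the dissertation entry above: its core behavior (content parked outside the central field of view in a minimized form, revealed by a quick glance) can be sketched in a few lines. The following Python sketch is purely illustrative, not code from the dissertation; the threshold value, vector representation, and function names are assumptions made for this example.

```python
import math

# Hypothetical sketch of the Glanceable AR reveal logic: widgets sit
# outside the central field of view in a minimized, unobtrusive form
# and expand only when the user glances at them, i.e., when the angle
# between the gaze direction and the widget direction falls below a
# threshold. The threshold is an assumption, not a value from the work.

GLANCE_THRESHOLD_DEG = 10.0  # assumed half-angle that counts as a glance

def angle_between_deg(a, b):
    """Return the angle in degrees between two 3D direction vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    cos = max(-1.0, min(1.0, dot / (norm_a * norm_b)))
    return math.degrees(math.acos(cos))

def widget_state(gaze_dir, widget_dir):
    """'expanded' when glanced at; otherwise 'minimized' at the periphery."""
    if angle_between_deg(gaze_dir, widget_dir) <= GLANCE_THRESHOLD_DEG:
        return "expanded"   # quick glance: show the full widget content
    return "minimized"      # stay glanceable but unobtrusive

# Example: gaze straight ahead; a widget parked 30 degrees to the right
# stays minimized until the user turns their gaze toward it.
ahead = (0.0, 0.0, 1.0)
parked = (math.sin(math.radians(30)), 0.0, math.cos(math.radians(30)))
print(widget_state(ahead, parked))   # -> minimized
print(widget_state(parked, parked))  # -> expanded
```

A real system would obtain the gaze or head direction from the headset’s tracking APIs rather than raw vectors, but the classification step would look much like this.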