Browsing by Author "Jeon, Myounghoon"
Now showing 1 - 20 of 65
- The 4th Workshop on Localization vs. Internationalization: Creating an International Survey on Automotive User Interfaces
  Stojmenova, Kristina; Lee, Seul Chan; De Oliveira Faria, Nayara; Schroeter, Ronald; Jeon, Myounghoon (ACM, 2022-09-17)
  International surveys tend to collect data on attitudes, values, and behaviors towards a specific topic from users in multiple countries, providing insight into the differences and similarities across nations, cultures, or geo-political structures. Consequently, international surveys provide important information about the diversity of users' needs, values, and preferences, which have to be taken into consideration when creating products and services as widely used as the personal automobile. The workshop will focus on the design and development of an international survey on automotive user interfaces on a global scale. It will try to identify the most important aspects related to automotive user interfaces, which should be addressed in the survey. It will also prepare a strategy for its international distribution and create a plan for comprehensive data collection. Lastly, it will try to outline venues and communication channels for the survey dissemination, with the goal of achieving wide visibility.
- Am I Really Angry? The Influence of Anger Intensities on Young Drivers' Behaviors
  Wang, Manhua; Jeon, Myounghoon (ACM, 2023-09-18)
  Anger can lead to aggressive driving and other negative behaviors. While previous studies treated anger as a single dimension, the present research proposed that anger has distinct intensities and aimed to understand the effects of different anger intensities on driver behaviors. After developing the anger induction materials, we conducted a driving simulator study with 30 participants and assigned them to low, medium, and high anger intensity groups. We found that drivers with low anger intensity were not able to recognize their emotions and exhibited speeding behaviors, while drivers with medium and high anger intensities might be aware of their anger along with its adverse effects and then adjusted their longitudinal control. However, angry drivers generally exhibited compromised lateral control indicated by steering and lane-keeping behaviors. Our findings shed light on the potentially different influences of anger intensities on young drivers' behaviors, especially the importance of anger recognition for intervention solutions.
- Bridging the Gap: Early Education on Robot and AI Ethics through the Robot Theater Platform in an Informal Learning Environment
  Mitchell, Jennifer; Dong, Jiayuan; Yu, Shuqi; Harmon, Madison; Holstein, Alethia; Shim, Joon Hyun; Choi, Koeun; Zhu, Qin; Jeon, Myounghoon (ACM, 2024-03-11)
  With the rapid advancement of robotics and AI, educating the next generation on ethical coexistence with these technologies is crucial. Our research explored the potential of a child-robot theater afterschool program in introducing and discussing robot and AI ethics with elementary school children. Conducted with 30 participants from a socioeconomically underprivileged school, the program blended STEM (Science, Technology, Engineering & Mathematics) with the arts, focusing on ethical issues in robotics and AI. Using interactive scenarios and a theatrical performance, the program aimed to enhance children's understanding of major ethical issues in robotics and AI, such as bias, transparency, privacy, usage, and responsibility. Preliminary findings indicate the program's success in engaging children in meaningful ethical discussions, demonstrating the potential of innovative, interactive educational methods in early education. This study contributes significantly to integrating ethical robotics and AI in early learning, preparing young minds for a technologically advanced and socially responsible future.
- A child-robot musical theater afterschool program for promoting STEAM education: A case study and guidelines
  Dong, Jia; Choi, Koeun; Yu, Shuqi; Lee, Yeaji; Kim, Jisun; Vajir, Devanshu; Haines, Chelsea; Newbill, Phyllis; Wyatt, Ariana; Upthegrove, Tanner; Jeon, Myounghoon (Taylor & Francis, 2023-03-16)
  With the advancement of machine learning and AI technologies, robots have become more widely used in everyday life, including in education. The present study introduces a 12-week child-robot theater afterschool program designed to promote science, technology, engineering, and mathematics (STEM) education with art elements (STEAM) for elementary students using social robots. Four modules were designed to introduce robot mechanisms as well as arts: Acting (anthropomorphism), Dance (robot movements), Music and Sounds (music composition), and Drawing (robot art). These modules provided children with basic knowledge about robotics and STEM and guided children to create a live robot theater play. A total of 16 students participated in the program, and 11 of them were involved in completing questionnaires and interviews regarding their perceptions towards robots, STEAM, and the afterschool program. Four afterschool program teachers participated in interviews, reflecting on their perceptions of the program and observations of children's experiences during the program. Our findings suggest that the present program effectively maintained children's engagement and improved their interest in STEAM by connecting social robots and theater production. We conclude with design guidelines and recommendations for future research and programs.
- Comparative Analysis of Facial Affect Detection Algorithms
  Thomas, Ashin Marin (2020-05-22)
  There has been much research on facial affect detection, but many approaches fall short of accurately identifying expressions due to changes in illumination, occlusion, or noise in uncontrolled environments. Also, not much research has been conducted on implementing the algorithms using multiple datasets, varying the dataset size and the dimensions of each image. My ultimate goal is to develop an optimized algorithm that can be used for real-time affect detection in automated vehicles. In this study, I implemented facial affect detection algorithms with various datasets and conducted a comparative analysis of performance across the algorithms. The algorithms implemented in the study included a Convolutional Neural Network (CNN) in TensorFlow, FaceNet using Transfer Learning, and Capsule Network. Each of these algorithms was trained using the three datasets (FER2013, CK+, and Ohio) to get the predicted results. The Capsule Network showed the best detection accuracy (99.3%) with the CK+ dataset. Results are discussed with implications and future work.
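The comparative step in the abstract above, computing detection accuracy for each algorithm-dataset pair and selecting the best combination, can be sketched in a few lines. This is an illustrative sketch only: the helper names are mine, and every figure except the abstract's reported 99.3% for the Capsule Network on CK+ is a placeholder.

```python
# Hypothetical sketch of the comparative analysis: compute detection accuracy
# per (algorithm, dataset) pair, then pick the best-performing combination.
# All figures except 99.3% (Capsule Network on CK+) are made up.

def accuracy(y_true, y_pred):
    """Share of test images whose predicted expression matches the label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Placeholder per-pair accuracies, standing in for real evaluation results.
results = {
    ("CNN", "FER2013"):         0.65,   # placeholder
    ("FaceNet", "CK+"):         0.95,   # placeholder
    ("Capsule Network", "CK+"): 0.993,  # best result reported in the abstract
}

best = max(results, key=results.get)
print(best, results[best])  # → ('Capsule Network', 'CK+') 0.993
```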
- Conversational Voice Agents are Preferred and Lead to Better Driving Performance in Conditionally Automated Vehicles
  Wang, M.; Lee, S. C.; Montavon, G.; Qin, J.; Jeon, Myounghoon (ACM, 2022-09-17)
  In-vehicle intelligent agents (IVIAs) can provide versatile information on vehicle status and road events and further promote user perceptions such as trust. However, IVIAs need to be constructed carefully to reduce distraction and prevent unintended consequences like overreliance, especially when driver intervention is still required in conditional automation. To investigate the effects of speech style (informative vs. conversational) and embodiment (voice-only vs. robot) of IVIAs on driver perception and performance in conditionally automated vehicles, we recruited 24 young drivers to experience four driving scenarios in a simulator. Results indicated that although robot agents received higher system response accuracy and trust scores, they were not preferred due to greater visual distraction. Conversational agents were generally favored and led to better takeover quality in terms of lower speed and smaller standard deviation of lane position. Our findings provide a valuable perspective on balancing user preference and subsequent user performance when designing IVIAs.
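The takeover-quality metric named above, standard deviation of lane position (SDLP), is straightforward to compute from lateral-position samples. A minimal sketch, with made-up lane-position values rather than data from the study:

```python
# Minimal sketch of SDLP (standard deviation of lane position), a common
# lateral-control metric. Sample values are illustrative, not study data.
from statistics import pstdev

def sdlp(lane_positions):
    """Population standard deviation of lateral lane position, in meters."""
    return pstdev(lane_positions)

steady  = [0.05, 0.02, -0.03, 0.04, -0.02, 0.01]   # tight lane keeping
weaving = [0.40, -0.35, 0.50, -0.45, 0.30, -0.25]  # oscillating around center

# A smaller SDLP indicates better lateral control.
assert sdlp(steady) < sdlp(weaving)
print(round(sdlp(steady), 3), round(sdlp(weaving), 3))
```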
- Cro-Create: Weaving Sound Using Crochet Gestures
  Bruen, Jacqueline; Kwon, Henry; Jeon, Myounghoon (ACM, 2023-06-19)
  Cro-Create is a crochet gesture recognition sonifier for individual and collaborative use. In single user mode, Cro-Create directly scales and maps numerical palm orientation values detected by a motion sensor to sound. In dual user mode, the system provides additional auditory feedback by detecting when two users' gestures are synchronized: it segments the gestural procedure of making a crochet stitch into three stages and uses a dynamic time warping algorithm to classify and recognize these stages; when the system determines that both users have produced the same gesture, the sonification is complemented by a distinguishable chord. Through this demonstration, we introduce our tool for sharing the procedural state of a physical craft-making process, crochet, through sound.
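The stage classification described above can be sketched with classic dynamic time warping: compare an incoming palm-orientation trace against a template for each stage and pick the closest one. This is a hedged illustration, not the Cro-Create implementation; the stage names and traces below are invented for demonstration.

```python
# Hypothetical sketch of DTW-based gesture-stage classification: a standard
# dynamic time warping distance plus nearest-template matching. Stage labels
# and all numeric traces are illustrative, not from the Cro-Create system.

def dtw_distance(a, b):
    """DTW distance between two 1-D sequences (e.g., palm-orientation traces)."""
    inf = float("inf")
    n, m = len(a), len(b)
    # cost[i][j] = minimum accumulated cost aligning a[:i] with b[:j]
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # step in a only
                                 cost[i][j - 1],      # step in b only
                                 cost[i - 1][j - 1])  # step in both
    return cost[n][m]

def classify_stage(sample, templates):
    """Return the stage label whose template has the smallest DTW distance."""
    return min(templates, key=lambda label: dtw_distance(sample, templates[label]))

# Three invented stage templates for one crochet stitch.
templates = {
    "yarn_over":    [0.0, 0.2, 0.5, 0.8, 1.0],
    "insert_hook":  [1.0, 0.7, 0.4, 0.2, 0.0],
    "pull_through": [0.0, 0.5, 1.0, 0.5, 0.0],
}

sample = [0.1, 0.25, 0.55, 0.85, 0.95]   # noisy, slightly shifted trace
print(classify_stage(sample, templates))  # → yarn_over
```

DTW tolerates the timing variation inherent in hand gestures, which is why it is a common choice for matching gesture sequences of different speeds.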
- Designing Explainable In-vehicle Agents for Conditionally Automated Driving: A Holistic Examination with Mixed Method Approaches
  Wang, Manhua (Virginia Tech, 2024-08-16)
  Automated vehicles (AVs) are promising applications of artificial intelligence (AI). While human drivers benefit from AVs, including long-distance support and collision prevention, we do not always understand how AV systems function and make decisions. Consequently, drivers might develop inaccurate mental models and form unrealistic expectations of these systems, leading to unwanted incidents. Although efforts have been made to support drivers' understanding of AVs through in-vehicle visual and auditory interfaces and warnings, these may not be sufficient or effective in addressing user confusion and overtrust in in-vehicle technologies, sometimes even creating negative experiences. To address this challenge, this dissertation conducts a series of studies to explore the possibility of using the in-vehicle intelligent agent (IVIA) in the form of the speech user interface to support drivers, aiming to enhance safety, performance, and satisfaction in conditionally automated vehicles. First, two expert workshops were conducted to identify design considerations for general IVIAs in the driving context. Next, to better understand the effectiveness of different IVIA designs in conditionally automated driving, a driving simulator study (n=24) was conducted to evaluate four types of IVIA designs varying by embodiment conditions and speech styles. The findings indicated that conversational agents were preferred and yielded better driving performance, while robot agents caused greater visual distraction. Then, contextual inquiries with 10 drivers owning vehicles with advanced driver assistance systems (ADAS) were conducted to identify user needs and the learning process when interacting with in-vehicle technologies, focusing on interface feedback and warnings. Subsequently, through expert interviews with seven experts from AI, social science, and human-computer interaction domains, design considerations were synthesized for improving the explainability of AVs and preventing associated risks. With information gathered from the first four studies, three types of adaptive IVIAs were developed based on human-automation function allocation and investigated in terms of their effectiveness on drivers' response time, driving performance, and subjective evaluations through a driving simulator study (n=39). The findings indicated that although drivers preferred more information provided to them, their response time to road hazards might be degraded when receiving more information, highlighting the importance of balancing safety and satisfaction. Taken together, this dissertation indicates the potential of adopting IVIAs to enhance the explainability of future AVs. It also provides key design guidelines for developing IVIAs and constructing explanations critical for safer and more satisfying AVs.
- Development and Evaluation of an Assistive In-Vehicle System for Responding to Anxiety in Smart Vehicles
  Nadri, Chihab (Virginia Tech, 2023-10-18)
  The integration of automated vehicle technology into our transportation infrastructure is ongoing, yet the precise timeline for the introduction of fully automated vehicles remains ambiguous. This technological transition necessitates the creation of in-vehicle displays tailored to emergent user needs and concerns. Notably, driving-induced anxiety, already a concern, is projected to assume greater significance in this context, although it remains inadequately researched. This dissertation sought to delve into the phenomenon of anxiety in driving, assess its implications in future transportation modalities, elucidate design considerations for distinct demographics like the youth and elderly, and design and evaluate an affective in-vehicle system to alleviate anxiety in automated driving through four studies. The first study involved two workshops with automotive experts, who underscored anxiety as pivotal to sustaining trust and system acceptance. The second study was a qualitative focus group analysis incorporating both young and older drivers, aiming to distill anxiety-inducing scenarios in automated driving and pinpoint potential intervention strategies and feedback modalities. This was followed by two driving simulator evaluations. The third study was observational, seeking to discern correlations among personality attributes, anxiety, and trust in automated driving systems. The fourth study employed cognitive reappraisal for anxiety reduction in automated driving. Analysis indicated that the empathic interface leveraging cognitive reappraisal was an effective anxiety-amelioration tool. Particularly in the self-efficacy reappraisal context, it influenced trust, user experience, and anxiety markers. Cumulatively, this dissertation provides key design guidelines for anxiety mitigation in automated driving, and highlights design elements pivotal to augmenting user experiences in scenarios where drivers relinquish vehicular control.
- Development of Shared Situation Awareness Guidelines and Metrics as Developmental and Analytical Tools for Augmented and Virtual Reality User Interface Design in Human-Machine Teams
  Van Dam, Jared Martindale Mccolskey (Virginia Tech, 2023-08-21)
  As the frontiers and futures of work evolve, humans and machines will begin to share a more cooperative working space where collaboration occurs freely amongst the constituent members. To this end, it is then necessary to determine how information should flow amongst team members to allow for the efficient sharing and accurate interpretation of information between humans and machines. Shared situation awareness (SSA), the degree to which individuals can access and interpret information from sources other than themselves, is a useful framework from which to build design guidelines for the aforementioned information exchange. In this work, we present initial augmented/virtual reality (AR/VR) design principles for shared situation awareness that can help designers both (1) design efficacious interfaces based on these fundamental principles, and (2) evaluate the effectiveness of candidate interface designs based on measurement tools we created via a scoping literature review. This work achieves these goals with focused studies that 1) show the importance of SSA in augmented reality-supported tasks, 2) describe design guidelines and measurement tools necessary to support SSA, and 3) validate the guidelines and measurement tools with a targeted user study that employs an SSA-derived AR interface to confirm the guidelines distilled from the literature review.
- Echofluid: An Interface for Remote Choreography Learning and Co-creation Using Machine Learning Techniques
  Wang, Marx; Duer, Zachary; Hardwig, Scotty; Lally, Sam; Ricard, Alayna; Jeon, Myounghoon (ACM, 2022-10-29)
  Born from physical activity, dance carries meaning beyond mere body movement. Choreographers interact with audiences' perceptions through the kinaesthetics, creativity, and expressivity of whole-body performance, inviting them to construct experience, emotion, culture, and meaning together. Computational choreography support can bring endless possibilities to one of the most experiential and creative artistic forms. While various interactive and motion technologies have been developed and adopted to support creative choreographic processes, little work has explored incorporating machine learning into a choreographic system, and few remote dance teaching systems in particular have been suggested. In this exploratory work, we proposed Echofluid, a novel AI-based choreographic learning and support system that allows student dancers to compose their own AI models for learning, evaluation, exploration, and creation. In this poster, we present the design, development, and ongoing validation process of Echofluid, and discuss the possibilities of applying machine learning in collaborative art and dance as well as the opportunities of augmenting interactive experiences between the performers and audiences with emerging technologies.
- Editorial: Contextualized Affective Interactions With Robots
  Jeon, Myounghoon; Park, Chung Hyuk; Kim, Yunkyung; Riener, Andreas; Mara, Martina (Frontiers, 2021-11-02)
- The Effect of Interaction Method and Vibrotactile Feedback on User Experience and Performance in the VR Games
  Moon, Hye Sung (Virginia Tech, 2022-05-23)
  Recent hand tracking systems have contributed to enhancing user experience in the virtual environment (VE) due to their natural and intuitive interaction. In addition, wearable haptic devices are another approach to provide engaging and immersive experiences. However, controllers are still prevalent in VR (Virtual Reality) games as the main interaction device. Also, haptic devices are rare and not widely adopted because they become bulky when implementing sophisticated haptic sensations. To overcome this issue, I conducted experiments (Study 1 and Study 2 of this Thesis) to investigate the effect of interaction method (controller and whole-hand interaction using hand tracking) and vibrotactile feedback on user experience in the VR game. In Study 1 of this Thesis, I recruited 36 participants and compared the user's sense of presence, engagement, usability, and task performance under three different conditions: (1) VR controllers, (2) hand tracking without vibrotactile feedback, and (3) hand tracking with vibrotactile feedback at fingertips through the gloves I developed. The gloves deliver vibrotactile feedback at each fingertip via vibration motors. I observed that whole-hand interaction using hand tracking enhanced the user's sense of presence, engagement, usability, and task performance. Vibrotactile feedback further increased presence and engagement. Based on the participants' feedback, I could further modify the form factor to make it more usable in the VR game and comfortable to wear on a regular basis. In this sense, in Study 2 of this Thesis, I developed a new thimble-shape device to deliver vibrotactile feedback only at one fingertip rather than ten fingertips. Further, social VR is an emerging VR platform where multiple users can interact with one another. However, most social VR applications have not provided a sense of touch. I recruited 24 participants and conducted an experiment that explored the effects of interaction method and fingertip vibrotactile feedback on the user's sense of social presence, presence, engagement, and task performance in a cooperative VR game under four different conditions: (1) VR controllers without vibrotactile feedback, (2) VR controllers with vibrotactile feedback, (3) hand tracking without vibrotactile feedback, and (4) hand tracking with vibrotactile feedback through the fingertip vibrotactile device. The results showed that whole-hand interaction using hand tracking increased the level of presence. In addition, multiple items in the presence questionnaire indicated that vibrotactile feedback enhanced the level of presence as well. However, I could not observe a significant difference in social presence due to the unique setting of this experiment. Unlike the previous studies, my task was sufficiently cooperative, and thus, the participants felt a high level of social presence regardless of the condition, which led to a ceiling effect. I also observed that there was no significant difference in engagement. Controller conditions had higher performance than hand tracking due to technological limitations in hand tracking. Results are discussed in terms of implications for hand-based interaction in VR, touch in social VR, cooperative VR games, and practical design guidelines.
- The Effects of a Humanoid Robot's Non-lexical Vocalization on Emotion Recognition and Robot Perception
  Liu, Xiaozhen (Virginia Tech, 2023-06-30)
  As robots have become more pervasive in our everyday life, social aspects of robots have attracted researchers' attention. Because emotions play a key role in social interactions, research has been conducted on conveying emotions via speech, whereas little research has focused on the effects of non-speech sounds on users' robot perception. We conducted a within-subjects exploratory study with 40 young adults to investigate the effects of non-speech sounds (regular voice, characterized voice, musical sound, and no sound) and basic emotions (anger, fear, happiness, sadness, and surprise) on user perception. While listening to a fairy tale with the participant, a humanoid robot (Pepper) responded to the story with a recorded emotional sound and a gesture. Participants showed significantly higher emotion recognition accuracy from the regular voice than from other sounds. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research. The regular voice also induced higher trust, naturalness, and preference compared to other sounds. Interestingly, the musical sound mostly showed lower perceptions than no sound. A further exploratory study was conducted with an additional 49 young people to investigate the effect of regular non-verbal voices (female voices and male voices) and basic emotions (happiness, sadness, anger, and relief) on user perception. We also further explored the impact of participants' gender on emotion and social perception toward the robot Pepper. While listening to a fairy tale with the participants, a humanoid robot (Pepper) responded to the story with gestures and emotional voices. Participants showed significantly higher emotion recognition accuracy and social perception from the Voice + Gesture condition than the Gesture-only condition. The confusion matrix showed that happiness and sadness had the highest emotion recognition accuracy, which aligns with previous research. Interestingly, participants felt more discomfort and anthropomorphism in male voices compared to female voices. Male participants were more likely to feel uncomfortable when interacting with Pepper. In contrast, female participants were more likely to feel warm. However, the gender of the robot voice or the gender of the participant did not affect the accuracy of emotion recognition. Results are discussed with social robot design guidelines for emotional cues and future research directions.
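The per-emotion recognition accuracies reported above come straight off the diagonal of a confusion matrix. A minimal sketch of that computation, with invented trial counts rather than the study's data:

```python
# Illustrative sketch: per-emotion recognition accuracy from paired
# (true, predicted) labels, i.e., the normalized confusion-matrix diagonal.
# Emotion labels follow the abstract; the trial data are made up.
from collections import Counter

def per_class_accuracy(true_labels, predicted_labels):
    """Fraction of trials of each true emotion that were recognized correctly."""
    totals, correct = Counter(), Counter()
    for t, p in zip(true_labels, predicted_labels):
        totals[t] += 1
        if t == p:
            correct[t] += 1
    return {label: correct[label] / totals[label] for label in totals}

true = ["happiness"] * 4 + ["sadness"] * 4 + ["anger"] * 4
pred = (["happiness"] * 4
        + ["sadness", "sadness", "sadness", "anger"]
        + ["anger", "anger", "sadness", "happiness"])
acc = per_class_accuracy(true, pred)
print(acc)  # → {'happiness': 1.0, 'sadness': 0.75, 'anger': 0.5}
```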
- The Effects of Emotions on Trust in Human-Computer Interaction: A Survey and Prospect
  Jeon, Myounghoon (Taylor & Francis, 2023-09-29)
  With the embodied interaction paradigm, research on human emotions has rapidly increased. In parallel, the advent of artificial intelligence and automated technologies has spurred research on trust towards interactive systems. However, little research has directly investigated the effects of emotions on trust in the context of technology use. The present paper surveyed empirical studies using the PRISMA framework. After briefly introducing emotional effects on cognitive processes, twenty-nine studies were systematically analyzed. In many papers, positive emotions or empathically congruent systems led to higher trust. Some studies indicated that emotions can be a mediator between different factors and trust, whereas other studies showed only partial effects depending on different users or situations. Note that some research showed null effects or even negative effects (backfire effects) because emotional systems can be perceived as sarcastic or uncanny. In addition to the pervasive mood congruent effect and emotional contagion, various psychological mechanisms and theories were identified, such as entitativity, cognitive appraisal, and the affect infusion model. Considerations for future design and research are discussed with the results. This survey paper is expected to deepen the theoretical aspects of emotional effects on trust towards diverse technologies (robots, agents, or other interactive systems) and provide practical design directions.
- Effects of Language on Angry Drivers' Situation Awareness, Driving Performance, and Subjective Perception in Level 3 Automated Vehicles
  Muhundan, Sushmethaa; Jeon, Myounghoon (Taylor & Francis, 2023-07-18)
  Research shows that anger has a negative impact on cognition due to the rumination effect, and in the context of driving, anger negatively impacts situation awareness, driving performance, and road safety. In-vehicle agents are capable of mitigating the effects of anger and subsequent effects on driving behavior. Language is another important aspect that influences information processing and human behavior during social interactions. This study aimed to explore the effects of the language of in-vehicle agents on angry drivers' situation awareness, driving performance, and subjective perception by conducting a within-subject driving simulator study. Twenty-four young drivers drove three different laps in a level 3 automated vehicle with a native-language-speaking agent (Hindi or Chinese), a second-language-speaking agent (English), and no agent. The results of this study are indicative of the importance of native language processing in the context of driving. The use of the participants' native language resulted in improved driving performance and heightened situation awareness. The participants preferred the native language agent over the other conditions and also expressed the need to control the state of the in-vehicle agent. The study results and discussions have theoretical and practical design implications and are expected to help foster future work in this domain.
- The Effects of Robot Voices and Appearances on Users' Emotion Recognition and Subjective Perception
  Ko, Sangjin; Barnes, Jaclyn; Dong, Jiayuan; Park, Chunghyuk; Howard, Ayanna; Jeon, Myounghoon (World Scientific, 2023-02-22)
  As the influence of social robots in people's daily lives grows, research on understanding people's perception of robots including sociability, trust, acceptance, and preference becomes more pervasive. Research has considered visual, vocal, or tactile cues to express robots' emotions, whereas little research has provided a holistic view in examining the interactions among different factors influencing emotion perception. We investigated multiple facets of user perception on robots during a conversational task by varying the robots' voice types, appearances, and emotions. In our experiment, 20 participants interacted with two robots having four different voice types. While participants were reading fairy tales to the robot, the robot gave vocal feedback with seven emotions and the participants evaluated the robot's profiles through post surveys. The results indicate that (1) the accuracy of emotion perception differed depending on presented emotions, (2) a regular human voice showed higher user preferences and naturalness, but (3) a characterized voice was more appropriate for expressing emotions with significantly higher accuracy in emotion perception, and (4) participants showed significantly higher emotion recognition accuracy with the animal robot than the humanoid robot. A follow-up study (N=10) with voice-only conditions confirmed the importance of embodiment. The results from this study could provide the guidelines needed to design social robots that consider emotional aspects in conversations between robots and users.
- The Effects of System Transparency and Reliability on Drivers' Perception and Performance Towards Intelligent Agents in Level 3 Automated Vehicles
  Zang, Jing (Virginia Tech, 2023-07-05)
  In the context of automated vehicles, transparency of in-vehicle intelligent agents (IVIAs) is an important contributor to drivers' perception, situation awareness (SA), and driving performance. However, the effects of agent transparency on driver performance when the agent is unreliable have not been fully examined yet. The experiments in this thesis focused on different aspects of IVIA transparency, such as interaction modes and information levels, and explored their impact on drivers considering different system reliability. In Experiment 1, a 2 x 2 mixed factorial design was used, with transparency (Push: proactive vs. Pull: on-demand) as a within-subjects variable and reliability (high vs. low) as a between-subjects variable. In a driving simulator, twenty-seven young drivers drove with two types of in-vehicle agents during Level 3 automated driving. Results suggested that participants generally preferred the Push-type agent, as it conveyed a sense of intelligence and competence. The high-reliability agent was associated with higher situation awareness and less workload, compared to the low-reliability agent. Although Experiment 1 explored the effects of transparency by changing the interaction mode and the accuracy of the information, a theoretical framework was not well outlined regarding how much information should be conveyed and how unreliable information influenced drivers. Thus, Experiment 2 further studied transparency regarding information level, and the impact of reliability on its effect. A 3 x 2 mixed factorial design was used, with transparency (T1, T2, T3) as a between-subjects variable and reliability (high vs. low) as a within-subjects variable. Fifty-three participants were recruited. Results suggested that transparency influenced drivers' takeover time, lane keeping, and jerk. The high-reliability agent was associated with higher perceived system accuracy and response speed, and longer takeover time, than the low-reliability agent. Participants in T2 transparency showed higher cognitive trust, lower workload, and higher situation awareness only when system reliability was high. The results of this study may inform the ongoing design and advancement of intelligent agents in automated vehicles.
- The Effects of Transparency and Reliability of In-Vehicle Intelligent Agents on Driver Perception, Takeover Performance, Workload and Situation Awareness in Conditionally Automated Vehicles
  Zang, Jing; Jeon, Myounghoon (MDPI, 2022-09-14)
  In the context of automated vehicles, transparency of in-vehicle intelligent agents (IVIAs) is an important contributor to driver perception, situation awareness (SA), and driving performance. However, the effects of agent transparency on driver performance when the agent is unreliable have not been fully examined yet. This paper examined how transparency and reliability of the IVIAs affect drivers' perception of the agent, takeover performance, workload, and SA. A 2 × 2 mixed factorial design was used in this study, with transparency (Push: proactive vs. Pull: on-demand) as a within-subjects variable and reliability (high vs. low) as a between-subjects variable. In a driving simulator, 27 young drivers drove with two types of in-vehicle agents during conditionally automated driving. Results suggest that transparency influenced participants' perception of the agent and perceived workload. The high-reliability agent was associated with higher situation awareness and less effort than the low-reliability agent. There was an interaction effect between transparency and reliability on takeover performance. These findings could have important implications for the continued design and development of IVIAs in automated vehicle systems.
- Embodied Data Exploration in Immersive Environments: Application in Geophysical Data Analysis
  Sardana, Disha (Virginia Tech, 2023-06-05)
  Immersive analytics is an emerging field of data exploration and analysis in immersive environments. It is an active research area that explores human-centric approaches to data exploration and analysis based on the spatial arrangement and visualization of data elements in immersive 3D environments. The availability of immersive extended reality systems has increased tremendously in recent years, but they are still not as widely used as conventional 2D displays. In this dissertation, we described an immersive analysis system for spatiotemporal data, performed several user studies to measure user performance in the developed system, and laid out design guidelines for an immersive analytics environment. In our first study, we compared the performance of users based on specific visual analytics tasks in an immersive environment and on a conventional 2D display. The approach was realized based on the coordinated multiple-views paradigm. We also designed an embodied interaction for the exploration of spatial time series data. The findings from the first user study showed that the developed system is more efficient in a real immersive environment than on a conventional 2D display. One of the important challenges we identified while designing an immersive analytics environment was finding the optimal placement and identification of various visual elements. In our second study, we explored the iterative design of the placement of visual elements and interaction with them based on frames of reference. Our iterative designs explored the impact of the visualization scale for three frames of reference and used the collected user feedback to compare the advantages and limitations of these three frames of reference. In our third study, we described an experiment that quantitatively and qualitatively investigated the use of sonification, i.e., conveying information through nonspeech audio, in an immersive environment that utilized empirical datasets obtained from a multi-dimensional geophysical system. We discovered that using event-based sonification in addition to the visual channel was extremely effective in identifying patterns and relationships in large, complex datasets. Our findings also imply that the inclusion of audio in an immersive analytics system may increase users' level of confidence when performing analytics tasks like pattern recognition. We outlined the sound design principles for an immersive analytics environment using real-world geospace science datasets and assessed the benefits and drawbacks of using sonification in an immersive analytics setting.
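Event-based sonification, as mentioned in the abstract above, triggers a sound only when something noteworthy happens in the data rather than mapping every sample continuously. A minimal sketch under stated assumptions: the threshold, the frequency range, and the signal values are all illustrative inventions, not the study's design.

```python
# Hypothetical sketch of event-based sonification: trigger a tone only when
# the signal crosses a threshold upward, mapping event magnitude to pitch.
# Threshold, frequency range, and signal values are illustrative assumptions.

def events_to_pitches(samples, threshold, f_min=220.0, f_max=880.0):
    """Return (index, frequency_hz) for each upward threshold crossing."""
    peak = max(abs(s) for s in samples) or 1.0  # avoid division by zero
    events = []
    for i in range(1, len(samples)):
        if samples[i - 1] < threshold <= samples[i]:  # upward crossing
            # scale crossing magnitude into the audible range [f_min, f_max]
            freq = f_min + (f_max - f_min) * (abs(samples[i]) / peak)
            events.append((i, round(freq, 1)))
    return events

signal = [0.1, 0.2, 0.9, 0.3, 0.1, 0.7, 1.8, 0.2]  # e.g., a geophysical index
print(events_to_pitches(signal, threshold=0.5))  # → [(2, 550.0), (5, 476.7)]
```

Each returned pair could then drive a synthesizer or audio library; the key design point is that silence between events keeps the auditory channel free for the patterns the listener is meant to notice.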