Browsing by Author "Nadri, Chihab"
Now showing 1 - 15 of 15
- Development and Evaluation of an Assistive In-Vehicle System for Responding to Anxiety in Smart Vehicles
  Nadri, Chihab (Virginia Tech, 2023-10-18)
  The integration of automated vehicle technology into our transportation infrastructure is ongoing, yet the precise timeline for the introduction of fully automated vehicles remains ambiguous. This technological transition necessitates the creation of in-vehicle displays tailored to emergent user needs and concerns. Notably, driving-induced anxiety, already a concern, is projected to assume greater significance in this context, although it remains inadequately researched. This dissertation sought to delve into the phenomenon of anxiety in driving, assess its implications in future transportation modalities, elucidate design considerations for distinct demographics such as the young and the elderly, and design and evaluate an affective in-vehicle system to alleviate anxiety in automated driving through four studies. The first study involved two workshops with automotive experts, who underscored anxiety as pivotal to sustaining trust and system acceptance. The second study was a qualitative focus group analysis incorporating both young and older drivers, aiming to distill anxiety-inducing scenarios in automated driving and pinpoint potential intervention strategies and feedback modalities. This was followed by two driving simulator evaluations. The third study was observational, seeking to discern correlations among personality attributes, anxiety, and trust in automated driving systems. The fourth study employed cognitive reappraisal for anxiety reduction in automated driving. Analysis indicated the efficacy of the empathic interface leveraging cognitive reappraisal as an anxiety amelioration tool; particularly in the self-efficacy reappraisal condition, the intervention influenced trust, user experience, and anxiety markers.
  Cumulatively, this dissertation provides key design guidelines for anxiety mitigation in automated driving and highlights design elements pivotal to augmenting user experiences in scenarios where drivers relinquish vehicular control.
- Emotion GaRage Vol. III: A Workshop on Affective In-Vehicle Display Applications
  Nadri, Chihab; Dong, Jiayuan; Li, Jingyi; Alvarez, Ignacio; Jeon, Myounghoon (ACM, 2022)
  Empathic in-vehicle interfaces can address driver affect and mitigate decreases in driving performance and behavior that are associated with emotional states. Empathic vehicles can detect and employ a variety of intervention modalities to change user affect and improve user experience. Challenges remain in the implementation of such strategies, as a broader established view of practical intervention modalities and strategies is still absent. Therefore, we propose a workshop that aims to bring together researchers and practitioners interested in affective interfaces and in-vehicle technologies as a forum for the development of displays and alternatives suitable to various use case situations in current and future vehicle states. During the workshop, we will focus on a common set of use cases and generate approaches that can suit different user groups. By the end of this workshop, researchers will create a design flowchart for in-vehicle affective display designers when creating displays for an empathic vehicle.
- Emotion GaRage Vol. IV: Creating Empathic In-Vehicle Interfaces with Generative AIs for Automated Vehicle Contexts
  Choe, Mungyeong; Bosch, Esther; Dong, Jiayuan; Alvarez, Ignacio; Oehl, Michael; Jallais, Christophe; Alsaid, Areen; Nadri, Chihab; Jeon, Myounghoon (ACM, 2023-09-18)
  This workshop aims to design advanced empathic user interfaces for in-vehicle displays, particularly for high-level automated vehicles (SAE level 3 or higher). Incorporating model-based approaches for understanding human emotion regulation, it seeks to enhance the user-vehicle interaction. A unique aspect of this workshop is the integration of generative artificial intelligence (AI) tools in the design process. The workshop will explore generative AI’s potential in crafting contextual responses and its impact on user experience and interface design. The agenda includes brainstorming on various driving scenarios, developing emotion-oriented intervention methods, and rapid prototyping with AI tools. The anticipated outcome includes practical prototypes of affective user interfaces and insights on the role of AI in designing human-machine interactions. Through this workshop, we hope to contribute to making automated driving more accessible and enjoyable.
- Empathic vehicle design: Use cases and design directions from two workshops
  Nadri, Chihab; Alvarez, Ignacio; Bosch, Esther; Oehl, Michael; Braun, Michael; Healey, Jennifer; Jallais, Christophe; Ju, Wendy; Li, Jingyi; Jeon, Myounghoon (ACM, 2022-04-27)
  Empathic vehicles are expected to improve user experience in automated vehicles and to help increase user acceptance of technology. However, little is known about potential real-world implementations and designs using empathic interfaces in vehicles with higher levels of automation. Given advances in affect detection and emotion mitigation, we conducted two workshops (N1 = 24, N2 = 22, Ntotal = 46) on the design of empathic vehicles and their potential utility in a variety of applications. This paper recapitulates key opportunities in the design and application of empathic interfaces in automated vehicles which emerged from the two workshops hosted at the ACM AutoUI conferences.
- Exploring User Needs and Design Requirements in Fully Automated Vehicles
  Lee, Seul Chan; Nadri, Chihab; Sanghavi, Harsh; Jeon, Myounghoon (ACM, 2020)
  An automated driving system is expected to pave the way for a new area of user experience in a vehicle. However, few studies have been conducted on understanding what people want to do and how the vehicle can support user needs, specifically in level 5, fully automated vehicles (FAVs). Therefore, the present study aimed at exploring user needs and design requirements for potential activities in FAVs. We conducted expert interviews and focus group interviews to collect data, and qualitative analysis was applied to elicit user needs and design requirements. Twelve user needs and general design considerations in four categories were found. The findings will contribute to enhancing user experience in future FAVs by considering the user needs and design requirements we elicited.
- From Visual Art to Music: Sonification Can Adapt to Painting Styles and Augment User Experience
  Nadri, Chihab; Anaya, Chairunisa; Yuan, Shan; Jeon, Myounghoon (Taylor & Francis, 2022-07-01)
  Advances in the fields of data processing and sonification have been applied to transcribe a variety of visual experiences into an auditory format. Although image sonification examples exist, the application of these principles to visual art has not been examined thoroughly. We sought to develop and evaluate a set of guidelines for the sonification of visual artworks. Through conducting expert interviews (N = 11), we created an initial sonification algorithm that accounts for art style, lightness, and color diversity to modulate the sonified output in terms of tempo and pitch. This algorithm was evaluated through user evaluations (N = 22). User study responses supported expert interview findings and the notion that sonification can be designed to match the experience of viewing an artwork, and showed interesting interaction effects among art styles, visual components, and musical parameters. We suggest the proposed guidelines can augment visitor experiences at art exhibits and provide the basis for further experimentation.
- Improving Safety At Highway-Rail Grade Crossings Using In-Vehicle Auditory Alerts
  Nadri, Chihab; Lautala, Pasi; Veinott, Elizabeth; Mamun, Tauseef Ibn; Dam, Abhraneil; Jeon, Myounghoon (ACM, 2023-09-18)
  Despite increased use of lights, gates, and other active warning devices, crashes still happen at Highway-Rail Grade Crossings (HRGCs). To improve safety at HRGCs, we designed an in-vehicle auditory alert (IVAA) and conducted a multi-site driving simulator study to evaluate the effect of the IVAA on driving behavior at HRGCs. The video shows results of a collaboration between Virginia Tech, Michigan Tech, and the Volpe National Transportation Center, which recruited a total of N = 72 younger drivers. Driving simulator testing showed that the IVAA improved driving behavior near HRGCs, particularly gaze behavior: drivers looked both ways at crossings more often when the IVAA was present. We expect to run additional tests to further improve the IVAA. Our study can contribute to research efforts targeting driving safety at HRGCs.
- Introduction of a computational modelling approach to auditory display research: Case studies using the QN-MHP framework
  Jeon, Myounghoon; Nadri, Chihab; Zhang, Yiqi (International Community for Auditory Display, 2021-06-28)
  For more than two decades, a myriad of design and research methods have been proposed in the ICAD community. Neurological methods have been presented since the inception of ICAD, and psychological human-subjects research has become a legitimate approach to auditory display design and evaluation. However, little research has been conducted on modelling approaches to formalize human behavior in response to auditory displays. To bridge this gap, the present paper introduces computational modelling in auditory displays using the Queuing Network-Model Human Processor (QN-MHP) framework. After delineating the advantages of computational modelling and the QN-MHP framework, the paper introduces four case studies, which modelled drivers’ behavior in response to in-vehicle auditory warnings, followed by the implications and future work. We hope that this paper can spark lively discussions on computational modelling in the ICAD community and thus, more researchers can benefit from using this method for future research.
- Investigating the effect of earcon and speech variables on hybrid auditory alerts at rail crossings
  Nadri, Chihab; Lee, Seulchan; Kekal, Siddhant; Li, Yinjia; Li, Xuan; Lautala, Pasi; Nelson, David; Jeon, Myounghoon (International Community for Auditory Display, 2021-06-26)
  Despite rail industry advances in reducing accidents at Highway Rail Grade Crossings (HRGCs), train-vehicle collisions continue to happen. The use of auditory displays has been suggested as a countermeasure to improve driver behavior at HRGCs, with prior research recommending the use of hybrid sound alerts consisting of earcons and speech messages. In this study, we sought to further investigate the effect of auditory variables in hybrid sound alerts. Nine participants were recruited and instructed to evaluate 18 variations of a hybrid In-Vehicle Auditory Alert (IVAA) along 11 subjective ratings. Results showed that earcon speed and pitch contour design can change user perception of the hybrid IVAA. Results further indicated the influence of speech gender and other semantic variables on user assessment of HRGC IVAAs. Findings of the current study can also inform and instruct the design of appropriate hybrid IVAAs for HRGCs.
- Neurodivergence in Sound: Sonification as a Tool for Mental Health Awareness
  Nadri, Chihab; Al Matar, Hamza; Morrison, Spencer; Tiemann, Allison; Song, Inuk; Lee, Tae Ho; Jeon, Myounghoon (International Community for Auditory Display, 2023-06)
  The need to build greater mental health awareness as an important factor in decreasing stigma surrounding individuals with neurodivergent conditions has led to the development of programs and activities that seek to increase mental health awareness. Using a sonification approach with neural activity can effectively convey an individual’s psychological and mental characteristics in a simple and intuitive manner. In this study, we developed a sonification algorithm that alters existing music clips according to fMRI data corresponding to the salience network activity from neurotypical and neurodivergent individuals with schizophrenia. We conducted an evaluation of these sonifications with 24 participants. Results indicate that participants were able to differentiate between sound clips stemming from different neurological conditions and that participants gained increased awareness of schizophrenia through this brief intervention. Findings indicate sonification could be an effective tool in raising mental health awareness and in relating neurodivergence to a neurotypical audience.
- "Play Your Anger": A Report on the Empathic In-vehicle Interface WorkshopDong, Jiayuan; Nadri, Chihab; Alvarez, Ignacio; Diels, Cyriel; Lee, Myeongkyu; Li, Jingyi; Liao, Pei Hsuan; Manger, Carina; Sadeghian, Shadan; Schuß, Martina; Walker, Bruce N.; Walker, Francesco; Wang, Yiyuan; Jeon, Myounghoon (ACM, 2023-09-18)Empathic in-vehicle interfaces are critical in improving user safety and experiences. There has been much research on how to estimate drivers’ affective states, whereas little research has investigated intervention methods that mitigate potential impacts from the driver’s affective states on their driving performance and user experiences. To enhance the development of in-vehicle interfaces considering emotional aspects, we have organized a workshop series to gather automotive user interface experts to discuss this topic at the International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutoUI). The present paper focuses particularly on the intervention methods created by the experts and proposes design recommendations for future empathic in-vehicle interfaces. We hope this work can spark lively discussions on the importance of drivers’ affective states in their user experience of automated vehicles and pose the right direction.
- Preliminary Guidelines on the Sonification of Visual Artworks: Linking Music, Sonification & Visual Arts
  Nadri, Chihab; Anaya, Chairunisa; Yuan, Shan; Jeon, Myounghoon (Department of Computer and Information Sciences, Northumbria University, 2019-06)
  Sonification and data processing algorithms have advanced over the years to reach practical applications in our everyday life. Similarly, image processing techniques have improved over time. While a number of image sonification methods have already been developed, few have delved into potential synergies through the combined use of multiple data and image processing techniques. Additionally, little has been done on the use of image sonification for artworks, as most research has been focused on the transcription of visual data for people with visual impairments. Our goal is to sonify paintings reflecting their art style and genre to improve the experience of both sighted and visually impaired individuals. To this end, we have designed initial sonifications for paintings of abstractionism and realism, and conducted interviews with visual and auditory experts to improve our mappings. We believe the recommendations and design directions we have received will help develop a multidimensional sonification algorithm that can better transcribe visual art into appropriate music.
- Robot-Theater Programs for Different Age Groups to Promote STEAM Education and Robotics Research
  Ko, Sangjin; Swaim, Haley; Sanghavi, Harsh; Dong, Jiayuan; Nadri, Chihab; Jeon, Myounghoon (ACM, 2020)
  Issues with learner engagement and interest in STEM and robotics fields may be resolved from an interdisciplinary education approach. We have developed a robot-theater framework using interactive robots as a way to integrate the arts into STEM and robotics learning. The present paper shows a breadth of our target populations, ranging from elementary school children to college students. Based on our experiences in conducting four different programs applied to different age groups, we discuss the characteristics, lessons, design considerations, and implications for future work.
- "Slow down. Rail crossing ahead. Look left and right at the crossing": In-vehicle auditory alerts improve driver behavior at rail crossingsNadri, Chihab; Kekal, Siddhant; Li, Yinjia; Li, Xuan; Lee, Seul Chan; Nelson, David; Lautala, Pasi; Jeon, Myounghoon (Elsevier, 2022-09-27)Even though the rail industry has made great strides in reducing accidents at crossings, train-vehicle collisions at Highway-Rail Grade Crossings (HRGCs) continue to be a major issue in the US and across the world. In this research, we conducted a driving simulator study (N = 35) to evaluate a hybrid in-vehicle auditory alert (IVAA), composed of both speech and non-speech components, that was selected after two rounds of subjective evaluation studies. Participants drove through a simulated scenario and reacted to HRGCs with and without the IVAA present and through different music conditions and crossing devices. Driver simulator testing results showed that the inclusion of the hybrid IVAA significantly improved driving behavior near HRGCs in terms of gaze behavior, braking reaction, and approach speed to the crossing. The driving simulator study also showed the effects of background music and warning device types on driving performance. The study contributes to the large-scale implementation of IVAAs at HRGCs, as well as the development of guidelines toward a more standardized approach for IVAAs at HRGCs.
- Sonification Use Cases in Highly Automated Vehicles: Designing and Evaluating Use Cases in Level 4 Automation
  Nadri, Chihab; Ko, Sangjin; Diggs, Colin; Winters, Michael; Vattakkandy, Sreehari; Jeon, Myounghoon (Taylor & Francis, 2024-06-17)
  The introduction of highly automated driving systems is expected to significantly change in-vehicle interactions, creating opportunities for the design of novel use cases and interactions for occupants. In this study, we sought to identify and extract these novel use cases and determine preliminary auditory display recommendations for these novel situations. We developed and generated use cases for level 4 automated vehicles through an expert workshop (N = 17) and online focus group interviews (N = 12). Most of the use cases we generated were then tested, apart from meditation, and user opinions were collected in a driving simulator study (N = 20). Results indicated participants were interested in functions that support their experience with both driving and non-driving related interactions in highly automated vehicles. Three categories of use cases for level 4 automated vehicles were developed: driving automation use cases, immersion use cases, and in-vehicle notification use cases. For the driving simulator study, we tested three display modalities for interaction with drivers: visual alert only, non-speech with visual, and speech with visual. In terms of situation awareness (SA), the non-speech with visual display was associated with significantly better SA than the speech with visual display for the use case involving a planned increase in automation level. This study will provide guidance on sonification design to advance user experiences in highly automated vehicles.
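The visual-art sonification entries above ("From Visual Art to Music" and "Preliminary Guidelines on the Sonification of Visual Artworks") describe an algorithm that uses art style, lightness, and color diversity to modulate tempo and pitch. The published mappings are not reproduced here; the following is a minimal sketch under assumed, illustrative mappings (brighter images raise pitch, more diverse palettes raise tempo, abstract works get a faster baseline), with the function name and constants invented for this example:

```python
def art_sonification_params(pixels, style="abstract"):
    """Map simple visual features of a painting to musical parameters.

    pixels: iterable of (r, g, b) tuples with components in 0-255.
    style:  coarse art-style label ("abstract" or "realism").
    Returns (tempo_bpm, base_midi_pitch).
    """
    pixels = list(pixels)
    # Mean lightness in [0, 1], using the HSL lightness formula per pixel.
    lightness = sum((max(p) + min(p)) / 2 / 255 for p in pixels) / len(pixels)
    # Color diversity: fraction of occupied coarse RGB bins (3 bits/channel).
    bins = {(p[0] >> 5, p[1] >> 5, p[2] >> 5) for p in pixels}
    diversity = len(bins) / min(len(pixels), 512)
    # Illustrative mappings (assumptions, not the published algorithm):
    # abstract works start from a faster baseline tempo; diversity adds
    # up to 40 BPM; lightness spans two octaves of base pitch (C3..C5).
    base_tempo = 100 if style == "abstract" else 80
    tempo_bpm = round(base_tempo + 40 * diversity)
    base_midi_pitch = round(48 + 24 * lightness)
    return tempo_bpm, base_midi_pitch
```

A synthesizer or MIDI backend would then render the sonified output from these two parameters; the feature extraction and the rendering stage are deliberately decoupled so either can be swapped out.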
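The "Neurodivergence in Sound" entry above describes altering existing music clips according to fMRI salience-network activity. The actual algorithm is not specified in the abstract; as one hedged illustration, a per-segment tempo modulation driven by a normalized activity signal could look like the sketch below, where the function name and the +/-20% tempo swing are assumptions made for this example:

```python
def modulate_clip(fmri_signal, base_tempo=120):
    """Derive per-segment tempi from a salience-network activity signal.

    fmri_signal: list of BOLD activity samples (arbitrary units).
    Returns one tempo (BPM) per sample, scaled around base_tempo.
    """
    lo, hi = min(fmri_signal), max(fmri_signal)
    span = (hi - lo) or 1.0  # avoid division by zero for a flat signal
    # Normalize each sample to [0, 1], then map to a +/-20% tempo swing
    # so higher salience-network activity yields faster playback.
    return [round(base_tempo * (0.8 + 0.4 * (x - lo) / span))
            for x in fmri_signal]
```

Applied to clips rendered from neurotypical versus schizophrenia-group signals, such a mapping would make differences in activity audible as differences in pacing, which is the kind of contrast the study's participants were asked to discriminate.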