Browsing by Author "Tabbarah, Moustafa"
- Novel In-Vehicle Gesture Interactions: Design and Evaluation of Auditory Displays and Menu Generation Interfaces. Tabbarah, Moustafa (Virginia Tech, 2023-01-30). Driver distraction is a major contributor to car crashes, and visual distraction caused by using in-vehicle infotainment systems (IVIS) degrades driving performance and increases crash risk. Air gesture interfaces were developed to mitigate driver distraction, and using auditory displays showed a decrease in off-road glances and an improved perceived workload. However, the design of auditory displays was not fully investigated. This thesis presents directional research in the design of auditory displays for air-gesture IVIS through two dual-task experiments combining simulated driving with air-gesture menu navigation. Experiment 1, with 32 participants, employed a 2x4 mixed-model design and explored the effects of four auditory display conditions (auditory icon, earcon, spearcon, and no-sound) and two menu-generation interfaces (fixed and adaptive) on driving performance, eye glance behavior, secondary task performance, and subjective perception. Each auditory display (within-subjects) was tested with both a fixed and an adaptive menu-generation interface (between-subjects). Results from Experiment 1 demonstrated that spearcons provided the least visual distraction, the least workload, and the best system usability, and were favored by participants; and that fixed menu generation outperformed adaptive menu generation in driving safety and secondary task performance. Experiment 2, with 24 participants, used the best interface to emerge from Experiment 1 to further explore the auditory display with the most potential: spearcons. 70% spearcon and 40% spearcon conditions were compared to text-to-speech (TTS) and no-audio conditions. Results from Experiment 2 showed that 70% spearcons induced less visual distraction than 40% spearcons, and that 70% spearcons resulted in the most accurate but slowest secondary task selections. Experimental results are discussed in the context of multiple resource theory and the working memory model, design guidelines are proposed, and future work is discussed.
- Novel In-Vehicle Gesture Interactions: Design and Evaluation of Auditory Displays and Menu Generation Interfaces. Tabbarah, Moustafa; Cao, Yusheng; Abu Shamat, Ahmad; Fang, Ziming; Li, Lingyu; Jeon, Myounghoon (ACM, 2023-09-18). Using in-vehicle infotainment systems degrades driving performance and increases crash risk. To address this, we developed air gesture interfaces using various auditory displays. Thirty-two participants drove a simulator while performing air-gesture menu navigation tasks. A 4x2 mixed-model design was used to explore the effects of auditory displays as a within-subjects variable (earcons, auditory icons, spearcons, and no-sound) and menu-generation interfaces as a between-subjects variable (fixed and adaptive) on driving performance, secondary task performance, eye glance behavior, and user experience. The adaptive condition centered the menu around the user's hand position at the moment of activation, whereas the fixed condition always located the menu at the same position. Results demonstrated that spearcons provided the least visual distraction, the least workload, and the best system usability, and were favored by participants; and that fixed menu generation outperformed adaptive menu generation in driving safety and secondary task performance. Findings will inform design guidelines for in-vehicle air-gesture interaction systems.
- Sonically-enhanced in-vehicle air gesture interactions: evaluation of different spearcon compression rates. Tabbarah, Moustafa; Cao, Yusheng; Fang, Ziming; Li, Lingyu; Jeon, Myounghoon (Springer, 2024-09). Driver distraction is a major contributor to road vehicle crashes, and visual distraction caused by using in-vehicle infotainment systems (IVIS) degrades driving performance and increases crash risk. Air gesture interfaces have been developed to mitigate driver distraction, and using auditory displays showed a decrease in off-road glances and an improved perceived workload. However, the potential of auditory displays was not fully investigated. The present paper presents directional research in the design of auditory displays for air-gesture IVIS through a dual-task experiment combining simulated driving with air-gesture menu navigation. Twenty-four participants used the in-vehicle air gesture interfaces while driving with four auditory display conditions (text-to-speech, 70% compressed spearcons, 40% compressed spearcons, and no sound). The results showed that 70% spearcons reduced visual distraction and increased menu navigation accuracy, but at the cost of increased navigation time. The 70% spearcon condition was most preferred by the participants. Neither driving performance nor workload showed any difference among the conditions. Implications are discussed along with design guidelines for future implementations.