Browsing by Author "Li, Lingyu"
Now showing 1 - 4 of 4
- Head-up Displays Improve Drivers' Performance and Subjective Perceptions with the In-Vehicle Gesture Interaction System
  Cao, Yusheng; Li, Lingyu; Yuan, Jiehao; Jeon, Myounghoon (Taylor & Francis, 2024-01-01)
  In-vehicle infotainment systems can cause various distractions, increasing the risk of car accidents. To address this problem, mid-air gesture systems have been introduced. This study investigated the potential of a novel interface that integrates a Head-Up Display (HUD) with auditory displays (spearcons: compressed speech) in a gesture-based menu navigation system to minimize visual distraction and improve driving and secondary-task performance. The experiment involved 24 participants who navigated through 12 menu items using mid-air gestures while driving on a simulated road under four conditions: HUD (with and without spearcons) and Head-Down Display (HDD) (with and without spearcons). Results showed that the HUD conditions significantly outperformed the HDD conditions in participants' level 1 situation awareness, perceived workload, menu navigation performance, and system usability. However, there were trade-offs in visual fixation duration on the menu and in lane deviation. These findings will guide future research in developing safer and more effective HUD-supported in-vehicle gesture interaction systems.
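  The interaction described above (mid-air gestures stepping through a menu, with each highlighted item echoed by a spearcon) can be pictured as a small state machine. The sketch below is illustrative only, not the authors' implementation: the gesture event names (NEXT, PREV, SELECT) and the play_spearcon callback are assumptions.

```python
# Illustrative sketch of gesture-driven menu navigation with audio feedback.
# NOT the study's implementation: gesture event names, menu contents, and the
# play_spearcon callback are hypothetical placeholders.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class GestureMenu:
    items: List[str]                      # e.g., 12 menu items, as in the study
    play_spearcon: Callable[[str], None]  # plays the compressed-speech cue
    index: int = 0

    def on_gesture(self, gesture: str) -> Optional[str]:
        """Handle a recognized mid-air gesture; return the item on SELECT."""
        if gesture == "SELECT":
            return self.items[self.index]
        if gesture == "NEXT":
            self.index = (self.index + 1) % len(self.items)
        elif gesture == "PREV":
            self.index = (self.index - 1) % len(self.items)
        # Echo the now-highlighted item so the driver can keep eyes on the road.
        self.play_spearcon(self.items[self.index])
        return None

# Usage with a stand-in "spearcon player" that just prints the cue:
menu = GestureMenu(items=["Radio", "Media", "Phone", "Climate"],
                   play_spearcon=lambda item: print(f"[spearcon] {item}"))
menu.on_gesture("NEXT")               # highlights "Media", plays its spearcon
selected = menu.on_gesture("SELECT")  # returns "Media"
```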
- Increasing Driving Safety and In-Vehicle Gesture-Based Menu Navigation Accuracy with a Heads-up Display
  Cao, Yusheng; Li, Lingyu; Yuan, Jiehao; Jeon, Myounghoon (ACM, 2022)
  More and more novel functions are being integrated into vehicle infotainment systems to allow individuals to perform secondary tasks with high accuracy and low accident risk. Mid-air gesture interactions are one of them. The current paper presents novel designs to address a specific issue with this method of interaction: visual distraction within the car. In this study, a heads-up display (HUD) will be integrated with a gesture-based menu navigation system to allow drivers to see menu selections without looking away from the road. An experiment with 24 participants will be conducted to investigate the potential of this system to improve drivers' overall safety and gesture interaction accuracy. Participants will provide objective performance data as well as subjective feedback on directions for future research and for improving the overall experience.
- Novel In-Vehicle Gesture Interactions: Design and Evaluation of Auditory Displays and Menu Generation Interfaces
  Tabbarah, Moustafa; Cao, Yusheng; Abu Shamat, Ahmad; Fang, Ziming; Li, Lingyu; Jeon, Myounghoon (ACM, 2023-09-18)
  Using in-vehicle infotainment systems degrades driving performance and increases crash risk. To address this, we developed air gesture interfaces using various auditory displays. Thirty-two participants drove a simulator while performing air-gesture menu navigation tasks. A 4 × 2 mixed design was used to explore the effects of auditory displays as a within-subjects variable (earcons, auditory icons, spearcons, and no sound) and menu-generation interfaces as a between-subjects variable (fixed and adaptive) on driving performance, secondary-task performance, eye glances, and user experience. The adaptive condition centered the menu around the user's hand position at the moment of activation, whereas the fixed condition always placed the menu at the same position. Results demonstrated that spearcons produced the least visual distraction and the lowest workload, yielded the best system usability, and were favored by participants, and that fixed menu generation outperformed adaptive menu generation in driving safety and secondary-task performance. Findings will inform design guidelines for in-vehicle air-gesture interaction systems.
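  The fixed-versus-adaptive contrast above is straightforward to illustrate: a fixed interface lays menu items out at predetermined screen coordinates, while the adaptive one recenters the same layout on wherever the hand was at the moment of activation. A minimal sketch follows; the vertical-strip layout, spacing, and coordinates are assumptions for illustration, not the study's actual geometry.

```python
# Minimal sketch of fixed vs. adaptive menu placement. The vertical-strip
# layout and all coordinates are illustrative assumptions, not the study's.
from typing import List, Tuple

ITEM_SPACING = 0.08  # assumed vertical spacing, in normalized screen units

def menu_positions(n_items: int,
                   center: Tuple[float, float]) -> List[Tuple[float, float]]:
    """Lay out n_items in a vertical strip centered on `center`."""
    cx, cy = center
    top = cy - ITEM_SPACING * (n_items - 1) / 2
    return [(cx, top + i * ITEM_SPACING) for i in range(n_items)]

# Fixed condition: the menu always appears at the same screen location.
fixed_layout = menu_positions(8, center=(0.5, 0.5))

# Adaptive condition: the menu is centered on the hand position sampled at
# the moment of activation (here a made-up sensor reading).
hand_at_activation = (0.62, 0.41)
adaptive_layout = menu_positions(8, center=hand_at_activation)
```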
- Sonically-enhanced in-vehicle air gesture interactions: evaluation of different spearcon compression rates
  Tabbarah, Moustafa; Cao, Yusheng; Fang, Ziming; Li, Lingyu; Jeon, Myounghoon (Springer, 2024-09)
  Driver distraction is a major contributor to road vehicle crashes, and visual distraction caused by using in-vehicle infotainment systems (IVIS) degrades driving performance and increases crash risk. Air gesture interfaces have been developed to mitigate driver distraction, and adding auditory displays has been shown to decrease off-road glances and improve perceived workload. However, the potential of auditory displays has not been fully investigated. The present paper investigates the design of auditory displays for air-gesture IVIS through a dual-task experiment combining simulated driving with air-gesture menu navigation. Twenty-four participants used the in-vehicle air gesture interfaces while driving under four auditory display conditions: text-to-speech, 70% compressed spearcons, 40% compressed spearcons, and no sound. Results showed that the 70% compressed spearcons reduced visual distraction and increased menu navigation accuracy, at the cost of longer navigation time, and were the condition participants preferred most. Driving performance and workload did not differ among the conditions. Implications are discussed along with design guidelines for future implementations.
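  A spearcon is made by time-compressing synthesized speech until it is too fast to parse as ordinary speech yet still recognizable as a cue. The sketch below shows one way to produce the two compression conditions with librosa's pitch-preserving time stretch; reading an "X% compression rate" as removing X% of the original duration is our assumption (the paper may define the rate differently), and the file paths are placeholders.

```python
# Sketch: generate a spearcon by time-compressing a TTS recording without
# shifting its pitch. Assumption: an "X% compression rate" removes X% of the
# original duration (the paper may define it differently). Paths are placeholders.
import librosa
import soundfile as sf

def make_spearcon(tts_wav: str, out_wav: str, compression: float) -> None:
    """Time-compress speech; compression=0.7 keeps 30% of the duration."""
    y, sr = librosa.load(tts_wav, sr=None)   # keep the native sample rate
    rate = 1.0 / (1.0 - compression)         # e.g., 0.7 -> ~3.33x playback speed
    y_fast = librosa.effects.time_stretch(y, rate=rate)
    sf.write(out_wav, y_fast, sr)

# The study's two spearcon conditions:
make_spearcon("menu_item_tts.wav", "spearcon_70.wav", compression=0.70)
make_spearcon("menu_item_tts.wav", "spearcon_40.wav", compression=0.40)
```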