Authors: Sterkenburg, Jason; Landry, Steven; FakhrHosseini, Maryam; Jeon, Myounghoon
Date accessioned: 2024-01-22
Date available: 2024-01-22
Date issued: 2023-09-14
Title: In-vehicle Air Gesture Design: Impacts of Display Modality and Control Orientation
Type: Article - Refereed
Journal: Journal on Multimodal User Interfaces
Volume: 17
Issue: 4
Pages: 215-230 (16 pages)
ISSN: 1783-7677
eISSN: 1783-8738
DOI: https://doi.org/10.1007/s12193-023-00415-8
Handle: https://hdl.handle.net/10919/117560
Format: application/pdf
Language: en
Rights: In Copyright
Keywords: Air gesture system; Direct manipulation; Driver workload; Dual task paradigm
ORCID: Jeon, Myounghoon [0000-0003-2908-671X]

Abstract: The number of crashes caused by visual distraction highlights the need for non-visual displays in in-vehicle information systems (IVIS). Audio-supported air gesture controls can address this problem. Twenty-four young drivers participated in our experiment using a driving simulator with six gesture prototypes: 3 modality types (visual-only, visual/auditory, and auditory-only) × 2 control orientation types (horizontal and vertical). Measures included lane departures, eye glance behavior, secondary task performance, and driver workload. Results showed that the auditory-only displays yielded significantly fewer lane departures and lower perceived workload. A tradeoff between eyes-on-road time and secondary-task completion time was observed for the auditory-only display, making it the safest but slowest of the prototypes. Vertical controls (direct manipulation) showed significantly lower workload than horizontal controls (mouse metaphor) but did not differ in performance measures. Results are discussed in the context of multiple resource theory, along with design guidelines for future implementation.