Advancing In-vehicle Gesture Interactions with Adaptive Hand-Recognition and Auditory Displays
dc.contributor.author | Tabbarah, M. | en |
dc.contributor.author | Cao, Y. | en |
dc.contributor.author | Liu, Y. | en |
dc.contributor.author | Jeon, Myounghoon | en |
dc.date.accessioned | 2025-01-10T14:27:52Z | en |
dc.date.available | 2025-01-10T14:27:52Z | en |
dc.date.issued | 2021-09-09 | en |
dc.description.abstract | Competition for visual attention in vehicles has increased with the integration of touch-based interfaces, which has led to increased crash risk. To mitigate this visual distraction, we designed an in-vehicle gesture-based menu system with different auditory feedback types and hand-recognition systems. We are conducting an experiment in a driving simulator in which participants perform a secondary task of selecting a menu item. Three auditory feedback types are tested in addition to a no-audio baseline: auditory icons, earcons, and spearcons. For each type of auditory display, two hand-recognition systems are tested: fixed and adaptive. We expect this approach to reduce the driver's secondary-task workload while minimizing off-road glances for safety. Our experiment would contribute to the existing literature on multimodal signal processing, confirming Multiple Resource Theory, and would also provide practical design guidelines for auditory feedback in gesture-based in-vehicle interactions. | en |
dc.description.notes | Peer reviewed: Yes (abstract only) | en |
dc.description.version | Published version | en |
dc.format.extent | Pages 204-206 | en |
dc.format.mimetype | application/pdf | en |
dc.identifier.doi | https://doi.org/10.1145/3473682.3481870 | en |
dc.identifier.isbn | 9781450386418 | en |
dc.identifier.orcid | Jeon, Myounghoon [0000-0003-2908-671X] | en |
dc.identifier.uri | https://hdl.handle.net/10919/124111 | en |
dc.language.iso | en | en |
dc.publisher | ACM | en |
dc.relation.uri | http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000767968200047&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=930d57c9ac61a043676db62af60056c1 | en |
dc.relation.uri | https://doi.org/10.1145/3473682.3481870 | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | HCI | en |
dc.subject | Gesture Interaction | en |
dc.subject | In-Vehicle | en |
dc.subject | Auditory Display | en |
dc.subject | Adaptive Recognition | en |
dc.title | Advancing In-vehicle Gesture Interactions with Adaptive Hand-Recognition and Auditory Displays | en |
dc.title.serial | Adjunct Proceedings - 13th International ACM Conference on Automotive User Interfaces and Interactive Vehicular Applications, AutomotiveUI 2021 | en |
dc.type | Conference proceeding | en |
dc.type.dcmitype | Text | en |
dc.type.other | Proceedings Paper | en |
dc.type.other | Meeting | en |
dc.type.other | Book | en |
pubs.finish-date | 2021-09-14 | en |
pubs.organisational-group | Virginia Tech | en |
pubs.organisational-group | Virginia Tech/Engineering | en |
pubs.organisational-group | Virginia Tech/Engineering/Industrial and Systems Engineering | en |
pubs.organisational-group | Virginia Tech/All T&R Faculty | en |
pubs.organisational-group | Virginia Tech/Engineering/COE T&R Faculty | en |
pubs.start-date | 2021-09-09 | en |