Authors: Lu, Feiyu; Xu, Yan
Date issued: 2022-04-29
Date available: 2022-10-19
Title: Exploring Spatial UI Transition Mechanisms with Head-Worn Augmented Reality
Type: Article - Refereed
Format: application/pdf
Language: en
Rights: Creative Commons Attribution 4.0 International
Rights holder: The author(s)
Handle: http://hdl.handle.net/10919/112226
DOI: https://doi.org/10.1145/3491102.3517723

Abstract: Imagine a future in which people comfortably wear augmented reality (AR) displays all day: how do we design interfaces that adapt to contextual changes as people move around? In current operating systems, the majority of AR content defaults to staying at a fixed location until manually moved by the user. This approach, however, places the burden of user interface (UI) transition solely on users. In this paper, we first ran a bodystorming design workshop to capture the limitations of existing manual UI transition approaches in spatially diverse tasks. We then addressed these limitations by designing and evaluating three UI transition mechanisms with different levels of automation and controllability (low-effort manual, semi-automated, and fully-automated). Furthermore, we simulated imperfect contextual awareness by introducing prediction errors with different costs to correct them. Our results provide valuable lessons about the trade-offs between UI automation levels, controllability, and user agency, as well as the impact of prediction errors.