The Influence of AR Head-Mounted Displays on Spatial Perception and Worker Response in Construction Training
Abstract
Construction job sites present significant risks that extend beyond physical hazards to include psychophysiological factors that shape workers' perceptions, attention, and decision-making. The mobility and influence of Augmented Reality Head-Mounted Displays (AR-HMDs) directly interact with these factors by altering how workers perceive their environment, process information, and manage sensory input. While AR-HMDs offer new opportunities for immersive, adaptive training in real-world construction scenarios, they also introduce cognitive and sensory risks that have been insufficiently explored in construction research. Now that AR-HMDs are deployed on construction sites, it is crucial to understand not only their technical performance but also their human-centered impacts on safety, perception, and decision-making. Existing studies have largely overlooked the psychophysiological constraints that influence safety outcomes, yet understanding these responses is essential for both immediate task performance and long-term learning and risk-taking behaviors. To address these gaps, we develop theoretical constructs and framework variables to evaluate and predict spatiotemporal psychophysiological responses associated with AR-HMD use in construction training, specifically asking: What are the risks associated with AR-HMDs' spatial influence on perception in construction environments?
The framework is developed and tested through four objectives, progressing from a detailed scoping literature review and evaluation of AR-HMD impacts on humans to psychophysiological prediction and median severity-of-impact classification using EEG location matrices. Objective 1 develops a conceptual model that classifies AR-HMD and Human-Computer Interaction (HCI) risks and standardizes the domain language for evaluating these technologies, highlighting underexplored cognitive, sensory, and physical human-factor risks. Objective 1 also identifies and classifies spatial perception variables; develops a mental model of how workers perceive and spatially analyze immersive AR-HMD environments; examines embodiment, presence, and spatial presence; and finalizes the theoretical framework for empirical testing. Objective 2 tests the framework in a controlled environment using a full-scale passive and active haptic frame. A within-subjects design captures both psychological (survey-based) and physiological (EEG, heart rate) responses, which are analyzed using Power Spectral Density (PSD), Independent Component Analysis (ICA), and regression modeling to identify hemispheric differences and misalignments between perceived safety and actual psychophysiological responses. Objective 3 advances the framework into predictive modeling: within the same haptic-frame environment, deep learning models, including 2D and 3D CNN-LSTM architectures, are applied to predict temporal and cognitive-state changes from 4D EEG input (frequency, amplitude, time, channels), extending the framework from measurement to prediction. Together, these three objectives show how conceptual modeling, spatial perception analysis, experimental validation, and predictive analytics can be systematically connected to evaluate AR-HMD situational risks.
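As a rough illustration of the Objective 2 analysis step, the sketch below computes band-limited PSD power per EEG channel using Welch's method, the standard way such band power is estimated before comparing hemispheres or conditions. The sampling rate, band edges, and synthetic signals are illustrative assumptions, not the study's actual recording parameters.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Mean Welch-PSD power within a frequency band, per channel.

    eeg: array of shape (channels, samples); fs: sampling rate in Hz;
    band: (low_hz, high_hz) tuple.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=fs * 2)  # 2-second windows
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[:, mask].mean(axis=1)  # one power value per channel

# Synthetic two-channel "EEG": 10 Hz (alpha-range) on channel 0,
# 20 Hz (beta-range) on channel 1 -- assumed values for illustration.
fs = 256
t = np.arange(fs * 10) / fs
eeg = np.vstack([np.sin(2 * np.pi * 10 * t),
                 np.sin(2 * np.pi * 20 * t)])
alpha = band_power(eeg, fs, (8, 13))
beta = band_power(eeg, fs, (13, 30))
# alpha power dominates on channel 0, beta power on channel 1
```

In a hemispheric-asymmetry analysis of the kind described, the same per-channel band powers would simply be contrasted between left- and right-hemisphere electrode pairs.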
The outcomes reveal the extent of spatial and behavioral influences across key variables, supporting the development of likelihood and severity matrices for academia and industry, as outlined in Objective 4.
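Likelihood and severity matrices of this kind follow a standard risk-scoring scheme: each identified AR-HMD risk is placed on a likelihood scale and a severity scale, and their product determines the risk band. The sketch below is a minimal, hypothetical 5x5 version; the scales, thresholds, and band labels are assumptions for illustration, not the dissertation's actual bins.

```python
def risk_level(likelihood: int, severity: int) -> str:
    """Classify a risk on an assumed 5x5 likelihood-severity matrix.

    likelihood, severity: integer ratings from 1 (lowest) to 5 (highest).
    Thresholds below are illustrative, not the study's calibrated bins.
    """
    score = likelihood * severity
    if score >= 15:
        return "high"
    if score >= 6:
        return "medium"
    return "low"

# e.g. a frequent (4) but minor (2) sensory mismatch scores 8
example = risk_level(4, 2)
```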