Author: Qin, Yimin
Date accessioned: 2023-06-15
Date available: 2023-06-15
Date issued: 2023-06-14
Identifier: vt_gsexam:37500
URI: http://hdl.handle.net/10919/115428

Abstract: Augmented Reality (AR) head-mounted displays (HMDs) provide users with an immersive virtual experience situated in the real world. The portability of this technology affords construction workers information display options that are not otherwise possible. Information delivered via an interactive user interface offers an innovative way to present complex building instructions, one that is more intuitive and accessible than traditional paper documentation. However, challenges still hinder the practical use of this technology on the construction jobsite. As a technical restriction, current AR HMD products have a limited field of view (FOV) compared to the human visual range, which raises uncertainty about how the obstructed display view affects construction workers' perception of hazards in their surroundings. Similarly, the information displayed to workers requires rigorous testing and evaluation to ensure that it does not lead to information overload. Therefore, it is essential to comprehensively evaluate the impacts of AR HMD use from the perspectives of both task performance and cognitive performance. This dissertation aims to bridge the gap in understanding the cognitive impacts of using AR HMDs in construction assembly tasks. Specifically, it focuses on answering two questions: (1) How are task performance and cognitive skills affected by AR displays under complex working conditions? (2) How can moment-to-moment changes in mental workload be captured and evaluated during construction assembly tasks? To answer these questions, this dissertation presents two experiments. The first study tests two AR displays (conformal and tag-along) and paper instructions under complex working conditions involving different framing scales and interference settings. Subjective responses are collected and analyzed to evaluate overall mental workload and situation awareness. The second study explores an electroencephalogram (EEG)-based approach for moment-to-moment capture and evaluation of mental workload. It uncovers cognitive change in the time domain and opens room for further quantitative analysis of mental workload. Specifically, two mental workload prediction frameworks are proposed: (1) a Long Short-Term Memory (LSTM) network for forecasting EEG signals, and (2) a one-dimensional Convolutional Neural Network (1D CNN)-LSTM for classifying task conditions and mental workload levels. Testing shows both approaches to be effective and reliable for predicting and recognizing subjects' mental workload during assembly. In brief, this research contributes an assessment of AR HMD use in construction assembly to the existing body of knowledge, covering task performance evaluation and both subjective and physiological measurements of cognitive skills.

Format: ETD
Language: en
Rights: In Copyright
Subjects: Augmented Reality (AR); Mental Workload; Head-Mounted Display (HMD); Construction Assembly; Electroencephalogram (EEG); Deep Learning
Title: Evaluating Mental Workload for AR Head-Mounted Display Use in Construction Assembly Tasks
Type: Dissertation
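To illustrate the kind of model the abstract refers to for classifying task conditions and mental workload levels, the following is a minimal PyTorch sketch of a 1D CNN-LSTM applied to windowed EEG. The channel count, sampling rate, window length, layer sizes, and three-class output are illustrative assumptions, not details taken from the dissertation.

```python
# Minimal 1D CNN-LSTM sketch for windowed EEG classification.
# All dimensions (14 channels, 128 Hz, 2 s windows, 3 workload classes)
# are assumptions for illustration, not the dissertation's configuration.
import torch
import torch.nn as nn

class CNNLSTMWorkloadClassifier(nn.Module):
    def __init__(self, n_channels=14, n_classes=3, hidden_size=64):
        super().__init__()
        # 1D convolutions extract local temporal features per channel window.
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(4),
        )
        # The LSTM models temporal dependence across the pooled feature sequence.
        self.lstm = nn.LSTM(input_size=64, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_classes)

    def forward(self, x):                 # x: (batch, channels, samples)
        feats = self.cnn(x)               # (batch, 64, reduced_samples)
        feats = feats.transpose(1, 2)     # (batch, reduced_samples, 64)
        _, (h_n, _) = self.lstm(feats)    # h_n: (1, batch, hidden_size)
        return self.head(h_n[-1])         # (batch, n_classes) class logits

# Example: a batch of 2-second windows of 14-channel EEG sampled at 128 Hz.
if __name__ == "__main__":
    model = CNNLSTMWorkloadClassifier()
    windows = torch.randn(8, 14, 256)     # 8 windows x 14 channels x 256 samples
    logits = model(windows)
    print(logits.shape)                   # torch.Size([8, 3])
```

In practice, the class logits would be trained against labeled task conditions or workload levels with a cross-entropy loss; the forecasting framework mentioned in the abstract would instead use a sequence model to predict future EEG samples rather than class labels.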