Browsing by Author "Gabbard, Joseph L."
Now showing 1 - 20 of 65
- Advanced Driver Assistance Systems and Older Drivers – Mobility, Perception, and Safety. Liang, Dan (Virginia Tech, 2023-10-25). The aging process is often accompanied by declines in one or more physical, visual, and/or cognitive abilities that may impact driving safety. As older drivers become more aware of these functional deficits, they tend to engage in self-regulation practices, such as driving less and avoiding challenging driving situations; this tendency may gradually evolve into giving up driving altogether. Advanced Driver Assistance Systems (ADAS) hold promise for improving older drivers' safety on the road and maintaining their mobility by compensating for declines in visual, cognitive, and physical capabilities. However, how these technologies are perceived can influence whether the expected benefits are realized. The overarching goal of this research is to understand and enhance the safety and mobility of older adults by examining the impact of ADAS. The dissertation addresses this goal by investigating mobility, perception, safety measures, and safety. Study 1 applied structural equation modeling (SEM) to data from the Second Strategic Highway Research Program (SHRP 2) on driving habits with respect to age, gender, living status, health, and functional capabilities. The results show that older drivers' health is a reliable predictor of driving exposure, and that cognitive and physical declines predict both the intention to reduce exposure and actual driving in challenging situations. These findings highlight that the aging population requires support for mobility, and likely for road safety, given age-related impairments. Study 2 applied structural topic modeling to focus group data from older adults who drove ADAS-equipped vehicles for six weeks, revealing five key issues for older drivers (in order of prevalence): (1) safety, (2) confidence concerning ADAS, (3) ADAS functionality, (4) user interface/usability, and (5) non-ADAS-related features. The findings point to a need for holistic ADAS design that considers not only safety concerns but also user interfaces accommodating older adults' preferences and limitations, as well as in-depth training programs for operating ADAS given the technology's limitations. Study 3 applied correlation analysis and logistic regression to SHRP 2 data and found that longitudinal deceleration events greater than 0.60 g and lateral acceleration events greater than 0.40 g appear most associated with older adults' driving risk and predict near-future crash and near-crash (CNC) occurrence, identifying high-risk older drivers with acceptable accuracy. These findings indicate that high g-force events can be used to assess risk for older drivers, and that threshold selection should consider driver characteristics. Study 4 compared high g-force events between two naturalistic driving studies and found that drivers of ADAS-equipped vehicles had lower longitudinal deceleration rates, indicating a safety benefit of ADAS presence for older drivers. When lane keeping assist (LKA) was engaged, fewer high longitudinal deceleration events were observed than when LKA was not engaged, indicating that older drivers tended to brake less aggressively when using LKA. Over several weeks of exposure to ADAS-equipped vehicles, older drivers showed decreasing longitudinal deceleration events but increasing lateral acceleration events. In other words, ADAS has potential for positive safety-related impacts, but some design refinement to reduce lateral events may be necessary.
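The Study 3 approach described above, relating per-driver rates of high g-force events to crash and near-crash involvement via logistic regression, can be sketched as follows. This is a minimal illustration rather than the dissertation's analysis code: the CSV file and column names are hypothetical, and only the 0.60 g longitudinal and 0.40 g lateral thresholds come from the abstract.

```python
# Hedged sketch of a Study 3-style analysis: logistic regression relating
# per-driver rates of high g-force events to crash/near-crash (CNC) involvement.
# The CSV file and column names are hypothetical; thresholds follow the abstract
# (longitudinal deceleration > 0.60 g, lateral acceleration > 0.40 g).
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

drivers = pd.read_csv("older_driver_kinematics.csv")  # hypothetical per-driver summary

# Predictors: rates of events exceeding the thresholds (e.g., per 1,000 miles).
X = drivers[["long_decel_gt_060g_rate", "lat_accel_gt_040g_rate"]]
y = drivers["had_cnc"]  # 1 if the driver had a crash or near-crash, else 0

model = LogisticRegression()
accuracy = cross_val_score(model, X, y, cv=5, scoring="accuracy")
print(f"Mean cross-validated accuracy: {accuracy.mean():.2f}")
```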
- AR DriveSim: An Immersive Driving Simulator for Augmented Reality Head-Up Display Research. Gabbard, Joseph L.; Smith, Missie; Tanous, Kyle; Kim, Hyungil; Jonas, Bryan (Frontiers, 2019-10-23). Optical see-through automotive head-up displays (HUDs) are a form of augmented reality (AR) that is quickly gaining penetration into the consumer market. Despite increasing adoption, demand, and competition among manufacturers to deliver higher-quality HUDs with larger fields of view, little work has been done to understand how best to design and assess AR HUD user interfaces, and how to quantify their effects on driver behavior, performance, and ultimately safety. This paper reports on a novel, low-cost, immersive driving simulator built from custom hardware and software technologies specifically to examine basic and applied research questions related to AR HUD usage while driving. We describe our experiences developing the simulator hardware and software and detail a user study that examines driver performance, visual attention, and preferences using two AR navigation interfaces. Results suggest that conformal AR graphics may not be inherently better than other HUD interfaces. We include lessons learned from our simulator development experience and the results of the user study, and conclude with limitations and future work.
- Assessment of the Effectiveness of Emergency Lighting, Retroreflective Markings, and Paint Color on Policing and Law Enforcement Safety. Terry, Travis N. (Virginia Tech, 2020-07-01). This project is an in-depth investigation into the impact of lighting, marking, and paint schemes on the operational aspects of police vehicles. The investigation consisted of two phases comprising four experiments. An array of lighting and marking schemes was implemented on police vehicles in a variety of jurisdictions for evaluation. The study then investigated the change in the visibility of police officers, the public reaction to these schemes, and the operational impacts of these systems. The first phase of the project was a naturalistic observation study whose goal was to better understand how traffic behaved around traffic stops. Test vehicles were positioned in simulated traffic stops and patrol locations to determine how traffic behavior was affected by various configurations of police lighting and markings. Camera and radar systems were used to measure changes in driver speed and when drivers responded to the move-over law. Based on the results of the naturalistic studies, the impact of the lighting system on officer visibility was investigated in a controlled human factors test in which the ability of a driver to see a police officer outside their vehicle was measured in the presence of the lighting systems. The purpose of this interjected effort was to verify that the experimental schemes would not increase risk to law enforcement, despite data from the first phase indicating that the vehicles were more visible. A second part of that study evaluated conventional methods of bolstering an officer's visibility outside their vehicle at night. The second phase took the findings of the first phase and implemented changes to several police vehicles from local and state agencies, which were in operation for at least 18 months. This was to assess the rate of near-misses and crashes and relate the vehicle changes to law enforcement safety. Additionally, citation rates were assessed, and surveys offered an opportunity for law enforcement to provide their own feedback on the implementations. The lighting systems evaluated included an all-blue lighting system, an enhanced all-blue lighting system with twice the light output, a red-and-blue system, and a single flashing blue beacon. In terms of markings, retroreflective markings along the side of the vehicle, a retroreflective contour line, chevrons on the rear of the vehicle, and unmarked vehicles were evaluated. Finally, a variety of vehicle colors were used to investigate the impact of the base vehicle paint color. The results indicate that both the red-and-blue lighting system and the high-output blue lighting system significantly increased the distance at which drivers moved over. In general, at least 95% of traffic attempted to merge away from an actively lighted police vehicle when possible. In terms of speed change, drivers began reducing their speed approximately 600 m from the police vehicle. Similarly, the addition of retroreflectivity to the rear of the vehicle showed an additional benefit in causing drivers to move over sooner. However, these benefits came at a cost to the officer's visibility. When officers were outside their vehicle, the high-output blue system significantly reduced officer detectability, while the red-and-blue configuration reduced detection distance by only 3 meters. The investigation found that these impacts could be overcome with retroreflective vests worn by the officers. In the second phase, officers revealed a preference for the red-and-blue configuration, stating that it provided greater comfort for them and less glare for approaching drivers. The study also revealed that the alternative configurations did not impact the operational activities of law enforcement.
- Assisting Spatial Referencing for Collaborative Augmented Reality. Li, Yuan (Virginia Tech, 2022-05-27). Spatial referencing denotes the act of referring to a location or an object in space. Since it is often essential in collaborative activities, good support for spatial referencing can lead to exceptional collaborative experience and performance. Augmented reality (AR) aims to enhance daily activities and tasks in the real world, including various collaborations and social interactions. Accurate and rapid spatial referencing in collaborative AR often requires detailed 3D information about the environment, which can be difficult for the system to acquire given the constraints of current technology. This dissertation seeks to address issues related to spatial referencing in collaborative AR through 3D user interface design and a series of experiments. Specifically, we start by investigating the impact of poor spatial referencing on close-range, co-located AR collaboration. Next, we propose and evaluate different pointing ray techniques for referencing objects at a distance without knowledge of the physical environment. We further introduce marking techniques that aim to accurately acquire the position of an arbitrary point in 3D space for use in spatial referencing. Last, we provide a systematic assessment of a collaborative AR application that supports efficient spatial referencing in remote learning to demonstrate its benefit. Overall, the dissertation provides empirical evidence of the challenges and benefits of spatial referencing in collaborative AR, along with solutions to support adequate spatial referencing when model information about the environment is missing.
- Augmented Reality Pedestrian Collision Warning: An Ecological Approach to Driver Interface Design and Evaluation. Kim, Hyungil (Virginia Tech, 2017-10-17). Augmented reality (AR) has the potential to fundamentally change the way we interact with information. Direct perception of computer-generated graphics atop physical reality can afford hands-free access to contextual information on the fly. However, because users must interact with digital and physical information simultaneously, yesterday's approaches to interface design may not be sufficient to support this new way of interaction. Furthermore, the impacts of this novel technology on user experience and performance are not yet fully understood. Driving is one of many promising tasks that can benefit from AR, where conformal graphics strategically placed in the real world can accurately guide drivers' attention to critical environmental elements. The ultimate purpose of this study is to reduce pedestrian accidents through the design of driver interfaces that take advantage of AR head-up displays (HUDs). For this purpose, this work aimed to (1) identify information requirements for pedestrian collision warning, (2) design AR driver interfaces, and (3) quantify the effects of AR interfaces on driver performance and experience. Considering the dynamic nature of human-environment interaction in AR-supported driving, we took an ecological approach to interface design and evaluation, appreciating not only the user but also the environment. The requirement analysis examined environmental constraints imposed on drivers' behavior, interface design translated those behavior-shaping constraints into the perceptual forms of interface elements, and usability evaluations used naturalistic driving scenarios and tasks for better ecological validity. A novel AR driver interface for pedestrian collision warning, the virtual shadow, was proposed, taking advantage of optical see-through HUDs. A series of usability evaluations in both a driving simulator and on an actual roadway showed that the virtual shadow interface outperformed current pedestrian collision warning interfaces in guiding driver attention, increasing situation awareness, and improving task performance. This work thus demonstrates the opportunity of incorporating an ecological approach into user interface design and evaluation for AR driving applications. The research provides both basic and practical contributions to human factors and AR by (1) providing empirical evidence that furthers knowledge about driver experience and performance in AR, and (2) extending traditional usability engineering methods for automotive AR interface design and evaluation.
- Calculating and Analyzing Angular Head Jerk in Augmented and Virtual Reality: Effect of AR Cue Design on Angular Jerk. Van Dam, Jared; Tanous, Kyle; Werner, Matt; Gabbard, Joseph L. (MDPI, 2021-10-28). In this work, we propose a convenient method for evaluating levels of angular jerk in augmented reality (AR) and virtual reality (VR). Jerk is a rarely analyzed metric in usability studies, although it can be measured and calculated easily with most head-worn displays and can yield highly relevant information for designers. Here, we developed and implemented a system capable of calculating and analyzing jerk in real time based on orientation data from an off-the-shelf head-worn display. An experiment was then carried out to determine whether the presence of AR user interface annotations changes users' angular head jerk during a time-pressured visual search task. Analysis of the data indicates that a decrease in jerk is significantly associated with the use of AR augmentations. As noted in the limitations section, however, the conclusions drawn from this work should be treated with caution, as this analysis method is novel in the VR/AR space and because methodological constraints limited the reliability of the jerk data. The work presented here considerably facilitates the use of jerk as a quick component measure of usability and serves as a starting point for future research involving jerk in VR and AR.
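Angular jerk is the third time derivative of angular position, i.e., the rate of change of angular acceleration. The sketch below is an illustrative assumption rather than the authors' implementation: it estimates angular jerk by repeated finite differences over a sampled head-yaw trace, with the sampling rate and the synthetic trace both assumed.

```python
# Minimal sketch (not the authors' implementation): angular jerk as the third
# time derivative of head orientation, estimated by repeated finite differences
# over a stream of yaw samples from a head-worn display. Sampling rate assumed.
import numpy as np

fs = 60.0                                     # assumed orientation sampling rate (Hz)
dt = 1.0 / fs
t = np.arange(0, 5, dt)
yaw_deg = 20 * np.sin(2 * np.pi * 0.5 * t)    # synthetic head-yaw trace (degrees)

angular_velocity = np.gradient(yaw_deg, dt)               # deg/s
angular_acceleration = np.gradient(angular_velocity, dt)  # deg/s^2
angular_jerk = np.gradient(angular_acceleration, dt)      # deg/s^3

print(f"Peak angular jerk: {np.abs(angular_jerk).max():.1f} deg/s^3")
```

In practice, orientation data from a head-worn display would typically be low-pass filtered before differentiation, since numerical differentiation amplifies sensor noise.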
- Characterizing Mental Workload in Physical Human-Robot Interaction Using Eye-Tracking Measures. Upasani, Satyajit Abhay (Virginia Tech, 2023-07-06). Recent technological developments have ushered in an exciting era for collaborative robots (cobots), which can operate in close proximity with humans, sharing and supporting task goals. While there is increasing research on the biomechanical and ergonomic consequences of using cobots, there is relatively little work on the potential motor-cognitive demand associated with these devices. These cognitive demands stem primarily from the need to form accurate internal (mental) models of robot behavior while also dealing with the intrinsic motor-cognitive demands of physical co-manipulation tasks and visually monitoring the environment to ensure safe operation. The primary aim of this work was to investigate the viability of eye-tracking measures for characterizing mental workload during the use of cobots, while accounting for the potential effects of learning, task type, expertise, and age differences. While eye tracking is gaining traction in the surgical and rehabilitation robotics domains, systematic investigations of eye tracking for studying interactions with industrial cobots are currently lacking. We conducted three studies in which participants of different ages and expertise levels learned to perform upper- and lower-limb tasks using a dual-armed cobot and a whole-body powered exoskeleton, respectively, over multiple trials. Robot-control difficulty was manipulated by changing the joint impedance on one of the robot arms (for the dual-armed cobot). The first study demonstrated that when individuals were learning to interact with a dual-armed cobot to perform an upper-limb co-manipulation task simulated in a virtual reality (VR) environment, pupil dilation (PD) and stationary gaze entropy (SGE) were the most sensitive and reliable measures of mental workload. A combination of eye-tracking measures predicted performance with greater accuracy than experimental task variables. Measures of visual attentional focus were more sensitive to task difficulty manipulations than typical eye-tracking workload measures, and PD was most sensitive to changes in workload over learning. The second study showed that, compared to walking freely, walking while using a complex whole-body powered exoskeleton (a) increased the PD of novices but not experts, (b) reduced SGE in both groups, and (c) led to more downward-focused gaze (on the walking path) in experts compared to novices. In the third study, using an upper-limb co-manipulation task similar to Study 1, we found that the PD of younger adults decreased at a faster rate over learning than that of older adults, and older adults showed a significantly greater drop in gaze transition entropy with increased task difficulty compared to younger adults. PD was also sensitive to learning and robot difficulty but not to environmental complexity (collisions with objects in the task environment), whereas gaze-behavior measures were generally more sensitive to environmental complexity. This research is the first comprehensive analysis of mental workload in physical human-robot interaction using eye-tracking measures. PD consistently showed larger effects over learning than with task difficulty. Gaze-behavior measures quantifying visual attention toward environmental areas of interest showed relatively large effects of task difficulty and should continue to be explored in future research. While walking in a powered exoskeleton, both novices and experts exhibited compensatory gaze strategies; this finding highlights potentially persistent effects of using cobots on visual attention, with possible implications for safety and situational awareness. Older adults applied greater mental effort (indicated by sustained PD) and followed more constrained gaze patterns in order to maintain levels of performance similar to younger adults. Perceived workload measures could not capture these age differences, highlighting the advantages of eye-tracking measures. Lastly, the differential sensitivity of pupillary and gaze-behavior metrics to different types of task demand highlights the need for future research to employ both kinds of measures when evaluating pHRI. Important questions for future research include the sensitivity of eye-tracking workload measures over long-term adaptation to cobots and the generalizability of eye-tracking measures to real-world (non-VR) tasks.
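Stationary gaze entropy is commonly computed as the Shannon entropy of the distribution of fixations over areas of interest (AOIs). The sketch below follows that common definition; the AOI labels and fixation sequence are hypothetical and are not taken from these studies.

```python
# Sketch of stationary gaze entropy (SGE) under its common definition: Shannon
# entropy of the distribution of fixations over areas of interest (AOIs).
# The AOI labels and fixation sequence are hypothetical.
import numpy as np
from collections import Counter

fixation_aois = ["robot_arm", "workpiece", "robot_arm", "display",
                 "workpiece", "workpiece", "robot_arm", "environment"]

counts = np.array(list(Counter(fixation_aois).values()), dtype=float)
p = counts / counts.sum()          # proportion of fixations per AOI
sge = -np.sum(p * np.log2(p))      # bits; higher values indicate more dispersed gaze

print(f"Stationary gaze entropy: {sge:.2f} bits")
```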
- Comparison of Augmented Reality Rearview and Radar Head-Up Displays for Increasing Spatial Awareness During Exoskeleton Operation. Hollister, Mark Andrew (Virginia Tech, 2024-03-19). Full-body powered exoskeletons for industrial workers have the potential to reduce the incidence of work-related musculoskeletal disorders while increasing strength beyond human capabilities. However, operating current full-body powered exoskeletons imposes different loading, motion, and balance requirements on users compared to unaided task performance, potentially resulting in additional mental workload that may reduce situation awareness (SA) and increase the risk of collision with pedestrians, negating the health and safety benefits of exoskeletons. Exoskeletons could be equipped with visual aids to improve SA, such as rearview cameras or radar displays. However, research on the design and evaluation of such displays for exoskeleton users is absent from the literature. This empirical study compared several augmented reality (AR) head-up displays (HUDs) for providing SA to minimize pedestrian collisions while completing common warehouse tasks. Specifically, the study included an experimental factor of display abstraction with four levels, from low to high abstraction: rearview camera, overhead radar, ring radar, and no visual aid (as control). The second factor was elevation angle, analyzed for the overhead and ring radar displays at 15°, 45°, and 90°. A 1x4 repeated measures ANOVA on all four display abstraction levels at 90° revealed that every display condition performed better than the no-visual-aid condition, and Bonferroni post-hoc tests revealed that the overhead and ring radars (medium and high abstraction, respectively) received higher usability ratings than the rearview camera (low abstraction). A 2x3 repeated measures ANOVA on the two radar displays at all three display angles found that the overhead radar yielded better transport times and situation awareness ratings than the ring radar. Further, the two-way ANOVA found that the 45° angle yielded the best transport times. Thus, AR displays show promise for augmenting SA to minimize the risk of collision and injury in warehouse settings.
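The 1x4 repeated measures ANOVA on usability ratings across the four display conditions could, for example, be run as below. This is a generic sketch: the long-format data file and column names are assumptions, not the study's materials.

```python
# Sketch of a 1x4 repeated-measures ANOVA on a usability rating across four
# display conditions. Long-format data file and column names are assumptions.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long-format data: one row per participant x display condition,
# with columns: participant, display, usability.
ratings = pd.read_csv("hud_usability_long.csv")

result = AnovaRM(data=ratings, depvar="usability",
                 subject="participant", within=["display"]).fit()
print(result)
```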
- Developing an Augmented Reality Visual Clutter Score Through Establishing the Applicability of Image Analysis Measures of Clutter and the Analysis of Augmented Reality User Interface Properties. Flittner, Jonathan Garth (Virginia Tech, 2023-09-05). Augmented reality (AR) is seeing a rapid expansion into several domains due to the proliferation of more accessible and powerful hardware. While augmented reality user interfaces (AR UIs) allow the presentation of information atop the real world, this extra visual data potentially comes at the cost of increasing the visual clutter of the user's field of view, which can increase visual search time and error rates and have an overall negative effect on performance. Visual clutter has been studied for existing display technologies, but there are no established measures of visual clutter for AR UIs, which precludes the study of the effects of clutter on performance in AR UIs. The first objective of this research is to determine the applicability of extant image analysis measures of feature congestion, edge density, and sub-band entropy for measuring visual clutter in head-worn optical see-through AR, and to establish a relationship between image analysis measures of clutter and visual search time. These image analysis measures are chosen specifically because they can be applied to complex, naturalistic scenes, as are common when using an optical see-through AR UI. The second objective is to examine the effects of AR UIs comprising multiple apparent depths on user performance, measured through visual search time. The third objective is to determine the effects of other AR UI properties, such as target clutter, target eccentricity, target apparent depth, and target total distance, on performance as measured through visual search time. These results are then used to develop a visual clutter score that rates different AR UIs against each other. The image analysis measures of feature congestion, edge density, and sub-band entropy were correlated with visual search time when taken for the overall AR UI and when taken for the target object a participant was searching for. For an AR UI comprising both projected and AR parts, the image analysis measures were not correlated with visual search time for the constituent parts (projected or AR) but were still correlated with overall AR UI clutter. Target eccentricity also had an effect on visual search time, while target apparent depth and target total distance from center did not. Target type and AR object percentage also had an effect on visual search time. These results were synthesized into a general model, the "AR UI Visual Clutter Score Algorithm," using multiple regression. This model can be used to compare AR UIs to each other in order to identify the AR UI projected to have lower target visual search times.
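Of the three image-analysis clutter measures named above, edge density is the most direct to illustrate: the proportion of pixels that an edge detector marks as edges in a capture of the UI. The sketch below uses a Canny detector as one plausible implementation; the file path and detector settings are assumptions, not the dissertation's pipeline.

```python
# Sketch of an edge-density clutter measure: fraction of pixels marked as edges
# by a Canny detector in a capture of the AR UI. File path and sigma are assumed.
import numpy as np
from skimage import io, color, feature

frame = io.imread("ar_ui_capture.png")     # placeholder screenshot of the UI
gray = color.rgb2gray(frame[..., :3])      # drop the alpha channel if present
edges = feature.canny(gray, sigma=2.0)     # boolean edge map
edge_density = edges.mean()                # proportion of edge pixels, 0..1

print(f"Edge density: {edge_density:.3f}")
```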
- Development and Evaluation of an Assistive In-Vehicle System for Responding to Anxiety in Smart Vehicles. Nadri, Chihab (Virginia Tech, 2023-10-18). The integration of automated vehicle technology into our transportation infrastructure is ongoing, yet the precise timeline for the introduction of fully automated vehicles remains ambiguous. This technological transition necessitates the creation of in-vehicle displays tailored to emergent user needs and concerns. Notably, driving-induced anxiety, already a concern, is projected to assume greater significance in this context, although it remains inadequately researched. This dissertation sought to delve into the phenomenon of anxiety in driving, assess its implications for future transportation modalities, elucidate design considerations for distinct demographics such as young and older drivers, and design and evaluate an affective in-vehicle system to alleviate anxiety in automated driving, through four studies. The first study involved two workshops with automotive experts, who underscored anxiety as pivotal to sustaining trust and system acceptance. The second study was a qualitative focus group analysis incorporating both young and older drivers, aiming to distill anxiety-inducing scenarios in automated driving and pinpoint potential intervention strategies and feedback modalities. This was followed by two driving simulator evaluations. The third study was observational, seeking to discern correlations among personality attributes, anxiety, and trust in automated driving systems. The fourth study employed cognitive reappraisal for anxiety reduction in automated driving. Analysis indicated that an empathic interface leveraging cognitive reappraisal is an effective anxiety amelioration tool; particularly in the self-efficacy reappraisal condition, it influenced trust, user experience, and anxiety markers. Cumulatively, this dissertation provides key design guidelines for anxiety mitigation in automated driving and highlights design elements pivotal to augmenting user experiences in scenarios where drivers relinquish vehicular control.
- Development and Human Factors Evaluation of a Portable Auditory Localization Acclimation Training System. Thompson, Brandon Scott (Virginia Tech, 2020-06-19). Auditory situation awareness (ASA) is essential for safety and survivability in military operations, where many hazards are not immediately visible. Unfortunately, the hearing protection devices (HPDs) required to operate in these environments can impede auditory localization performance. Promisingly, recent studies have demonstrated the plasticity of the human auditory system by showing that training can improve auditory localization ability while wearing HPDs, including military Tactical Communications and Protective Systems (TCAPS). As a result, the U.S. military identified the need for a portable system capable of imparting auditory localization skills at levels similar to those demonstrated in laboratory environments. The purpose of this investigation was to develop and validate a Portable Auditory Localization Acclimation Training (PALAT) system equipped with an improved training protocol against a proven laboratory-grade system, referred to as the DRILCOM system, and subsequently to evaluate the transfer-of-training benefit in a field environment. In Phase I, a systems decision process was used to develop a prototype PALAT system consisting of an expandable frame housing 32 loudspeakers operated by a user-controlled tablet computer, capable of reproducing acoustically accurate localization cues similar to the DRILCOM system. Phase II used a within-subjects human factors experiment to validate whether the PALAT system could impart auditory localization training benefits similar to those of the DRILCOM system. Results showed no significant difference between the two localization training systems at any stage of training or in training rates for the open ear and two TCAPS devices. The PALAT system also demonstrated the ability to detect differences in localization accuracy between listening conditions in the same manner as the DRILCOM system. Participant ratings indicated no perceived difference in localization training benefit but a significant preference for the PALAT system user interface, which was specifically designed to improve usability and meet the requirements of a user-operable system. The Phase III investigation evaluated the transfer-of-training benefit imparted by the PALAT system, which trains with a broadband stimulus, to a field environment using gunshot stimuli. Training under the open ear and in-the-ear TCAPS conditions resulted in significant differences between the trained and untrained groups from in-office pretest to in-field posttest.
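Localization accuracy on a 32-loudspeaker array is often scored as the angular error between the target loudspeaker and the listener's response, wrapped to ±180°. The sketch below illustrates that common scoring convention; it is an assumption rather than the study's exact metric, and the trial data are placeholders.

```python
# Sketch of one common scoring of localization accuracy on a 32-loudspeaker ring:
# mean absolute angular error, wrapped to +/-180 degrees. This convention is an
# assumption, not necessarily the study's exact metric; trial data are placeholders.
import numpy as np

n_speakers = 32
speaker_azimuths = np.arange(n_speakers) * (360.0 / n_speakers)   # degrees

target_idx = np.array([0, 5, 12, 20, 31])      # hypothetical trial targets
response_idx = np.array([1, 5, 10, 20, 0])     # hypothetical listener responses

error = speaker_azimuths[response_idx] - speaker_azimuths[target_idx]
error = (error + 180.0) % 360.0 - 180.0        # wrap to [-180, 180)
mean_abs_error = np.abs(error).mean()

print(f"Mean absolute localization error: {mean_abs_error:.1f} deg")
```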
- Development of Shared Situation Awareness Guidelines and Metrics as Developmental and Analytical Tools for Augmented and Virtual Reality User Interface Design in Human-Machine Teams. Van Dam, Jared Martindale Mccolskey (Virginia Tech, 2023-08-21). As the frontiers and future of work evolve, humans and machines will begin to share a more cooperative working space in which collaboration occurs freely among the constituent members. To this end, it is necessary to determine how information should flow among team members to allow for the efficient sharing and accurate interpretation of information between humans and machines. Shared situation awareness (SSA), the degree to which individuals can access and interpret information from sources other than themselves, is a useful framework from which to build design guidelines for this information exchange. In this work, we present initial augmented/virtual reality (AR/VR) design principles for shared situation awareness that can help designers (1) design efficacious interfaces based on these fundamental principles and (2) evaluate the effectiveness of candidate interface designs using measurement tools we created via a scoping literature review. This work achieves these goals with focused studies that (1) show the importance of SSA in augmented reality-supported tasks, (2) describe the design guidelines and measurement tools necessary to support SSA, and (3) validate the guidelines and measurement tools with a targeted user study that employs an SSA-derived AR interface to confirm the guidelines distilled from the literature review.
- Distributed Situation Awareness Framework to Assess and Design Complex Systems. Alhaider, Abdulrahman Abdulqader (Virginia Tech, 2023-01-20). Communication and coordination in complex sociotechnical systems require continuous assessment of the system's artefacts and how they are utilized to improve system performance. Situation awareness (SA) is a fundamental concept for designing and understanding interactions between human and non-human agents (i.e., information systems) that impact system performance. Interaction efficiency is partly determined by the quality of information, or SA, distributed across agents to ensure accurate decision making and resource allocation. Disrupting SA distribution between agents can significantly affect system operations, with financial and safety consequences. This research applied distributed situation awareness (DSA) theory to study and improve patient flow management. The main objective was to advance methodology in the DSA literature for (1) deriving design implications from DSA models and (2) developing quantitative DSA models to formally compare system designs. The DSA research was situated in the domain of patient flow management. Data were collected using a three-part method of data elicitation, extraction, and representation to investigate DSA at a patient flow command and control center at Carilion Clinic in Roanoke, VA. The data were elicited from observations and interviews on workers' daily activities and from an available historical database (i.e., TeleTracking). The data were then represented as a combined network highlighting social, task, and knowledge elements in patient flows for studying and assessing patient flow management. The influence of DSA on complex systems was examined qualitatively and quantitatively. The combined DSA network qualitatively characterized patient flow management and identified deficiencies in the command-and-control center functions. The network characterized the admission, clinical (inside-hospital) transportation, discharge, and environmental services functions managed by the Carilion Transfer and Communications Center (CTaC). These characterizations led to the identification of design principles on job roles, tasks performed, and SA transactions and distribution adopted by the state-of-the-art patient flow management facility. In addition, the network representing the current operation of CTaC illustrated the connections between functional groups, arbitration of resources, and job roles that could become bottlenecks in transmitting SA. The network also helped identify inefficient task loops, which resulted in delays due to missing or poor SA, and task orders that could be modified to improve patient flow and thus reduce the likelihood of delay. The qualitative (i.e., combined network) model was partially translated into a quantitative model based on discrete event simulation (DES) and agent-based modeling (ABM) to simulate patient transportation inside the hospital. The simulation model consisted of 28 patient origins, 29 equipment origins, 12 destinations, and more than 200 entities (i.e., simulation objects). The model was validated by the lack of significant differences in various outcome metrics between 100 simulation replications and historical data, using one-way t-tests. The simulation model captured the distribution and transactions of knowledge elements between agents within the modeled processes. Further, the model verified the deficiencies in the existing system (i.e., delays and cancellations), attributing instances of deficiency to SA-related or non-SA-related causes. The simulation model tested two interventions for eliminating SA deficiencies revealed by the qualitative model: (1) updating the ward nurse before picking up patients from the inpatient floor, and (2) updating the X-ray nurse/team before arriving with the patient. Both interventions involved updates from transporters to nurses, transmitting SA on the estimated time of arrival and patient information so the nurse could become aware of the transport status. The simulation ran 1,500 replications to obtain results on transport time and cancellation rate for these two interventions. One-way t-tests revealed that the intervention of updating the ward nurse resulted in significant reductions in mean transport time and cancellation rate compared to historical data (i.e., TeleTracking), yielding a 0.42- to 1.24-minute reduction in transport time and 2% to 5% fewer cancellations. However, the second intervention resulted in a significant increase in transport time and was thus ineffective. DES and ABM supplemented the qualitative modeling with quantitative evidence on DSA concepts and with assessment of potential interventions for improving DSA in patient flow management. Specifically, DES and ABM enabled comparison and prediction of performance outcomes from recommended changes to communication protocols. These findings indicate that DSA is a promising framework for analyzing communication and coordination in complex systems and for quantitatively assessing improvements to SA design.
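The validation step described above, comparing outcome metrics across simulation replications against historical data with one-way t-tests, can be sketched as follows. The replication values and the historical mean are placeholders, not Carilion data.

```python
# Sketch of the validation logic: a one-sample t-test comparing an outcome metric
# across simulation replications against its historical value. Replication data
# and the historical mean are placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
replication_transport_min = rng.normal(loc=18.5, scale=2.0, size=100)  # placeholder
historical_mean_min = 18.8                                             # placeholder

t_stat, p_value = stats.ttest_1samp(replication_transport_min, historical_mean_min)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# A non-significant result (e.g., p > 0.05) is taken as consistency with history.
```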
- The Effect of Context Switching, Focal Switching Distance, Binocular and Monocular Viewing, and Transient Focal Blur on Human Performance in Optical See-Through Augmented Reality. Arefin, Mohammed S.; Phillips, Nate; Plopski, Alexander; Gabbard, Joseph L.; Swan, J. Edward (IEEE, 2022-01-01). In optical see-through augmented reality (AR), information is often distributed between real and virtual contexts and often appears at different distances from the user. To integrate information, users must repeatedly switch context and change focal distance. If the user's task is conducted under time pressure, they may attempt to integrate information while their eyes are still changing focal distance, a phenomenon we term transient focal blur. Previously, Gabbard, Mehra, and Swan (2018) examined these issues using a text-based visual search task on a one-eye optical see-through AR display. This paper reports an experiment that partially replicates and extends this task on a custom-built AR Haploscope. The experiment examined the effects of context switching, focal switching distance, binocular and monocular viewing, and transient focal blur on task performance and eye fatigue. Context switching increased eye fatigue but did not decrease performance. Increasing focal switching distance increased eye fatigue and decreased performance. Monocular viewing also increased eye fatigue and decreased performance. The transient focal blur effect resulted in additional performance decrements and adds to knowledge about AR user interface design issues.
- Effectiveness of Vehicle External Communication Toward Improving Vulnerable Road User Safe Behaviors: Considerations for Legacy Vehicles to Automated Vehicles of the Future. Rossi-Alvarez, Alexandria Ida (Virginia Tech, 2023-01-25). Automated vehicles (AVs) will be integrated into our society at some point in the future, but when is still up for debate. An extensive amount of research is underway to understand communication methods between AVs and other road users sharing the environment in preparation for this future. Currently, researchers are working to understand how different forms of external communication on AVs will impact vulnerable road user (VRU) interaction. However, within the last 10 years, VRU casualty rates have continued to rise for all classifications of VRUs, and there is no indication that pedestrian fatality rates will decrease without some intervention. This dissertation aims to understand the impacts of external human-machine interfaces (eHMI) across real-world, complex scenarios with AVs and how researchers can apply those findings to improve VRUs' judgments today. A series of studies evaluated the necessity and impact of eHMI on AV-VRU interaction, assessed how the visual components of eHMI influenced VRU crossing decisions, and examined how variations in a real-world environment (multiple vehicles and scenario complexity) impact crossing-decision behavior. Two studies examined how eHMI will impact future interactions between AVs and VRUs, specifically to understand how to advance the design of these future devices and avoid unintended consequences. Results from these studies found that the presence and condition of the eHMI did not influence participants' willingness to cross. Participants relied primarily on the speed and distance of the vehicle to make their crossing decision. It was difficult for participants to focus on the eHMI when multiple vehicles competed for their attention; participants typically prioritized the vehicle that was nearest and most detrimental to their crossing path. Additionally, the type of scenario caused participants to make more cautious crossing decisions but did not influence their willingness to cross. The last study applied the learnings from the first two studies to a foundational perception study for current legacy vehicles; these results showed a significant increase in judgment accuracy with a display. Analysis across the three studies identified five critical findings for addressing eHMI and three design recommendations, which are discussed in the penultimate section of this work. The results of this dissertation indicate that eHMI improved VRUs' accuracy in perceiving changes in vehicle speed. eHMI did not significantly impact VRUs' crossing decisions; however, the complexity of the traffic scenarios affected the level of caution participants exhibited in their crossing behavior.
- Effects of a Driver Monitoring System on Driver Trust, Satisfaction, and Performance with an Automated Driving System. Vasquez, Holland Marie (Virginia Tech, 2016-01-27). This study was performed with the goal of delineating how drivers' interactions with an Automated Driving System were affected by a Driver Monitoring System (DMS), which provided alerts to the driver when he or she became inattentive to the driving environment. There were two specific research questions. The first centered on how drivers' trust and satisfaction with an Automated Driving System were affected by a DMS. The second centered on how drivers' ability to detect changes in the driving environment that required intervention was affected by the presence of a DMS. Data were collected from fifty-six drivers during a test-track experiment with an Automated Driving System prototype equipped with a DMS. The DMS attention prompt conditions were treated as the independent variable, and trust, satisfaction, and driver performance during experimenter-triggered lane drifts were treated as dependent variables. The findings suggest that drivers who receive attention prompts from a DMS have lower trust in and satisfaction with the Automated Driving System compared to drivers who do not receive attention prompts. While the DMS may result in lower trust and satisfaction, it may help drivers detect changes in the driving environment that require attention. Specifically, drivers who received attention prompts after 7 consecutive seconds of inattention were 5 times more likely to react to a lane drift with no alert compared to drivers who did not receive attention prompts at all.
- Effects of Augmented Reality Head-up Display Graphics' Perceptual Form on Driver Spatial Knowledge Acquisition. De Oliveira Faria, Nayara (Virginia Tech, 2019-12-16). In this study, we investigated whether modifying augmented reality head-up display (AR HUD) graphics' perceptual form influences spatial learning of the environment. We employed a 2x2 between-subjects design in which twenty-four participants were counterbalanced by gender. We used a fixed-base, medium-fidelity driving simulator at the COGENT lab at Virginia Tech. Two navigation cue systems were compared: world-relative and screen-relative. The world-relative condition placed an artificial post sign at the corner of an approaching intersection containing a real landmark. The screen-relative condition displayed turn directions using a screen-fixed traditional arrow located directly ahead of the participant on the right or left side of the HUD. We captured empirical data on changes in driving behavior, glance behavior, spatial knowledge acquisition (measured in terms of landmark and route knowledge), reported workload, and usability of the interface. Results showed that the screen-relative and world-relative AR head-up display interfaces had a similar impact on the level of spatial knowledge acquired, suggesting that world-relative AR graphics may be used for navigation with no comparative reduction in spatial knowledge acquisition. Even though our initial assumption that the conformal AR HUD interface would draw drivers' attention to a specific part of the display was correct, this type of interface did not increase spatial knowledge acquisition. This finding contrasts with a common perspective in the AR community that conformal, world-relative graphics are inherently more effective than screen-relative graphics; we suggest that simple, screen-fixed designs may indeed be effective in certain contexts. Finally, eye-tracking analyses showed fundamental differences in the way participants visually interacted with the different AR HUD interfaces, with conformal graphics demanding more visual attention from drivers. The world-relative condition was typically associated with fewer glances in total, but glances of longer duration.
- Effects of Intersection Lighting Design on Driver Visual Performance, Perceived Visibility, and Glare. Bhagavathula, Rajaram (Virginia Tech, 2016-01-12). Nighttime intersection crashes account for nearly half of all intersection crashes, making them a major traffic safety concern. Although providing lighting at intersections has proven to be a successful countermeasure against these crashes, existing approaches to designing intersection lighting are overly simplified. Current standards recommend lighting levels but do not account for the role of human vision, vehicle headlamps, or the numerous pedestrian-vehicle conflict locations at intersections. Effective intersection lighting design requires empirical evidence on the effects of lighting configuration (the part of the intersection illuminated) and lighting level on nighttime visibility. This research effort had three goals. The first was to identify an intersection lighting design that results in the best nighttime visibility. The second was to determine the effect of illuminance on visual performance at intersections. The third was to understand the relationships between object luminance, contrast, and visibility. To achieve these goals, three specific configurations were used, illuminating the intersection approach (Approach), the intersection box (Box), and both the approach and the box (Both). Each lighting configuration was evaluated under five levels of illumination. Visibility was assessed both objectively (visual performance) and subjectively (perceptions of visibility and glare). Illuminating the intersection box led to superior visual performance, higher perceived visibility, and lower perceived glare. For this configuration, visual performance and perceived visibility plateaued between 8 and 12 lux. A photometric analysis revealed that the Box lighting configuration rendered targets in sufficient positive and negative contrast to yield higher nighttime visibility. Targets rendered in negative contrast aided visual performance, while for targets rendered in positive contrast, visual performance depended on the magnitude of the contrast. The relationship between pedestrian contrast and perceived pedestrian visibility was more complex, as pedestrians were often rendered in multiple contrast polarities. These results indicate that Box illumination is an effective strategy for enhancing nighttime visual performance and perceived visibility while reducing glare, and it may be an energy-efficient solution as it requires fewer luminaires.
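Contrast polarity in this context follows the usual luminance-contrast convention: positive when the target is brighter than its background, negative when it is darker. The sketch below computes the standard Weber contrast to classify polarity; the luminance values are placeholders, and this is not necessarily the study's exact photometric procedure.

```python
# Sketch of the standard Weber contrast used to classify targets as positive
# (brighter than background) or negative (darker than background) contrast.
# Luminance values (cd/m^2) are placeholders, not the study's measurements.
def weber_contrast(target_luminance: float, background_luminance: float) -> float:
    return (target_luminance - background_luminance) / background_luminance

for lt, lb in [(2.4, 1.0), (0.6, 1.0)]:   # placeholder target/background pairs
    c = weber_contrast(lt, lb)
    polarity = "positive" if c > 0 else "negative"
    print(f"L_t={lt}, L_b={lb}: contrast {c:+.2f} ({polarity})")
```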
- The Effects of Text Drawing Styles, Background Textures, and Natural Lighting on Text Legibility in Outdoor Augmented Reality. Gabbard, Joseph L.; Swan, J. Edward; Hix, Deborah (MIT Press, 2006-02-01). A challenge in presenting augmenting information in outdoor augmented reality (AR) settings lies in the broad range of uncontrollable environmental conditions that may be present, specifically large-scale fluctuations in natural lighting and wide variations in likely backgrounds or objects in the scene. In this paper, we motivate the need for research on the effects of text drawing styles, outdoor background textures, and natural lighting on user performance in outdoor AR. We present a pilot study and a follow-on user-based study that examined the effects on user performance of outdoor background textures, changing outdoor illuminance values, and text drawing styles in a text identification task using an optical, see-through AR system. We report significant effects for all these variables, and discuss user interface design guidelines and ideas for future work.
- Effects of Volumetric Augmented Reality Displays on Human Depth Judgments: Implications for Heads-Up Displays in Transportation. Lisle, Lee; Merenda, Coleman; Tanous, Kyle; Kim, Hyungil; Gabbard, Joseph L.; Bowman, Douglas A. (IGI Global, 2019). Many driving scenarios involve correctly perceiving road elements in depth and manually responding as appropriate. Of late, augmented reality (AR) head-up displays (HUDs) have been explored to assist drivers in identifying road elements, using a myriad of AR interface designs that include world-fixed graphics perceptually placed in the forward driving scene. Volumetric AR HUDs purportedly offer increased accuracy of distance perception through natural presentation of oculomotor cues as compared to traditional HUDs. In this article, the authors quantify participant performance in matching virtual objects to real-world counterparts at egocentric distances of 7-12 meters while using both volumetric and fixed-focal-plane AR HUDs. The authors found the volumetric HUD to be associated with faster and more accurate depth judgments at far distances, and that participants performed depth judgments more quickly as the experiment progressed. The authors observed no differences between the two displays in terms of reported simulator sickness or eye strain.