Multi-Sensor Fusion and SLAM-Based Digital Twin Integration for Simulated Accessibility Assessments in Complex Architectural Environments
Abstract
Ensuring accessibility in architectural environments remains a challenge, especially for visually impaired users who encounter subtle hazards such as unmarked curbs, abrupt surface changes, and overhead obstructions that often go undetected. This paper introduces a simulation-based framework that detects and geolocates accessibility barriers using egocentric RGB video, GPS, and inertial data from AR glasses. Critical hazards are identified through monocular depth estimation, semantic segmentation, and 3D object detection, then anchored via Simultaneous Localization and Mapping (SLAM) trajectories and fused with OpenStreetMap data, digital models, and point clouds to improve spatial accuracy. Hazards are filtered for plausibility and consistency before being annotated and visualized in an interactive Rhino/Grasshopper-based digital twin. While the system can run on RGB and GPS data alone, making it broadly sensor-agnostic and deployable on common mobile devices, SLAM data is integrated to improve precision. Case studies show strong alignment with ground-truth conditions and robust integration with spatial simulation models for accessibility auditing.
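The core geolocation step described above can be illustrated with a minimal sketch: a hazard point recovered in the camera frame (e.g., from monocular depth) is transformed into world coordinates via a SLAM pose, and the resulting local east/north offset is converted to latitude/longitude around a GPS anchor. The function names, the row-major pose representation, and the small-offset ENU-to-degrees approximation are illustrative assumptions, not the paper's actual implementation.

```python
import math

def camera_to_world(p_cam, R, t):
    """Transform a 3D point from camera to world coordinates
    using a SLAM pose (3x3 rotation R, translation t):
    p_world = R @ p_cam + t."""
    return [sum(R[i][j] * p_cam[j] for j in range(3)) + t[i]
            for i in range(3)]

def enu_to_gps(east, north, lat0, lon0):
    """Convert a local east/north offset in meters to lat/lon degrees
    near a GPS anchor (lat0, lon0), using the common small-offset
    approximation (~111,320 m per degree of latitude)."""
    lat = lat0 + north / 111_320.0
    lon = lon0 + east / (111_320.0 * math.cos(math.radians(lat0)))
    return lat, lon

# Hypothetical usage: a hazard 3 m in front of the camera, with an
# identity SLAM pose, geolocated relative to a GPS anchor.
p_world = camera_to_world([0.0, 0.0, 3.0],
                          [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
                          [0.0, 0.0, 0.0])
lat, lon = enu_to_gps(p_world[0], p_world[1], 37.2296, -80.4139)
```

In practice a proper geodetic library would replace the flat-earth approximation, and the SLAM-to-world alignment would itself be estimated from GPS and map fusion; the sketch only shows where each sensor stream enters the computation.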