Author: Sivashangaran, Shathushan
Date accessioned: 2024-06-05
Date available: 2024-06-05
Date issued: 2024-06-04
Identifier: vt_gsexam:40946
URI: https://hdl.handle.net/10919/119290

Abstract: Operation of Autonomous Mobile Robots (AMRs) of all forms, including wheeled ground vehicles, quadrupeds, and humanoids, in dynamically changing, GPS-denied environments without a priori maps, exclusively using onboard sensors, is an unsolved problem with the potential to transform the economy and vastly improve humanity's capabilities, with applications in agriculture, manufacturing, disaster response, the military, and space exploration. Conventional AMR automation approaches are modularized into perception, motion planning, and control, which is computationally inefficient and requires explicit feature extraction and engineering that inhibits generalization and deployment at scale. Few works have focused on real-world end-to-end approaches that directly map sensor inputs to control outputs, owing to the large amount of well-curated training data required for supervised Deep Learning (DL), which is time consuming and labor intensive to collect and label, and to the sample inefficiency and challenges of bridging the simulation-to-reality gap with Deep Reinforcement Learning (DRL). This dissertation presents a novel method to efficiently train a DRL policy with significantly fewer samples in a constrained racetrack environment at physical limits in simulation, transferred zero-shot to the real world for robust end-to-end AMR navigation. The representation, learned in a compact parameter space of two fully connected layers with 64 nodes each, is demonstrated to exhibit emergent behavior for Out-of-Distribution (OOD) generalization to navigation in new environments, including unstructured terrain without maps, dynamic obstacle avoidance, and navigation to objects of interest with vision input, encompassing low-light scenarios with the addition of a night-vision camera. The learned policy outperforms conventional navigation algorithms while consuming a fraction of the computational resources, enabling execution on a range of AMR forms with varying embedded computer payloads.

Description: ETD
Language: en
Rights: Creative Commons Attribution-ShareAlike 4.0 International
Subjects: Autonomous Mobile Robot; Cognitive Navigation; Deep Reinforcement Learning; Dynamic Obstacle Avoidance; Unstructured Terrain
Title: Autonomous Mobile Robot Navigation in Dynamic Real-World Environments Without Maps With Zero-Shot Deep Reinforcement Learning
Type: Dissertation
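
Note: The abstract describes an end-to-end policy realized in a compact parameter space of two fully connected layers with 64 nodes each, mapping sensor inputs directly to control outputs. The record itself contains no code; the following is a minimal sketch of what such a network could look like, assuming a PyTorch implementation. The observation size (e.g., downsampled LiDAR ranges) and the two-dimensional action (steering, throttle) are illustrative assumptions, not details taken from the dissertation.

import torch
import torch.nn as nn


class CompactPolicy(nn.Module):
    """Illustrative end-to-end policy: sensor observation -> control command."""

    def __init__(self, obs_dim: int = 64, act_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64),   # first fully connected layer, 64 nodes
            nn.ReLU(),
            nn.Linear(64, 64),        # second fully connected layer, 64 nodes
            nn.ReLU(),
            nn.Linear(64, act_dim),   # control outputs (assumed: steering, throttle)
            nn.Tanh(),                # bound actions to [-1, 1] for actuator scaling
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)


if __name__ == "__main__":
    policy = CompactPolicy()
    obs = torch.randn(1, 64)          # placeholder sensor observation
    action = policy(obs)              # end-to-end inference: sensors in, controls out
    print(action.shape)               # torch.Size([1, 2])

The small parameter count is what makes the abstract's claim about low computational cost plausible: a two-layer, 64-node network runs comfortably on modest embedded computers, in contrast to modular perception/planning/control pipelines.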