
Autonomous Mobile Robot Navigation in Dynamic Real-World Environments Without Maps With Zero-Shot Deep Reinforcement Learning

dc.contributor.author: Sivashangaran, Shathushan
dc.contributor.committeechair: Eskandarian, Azim
dc.contributor.committeemember: Losey, Dylan Patrick
dc.contributor.committeemember: Leonessa, Alexander
dc.contributor.committeemember: Doan, Thinh Thanh
dc.contributor.department: Mechanical Engineering
dc.date.accessioned: 2024-06-05T08:03:26Z
dc.date.available: 2024-06-05T08:03:26Z
dc.date.issued: 2024-06-04
dc.description.abstract: Operating Autonomous Mobile Robots (AMRs) of all forms, including wheeled ground vehicles, quadrupeds, and humanoids, in dynamically changing, GPS-denied environments without a priori maps, using only onboard sensors, is an unsolved problem with the potential to transform the economy and vastly expand humanity's capabilities in agriculture, manufacturing, disaster response, military operations, and space exploration. Conventional AMR automation is modularized into perception, motion planning, and control, which is computationally inefficient and requires explicit feature extraction and engineering, inhibiting generalization and deployment at scale. Few works have focused on real-world end-to-end approaches that map sensor inputs directly to control outputs, because supervised Deep Learning (DL) requires large amounts of well-curated training data that are time consuming and labor intensive to collect and label, while Deep Reinforcement Learning (DRL) suffers from sample inefficiency and the challenge of bridging the simulation-to-reality gap. This dissertation presents a novel method to train DRL efficiently, with significantly fewer samples, in a constrained racetrack environment at physical limits in simulation, transferred zero-shot to the real world for robust end-to-end AMR navigation. The representation, learned in a compact parameter space with two fully connected layers of 64 nodes each, exhibits emergent behavior for Out-of-Distribution (OOD) generalization: navigation in new environments including unstructured terrain without maps, dynamic obstacle avoidance, and navigation to objects of interest from vision input, including low-light scenarios with the addition of a night vision camera. The learned policy outperforms conventional navigation algorithms while consuming a fraction of the computational resources, enabling execution on a range of AMR forms with varying embedded computer payloads.
dc.description.abstractgeneral: Robots that move through environments on wheels or legs extend humanity's capabilities in many applications such as agriculture, manufacturing, and space exploration, and reliable, robust mobile robots have the potential to significantly improve the economy. A key component of mobility is navigation: exploring the surrounding environment, or traveling to a goal position or object of interest while avoiding stationary and dynamic obstacles. This is a complex problem with no reliable solution, which is one of the main reasons robots are not yet everywhere, assisting people with various tasks. Past and current approaches first map the environment, then plan a collision-free path, and finally execute motor signals to traverse that path. This has several limitations, including dependence on detailed pre-made maps and an inability to operate in previously unseen, dynamic environments. These modular methods also require substantial computation for each step, which limits real-time speed and prevents use on small robots with limited weight capacity for onboard computers, the kind of robots that are well suited to reconnaissance and exploration tasks. This dissertation presents a novel Artificial Intelligence (AI) method for robot navigation that is more computationally efficient than current approaches, with better performance. The AI model is trained to race in simulation at multiple times real-time speed for cost-effective, accelerated training, then transferred to a physical mobile robot, where it retains its training experience and generalizes to navigation in new environments without maps, with exploratory behavior and dynamic obstacle avoidance capabilities.
dc.description.degree: Doctor of Philosophy
dc.format.medium: ETD
dc.identifier.other: vt_gsexam:40946
dc.identifier.uri: https://hdl.handle.net/10919/119290
dc.language.iso: en
dc.publisher: Virginia Tech
dc.rights: Creative Commons Attribution-ShareAlike 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by-sa/4.0/
dc.subject: Autonomous Mobile Robot
dc.subject: Cognitive Navigation
dc.subject: Deep Reinforcement Learning
dc.subject: Dynamic Obstacle Avoidance
dc.subject: Unstructured Terrain
dc.title: Autonomous Mobile Robot Navigation in Dynamic Real-World Environments Without Maps With Zero-Shot Deep Reinforcement Learning
dc.type: Dissertation
thesis.degree.discipline: Mechanical Engineering
thesis.degree.grantor: Virginia Polytechnic Institute and State University
thesis.degree.level: doctoral
thesis.degree.name: Doctor of Philosophy
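
The abstract describes a compact policy with two fully connected hidden layers of 64 nodes each, mapping onboard sensor input directly to control commands. Below is a minimal, hypothetical PyTorch sketch of such a network; the observation size, the two-dimensional action output, and the tanh action squashing are illustrative assumptions, not details taken from the dissertation.

# Minimal sketch of a compact DRL policy: two fully connected hidden
# layers of 64 units each, mapping sensor observations to control
# commands. Input/output dimensions are assumed for illustration.
import torch
import torch.nn as nn

class CompactPolicy(nn.Module):
    def __init__(self, obs_dim: int = 32, act_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 64),   # first hidden layer, 64 nodes
            nn.ReLU(),
            nn.Linear(64, 64),        # second hidden layer, 64 nodes
            nn.ReLU(),
            nn.Linear(64, act_dim),   # e.g. steering and throttle
            nn.Tanh(),                # bound actions to [-1, 1]
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

# Example: one forward pass on a batch of observations.
policy = CompactPolicy()
actions = policy(torch.randn(8, 32))
print(actions.shape)  # torch.Size([8, 2])

A network this small has on the order of only a few thousand parameters, which is consistent with the abstract's claim that the learned policy can run on embedded computers with limited payload budgets.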

Files

Original bundle
Name: Sivashangaran_S_D_2024.pdf
Size: 85.47 MB
Format: Adobe Portable Document Format