Collaborative Path Planning and Control for Ground Agents Via Photography Collected by Unmanned Aerial Vehicles
dc.contributor.author | Wood, Sami Warren | en |
dc.contributor.committeechair | Williams, Ryan K. | en |
dc.contributor.committeemember | Abbott, A. Lynn | en |
dc.contributor.committeemember | Doyle, Daniel Drayson | en |
dc.contributor.committeemember | Kochersberger, Kevin Bruce | en |
dc.contributor.department | Electrical and Computer Engineering | en |
dc.date.accessioned | 2022-06-25T08:00:36Z | en |
dc.date.available | 2022-06-25T08:00:36Z | en |
dc.date.issued | 2022-06-24 | en |
dc.description.abstract | Natural disasters damage infrastructure and create significant obstacles to humanitarian aid efforts. Roads may become unusable, hindering or halting efforts to provide food, water, shelter, and life-saving emergency care. Finding a safe route during a disaster is especially difficult because, as the disaster unfolds, the usability of roads and other infrastructure can change quickly, rendering most navigation services useless. With the proliferation of cheap cameras and unmanned aerial vehicles (UAVs), the rapid collection of aerial data after a natural disaster has become increasingly common. This data can be used to quickly appraise the damage to critical infrastructure, which can help solve navigational and logistical problems that may arise after the disaster. This work focuses on a framework in which a UAV is paired with an unmanned ground vehicle (UGV). The UAV follows the UGV with a downward-facing camera and helps the ground vehicle navigate the flooded environment. This work makes several contributions: a simulation environment is created to allow for automated data collection in hypothetical disaster scenarios. The simulation environment uses real-world satellite and elevation data to emulate natural disasters such as floods. The environment partially simulates the dynamics of the UAV and UGV, allowing agents to explore during hypothetical disasters. Several semantic image segmentation models are tested for efficacy in identifying obstacles and creating cost maps for navigation within the environment, as seen by the UAV. A deep homography model incorporates temporal relations across video frames to stitch cost maps together. A weighted version of a navigation algorithm is presented to plan a path through the environment. The synthesis of these modules leads to a novel framework wherein a UAV may guide a UGV safely through a disaster area. | en |
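Illustrative note (not part of the catalogued abstract): the abstract mentions a weighted navigation algorithm planning over cost maps derived from aerial segmentation. The sketch below is a minimal, hypothetical example of a weighted A*-style search over a 2D grid cost map; the function name, the weight parameter `w`, and the example grid are assumptions for illustration only, not the thesis implementation.

```python
# Hypothetical sketch: weighted A* over a 2D cost map (not the thesis code).
# Cells hold traversal costs; impassable (e.g., flooded) cells are math.inf.
import heapq
import math

def weighted_astar(cost_map, start, goal, w=1.5):
    """Plan a path on a grid cost map.

    cost_map : 2D list of per-cell traversal costs (math.inf = impassable)
    start, goal : (row, col) tuples
    w : heuristic weight; w > 1 trades optimality for faster search
    """
    rows, cols = len(cost_map), len(cost_map[0])

    def heuristic(cell):
        # Euclidean distance to the goal, inflated by the weight.
        return w * math.dist(cell, goal)

    open_heap = [(heuristic(start), 0.0, start)]   # (f, g, cell)
    g_score = {start: 0.0}
    came_from = {}

    while open_heap:
        _, g, current = heapq.heappop(open_heap)
        if current == goal:
            # Reconstruct the path by walking parents back to the start.
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        if g > g_score.get(current, math.inf):
            continue  # stale heap entry
        r, c = current
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            step = cost_map[nr][nc]
            if math.isinf(step):
                continue  # flooded / blocked cell
            ng = g + step
            if ng < g_score.get((nr, nc), math.inf):
                g_score[(nr, nc)] = ng
                came_from[(nr, nc)] = current
                heapq.heappush(open_heap, (ng + heuristic((nr, nc)), ng, (nr, nc)))
    return None  # no traversable route found

# Example: a small cost map where the middle column is flooded except one gap.
if __name__ == "__main__":
    X = math.inf
    grid = [
        [1, 1, X, 1, 1],
        [1, 2, X, 2, 1],
        [1, 1, 1, 1, 1],
        [1, 2, X, 2, 1],
    ]
    print(weighted_astar(grid, (0, 0), (0, 4)))
```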
dc.description.abstractgeneral | Damage to infrastructure after a natural disaster can make navigation a major challenge. Imagine a hurricane has hit someone's house; they are hurt and need to go to the hospital. Using a traditional GPS navigation system, or even their memory, may not work because many roads could be impassable. However, if the GPS could be quickly updated as to which roads were not flooded, it could still be used to navigate and avoid hazards. While the system presented is designed to work with a self-driving vehicle, it could easily be extended to give directions to a human. The goal of this work is to provide a system, based on aerial photography, that could be used as a replacement for GPS navigation. The advantage of this system is that flooded or damaged infrastructure can be identified and avoided in real-time. The system could even identify other possible routes from the photography, such as driving across a field to reach higher ground. Like a GPS, the system works automatically, tracking a user's position and suggesting turns, aiding navigation. A contribution of this work is a simulation of the environment designed in a video game engine. The game engine creates a video game world that can be flooded and used to test the new navigation system. The video game environment is used to train an artificial intelligence computer model to identify hazards and create routes that avoid them. After training in the video game world, the system could be used in a real-world disaster. | en |
dc.description.degree | Master of Science | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:34688 | en |
dc.identifier.uri | http://hdl.handle.net/10919/110937 | en |
dc.language.iso | en | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Path Planning | en |
dc.subject | Computer Vision | en |
dc.subject | Deep Learning | en |
dc.title | Collaborative Path Planning and Control for Ground Agents Via Photography Collected by Unmanned Aerial Vehicles | en |
dc.type | Thesis | en |
thesis.degree.discipline | Computer Engineering | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | Master of Science | en |