Authors: Christie, Gordon A.; Shoemaker, Adam; Kochersberger, Kevin B.; Tokekar, Pratap; McLean, Lance; Leonessa, Alexander
Date deposited: 2017-12-07
Date issued: 2016-08-31
Handle: http://hdl.handle.net/10919/81077

Abstract: Autonomously searching for hazardous radiation sources requires that the aerial and ground systems understand the scene they are scouting. In this paper, we present systems, algorithms, and experiments to perform radiation search using unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) by employing semantic scene segmentation. The aerial data is used to identify radiological points of interest, generate an orthophoto along with a digital elevation model (DEM) of the scene, and perform semantic segmentation to assign a category (e.g., road, grass) to each pixel in the orthophoto. We perform semantic segmentation by training a model on a dataset of images we collected and annotated, using the model to perform inference on images of the test area not seen by the model, and then refining the results with the DEM to better reason about category predictions at each pixel. We then use all of these outputs to plan a path for a UGV carrying a LiDAR to map the environment and avoid obstacles not present during the flight, and a radiation detector to collect more precise radiation measurements from the ground. The results for each scenario tested were favorable. We also note that our approach is general and has the potential to work for a variety of different sensing tasks.

Language: en-US
Rights: In Copyright
Title: Radiation Search Operations using Scene Understanding with Autonomous UAV and UGV
Type: Article
arXiv: https://arxiv.org/abs/1609.00017
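The abstract mentions refining per-pixel category predictions with the DEM. A minimal sketch of one way such a refinement could work is below; the class names, the slope threshold, the 4-neighbor slope estimate, and the relabeling rule are illustrative assumptions for this sketch, not the authors' actual method.

```python
# Hypothetical DEM-based refinement of per-pixel segmentation labels:
# pixels predicted as a "flat" class (e.g. road) that sit on steep
# terrain in the DEM are relabeled to a fallback class.

def local_slope(dem, r, c):
    """Max absolute elevation difference to the 4-connected neighbors."""
    h, w = len(dem), len(dem[0])
    here = dem[r][c]
    diffs = [abs(here - dem[nr][nc])
             for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
             if 0 <= nr < h and 0 <= nc < w]
    return max(diffs) if diffs else 0.0

def refine_labels(labels, dem, flat_classes=("road",),
                  slope_thresh=0.5, fallback="grass"):
    """Relabel pixels whose predicted class implies flat terrain but
    whose DEM neighborhood is steep (threshold is an assumption)."""
    h, w = len(labels), len(labels[0])
    out = [row[:] for row in labels]
    for r in range(h):
        for c in range(w):
            if labels[r][c] in flat_classes and \
                    local_slope(dem, r, c) > slope_thresh:
                out[r][c] = fallback
    return out

labels = [["road", "road"],
          ["road", "grass"]]
dem = [[0.0, 0.0],
       [2.0, 2.0]]  # sharp 2 m step between the two rows
print(refine_labels(labels, dem))
# -> [['grass', 'grass'], ['grass', 'grass']]
```

In practice the paper's refinement would operate on the orthophoto-aligned DEM and the trained model's predictions; this toy version only shows the general pattern of combining elevation context with per-pixel labels.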