COCO-Bridge: Common Objects in Context Dataset and Benchmark for Structural Detail Detection of Bridges
dc.contributor.author | Bianchi, Eric Loran | en |
dc.contributor.committeechair | Hebdon, Matthew H. | en |
dc.contributor.committeemember | Abbott, A. Lynn | en |
dc.contributor.committeemember | Koutromanos, Ioannis | en |
dc.contributor.department | Civil and Environmental Engineering | en |
dc.date.accessioned | 2019-02-15T09:00:40Z | en |
dc.date.available | 2019-02-15T09:00:40Z | en |
dc.date.issued | 2019-02-14 | en |
dc.description.abstract | Common Objects in Context for bridge inspection (COCO-Bridge) was introduced for use by unmanned aircraft systems (UAS) to assist in GPS-denied environments, flight planning, and detail identification and contextualization, but it has far-reaching applications such as augmented reality (AR) and other artificial intelligence (AI) platforms. COCO-Bridge is an annotated dataset which can be used to train a convolutional neural network (CNN) to identify specific structural details. Many annotated datasets have been developed to detect regions of interest in images for a wide variety of applications and industries. While some annotated datasets of structural defects (primarily cracks) have been developed, most efforts are individualized and focus on a small niche of the industry. This effort initiated a benchmark dataset with a focus on structural details. This research investigated the parameters required for detail identification and evaluated performance enhancements to the annotation process. The image dataset consisted of four structural details which are commonly reviewed and rated during bridge inspections: bearings, cover plate terminations, gusset plate connections, and out-of-plane stiffeners. This initial version of COCO-Bridge includes a total of 774 images: 10% for evaluation and 90% for training. Several models were trained on the dataset to evaluate overfitting and the performance enhancements gained from augmentation and the number of iteration steps. Methods to improve the predictive capability of the model without the addition of unique data were investigated in order to reduce the required number of training images. Results from model tests indicated the following: additional images, mirrored along the vertical axis, improved precision and accuracy; increasing the number of computational step iterations improved predictive precision and accuracy; and the optimal confidence threshold for operation was 25%. Annotation recommendations and improvements were also identified and documented as a result of the research. | en |
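The abstract describes three concrete dataset-handling steps: a 90%/10% training/evaluation split of the 774 images, vertical-axis mirroring as augmentation, and a 25% confidence threshold for accepting detections. The sketch below is a minimal, hedged illustration of those steps only, not the thesis implementation; the annotation file name (coco_bridge_annotations.json), the COCO-style field layout, and the Pillow-based flip are assumptions made for the example.

    # Minimal sketch (not the thesis code) of the dataset steps named in the abstract.
    import json
    import random

    from PIL import Image, ImageOps

    random.seed(0)

    # Load a COCO-style annotation file (hypothetical file name).
    with open("coco_bridge_annotations.json") as f:
        coco = json.load(f)

    # 90% of images for training, 10% for evaluation, as stated in the abstract.
    images = list(coco["images"])
    random.shuffle(images)
    cut = int(0.9 * len(images))
    train_images, eval_images = images[:cut], images[cut:]

    def mirror_image_and_box(image_path, bbox, image_width):
        """Mirror an image about its vertical axis and remap one COCO bounding box.

        COCO boxes are [x, y, w, h] with (x, y) at the top-left corner, so a
        left-right flip maps x to image_width - x - w; y, w, and h are unchanged.
        """
        flipped = ImageOps.mirror(Image.open(image_path))  # left-right flip
        x, y, w, h = bbox
        return flipped, [image_width - x - w, y, w, h]

    def keep_confident(detections, threshold=0.25):
        """Discard predictions below the 25% confidence threshold the study found optimal."""
        return [d for d in detections if d["score"] >= threshold]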
dc.description.abstractgeneral | Common Objects in Context for bridge inspection (COCO-Bridge) was introduced to improve the drone-conducted bridge inspection process. Drones are a great tool for bridge inspectors because they bring flexibility and access to the inspection. However, drones have a notoriously difficult time operating near bridges, because the signal between the operator and the drone can be lost. COCO-Bridge is an image-based dataset that uses artificial intelligence (AI) as a solution to this particular problem, but it has applications in other facets of the inspection as well. This effort initiated a dataset focused on identifying specific parts of a bridge, or structural bridge elements. This would allow a drone to fly without explicit direction if the signal were lost, and it also has the potential to extend flight time. Extending flight time and operating autonomously are great advantages for drone operators and bridge inspectors. The output from COCO-Bridge would also help inspectors identify areas that are prone to defects by highlighting regions that require inspection. The image dataset consisted of 774 images used to detect four structural bridge elements which are commonly reviewed and rated during bridge inspections. The goal is to continue to increase the number of images and encompass more structural bridge elements in the dataset so that it may be used for all types of bridges. Methods to reduce the required number of images were investigated, because gathering images of structural bridge elements is challenging. The results from model tests helped build a roadmap for the expansion of the dataset and best practices for developing a dataset of this type. | en |
dc.description.degree | MS | en |
dc.format.medium | ETD | en |
dc.identifier.other | vt_gsexam:18727 | en |
dc.identifier.uri | http://hdl.handle.net/10919/87588 | en |
dc.publisher | Virginia Tech | en |
dc.rights | In Copyright | en |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | en |
dc.subject | Convolutional neural network | en |
dc.subject | bridge inspection | en |
dc.subject | UAS | en |
dc.subject | CNN | en |
dc.subject | Artificial Intelligence | en |
dc.subject | Augmented Reality | en |
dc.subject | Deep learning (Machine learning) | en |
dc.subject | Machine learning | en |
dc.title | COCO-Bridge: Common Objects in Context Dataset and Benchmark for Structural Detail Detection of Bridges | en |
dc.type | Thesis | en |
thesis.degree.discipline | Civil Engineering | en |
thesis.degree.grantor | Virginia Polytechnic Institute and State University | en |
thesis.degree.level | masters | en |
thesis.degree.name | MS | en |