Show simple item record

dc.contributor.author: Shao, Yang
dc.contributor.author: Cooner, Austin J.
dc.contributor.author: Walsh, Stephen J.
dc.date.accessioned: 2021-04-26T12:24:03Z
dc.date.available: 2021-04-26T12:24:03Z
dc.date.issued: 2021-04-15
dc.identifier.citation: Shao, Y.; Cooner, A.J.; Walsh, S.J. Assessing Deep Convolutional Neural Networks and Assisted Machine Perception for Urban Mapping. Remote Sens. 2021, 13, 1523.
dc.identifier.uri: http://hdl.handle.net/10919/103115
dc.description.abstract: High-spatial-resolution satellite imagery has been widely applied for detailed urban mapping. Recently, deep convolutional neural networks (DCNNs) have shown promise in certain remote sensing applications, but they are still relatively new techniques for general urban mapping. This study examines the use of two DCNNs (U-Net and VGG16) to provide an automatic schema to support high-resolution mapping of buildings, road/open built-up, and vegetation cover. Using WorldView-2 imagery as input, we first applied an established OBIA method to characterize major urban land cover classes. An OBIA-derived urban map was then divided into a training and testing region to evaluate the DCNNs’ performance. For U-Net mapping, we were particularly interested in how sample size, or the number of image tiles, affects mapping accuracy. U-Net generated cross-validation accuracies ranging from 40.5% to 95.2% for training sample sizes from 32 to 4096 image tiles (each tile was 256 by 256 pixels). A per-pixel accuracy assessment led to 87.8% overall accuracy for the testing region, suggesting U-Net’s good generalization capabilities. For the VGG16 mapping, we proposed an object-based framing paradigm that retains spatial information and assists machine perception through Gaussian blurring. Gaussian blurring was used as a pre-processing step to enhance the contrast between objects of interest and background (contextual) information. Combined with the pre-trained VGG16 and transfer learning, this analytical approach generated a 77.3% overall accuracy for per-object assessment. The mapping accuracy could be further improved given more robust segmentation algorithms and better quantity/quality of training samples. Our study shows significant promise for DCNN implementation for urban mapping, and our approach can transfer to a number of other remote sensing applications.
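The abstract's "assisted machine perception" step uses Gaussian blurring as pre-processing to heighten the contrast between an object of interest and its contextual background before the pre-trained VGG16 classifies it. A minimal pure-Python sketch of one plausible reading of that framing idea, in which the object's pixels stay sharp while the surrounding context is blurred; the function names (`gaussian_kernel`, `blur2d`, `frame_object`), parameters, and the keep-object/blur-background composite are illustrative assumptions, not the paper's actual implementation:

```python
import math

def gaussian_kernel(radius, sigma):
    # 1-D Gaussian weights, normalized to sum to 1
    w = [math.exp(-(i * i) / (2.0 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(w)
    return [v / s for v in w]

def blur2d(img, radius=2, sigma=1.0):
    # Separable Gaussian blur over a 2-D list of floats, clamping at edges.
    k = gaussian_kernel(radius, sigma)
    h, w = len(img), len(img[0])
    # Horizontal pass
    tmp = [[sum(k[j + radius] * row[min(max(x + j, 0), w - 1)]
                for j in range(-radius, radius + 1))
            for x in range(w)]
           for row in img]
    # Vertical pass
    return [[sum(k[j + radius] * tmp[min(max(y + j, 0), h - 1)][x]
                 for j in range(-radius, radius + 1))
             for x in range(w)]
            for y in range(h)]

def frame_object(img, mask, radius=2, sigma=1.0):
    # Keep pixels inside the object's mask unchanged; replace the
    # contextual background with its blurred version, so the object
    # stands out against a softened surround.
    blurred = blur2d(img, radius, sigma)
    return [[img[y][x] if mask[y][x] else blurred[y][x]
             for x in range(len(img[0]))]
            for y in range(len(img))]
```

In a full pipeline, each framed image chip would then be resized to the network's input size and passed to a pre-trained VGG16 with a retrained classification head (transfer learning), as the abstract describes.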
dc.format.mimetype: application/pdf
dc.language.iso: en
dc.publisher: MDPI
dc.rights: Creative Commons Attribution 4.0 International
dc.rights.uri: http://creativecommons.org/licenses/by/4.0/
dc.title: Assessing Deep Convolutional Neural Networks and Assisted Machine Perception for Urban Mapping
dc.type: Article - Refereed
dc.date.updated: 2021-04-23T13:35:53Z
dc.description.version: Published version
dc.contributor.department: Geography
dc.title.serial: Remote Sensing
dc.identifier.doi: https://doi.org/10.3390/rs13081523
dc.type.dcmitype: Text
dc.type.dcmitype: StillImage

