Title: Self-Correlation and Cross-Correlation Learning for Few-Shot Remote Sensing Image Semantic Segmentation
Authors: Wang, Linhan; Lei, Shuo; He, Jianfeng; Wang, Shengkun; Zhang, Min; Lu, Chang-Tien
Date issued: 2023-11-13
Date available: 2024-03-01
Handle: https://hdl.handle.net/10919/118234
DOI: https://doi.org/10.1145/3589132.3625570
Type: Article - Refereed
Format: application/pdf
Language: en
Rights: The author(s), 2024-01-01
License: Creative Commons Attribution 4.0 International

Abstract: Remote sensing image semantic segmentation is an important problem for remote sensing image interpretation. Although remarkable progress has been achieved, existing deep neural network methods suffer from a reliance on massive training data. Few-shot remote sensing semantic segmentation aims to learn to segment target objects in a query image using only a few annotated support images of the target class. The limitations of most existing few-shot learning methods stem primarily from their sole focus on extracting information from support images, which fails to effectively address the large variance in appearance and scale of geographic objects. To tackle these challenges, we propose a Self-Correlation and Cross-Correlation Learning Network for few-shot remote sensing image semantic segmentation. Our model enhances generalization by considering both self-correlation and cross-correlation between support and query images when making segmentation predictions. To further exploit self-correlation within the query image, we adopt a classical spectral method to produce a class-agnostic segmentation mask based on the basic visual information of the image. Extensive experiments on two remote sensing image datasets demonstrate the effectiveness and superiority of our model in few-shot remote sensing image semantic segmentation. The code is available at https://github.com/linhanwang/SCCNet.
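To make the correlation terminology in the abstract concrete, the sketch below shows one common way such correlations are built in few-shot segmentation: pairwise cosine similarity between deep feature maps, with the support features masked to the target class. This is a minimal NumPy illustration, not the paper's actual architecture; the function names, shapes, and masking scheme are assumptions for exposition.

```python
import numpy as np

def cross_correlation(query_feat, support_feat, support_mask):
    """Cosine-similarity cross-correlation between a query feature map
    and a class-masked support feature map (illustrative sketch; shapes
    and masking are assumptions, not the paper's exact formulation).

    query_feat:   (C, Hq, Wq) query feature map
    support_feat: (C, Hs, Ws) support feature map
    support_mask: (Hs, Ws)    binary mask of the target class
    Returns an (Hq*Wq, Hs*Ws) correlation matrix.
    """
    C = query_feat.shape[0]
    q = query_feat.reshape(C, -1)                     # (C, Hq*Wq)
    s = (support_feat * support_mask).reshape(C, -1)  # (C, Hs*Ws), background zeroed
    q = q / (np.linalg.norm(q, axis=0, keepdims=True) + 1e-8)
    s = s / (np.linalg.norm(s, axis=0, keepdims=True) + 1e-8)
    return q.T @ s                                    # pairwise cosine similarities

def self_correlation(feat):
    """Pairwise cosine similarity of a feature map with itself,
    i.e. an affinity matrix over its own spatial locations."""
    C = feat.shape[0]
    f = feat.reshape(C, -1)
    f = f / (np.linalg.norm(f, axis=0, keepdims=True) + 1e-8)
    return f.T @ f
```

In this reading, high-scoring rows of the cross-correlation matrix indicate query locations resembling the support object, while the self-correlation matrix serves as an affinity over query locations, the kind of matrix on which a classical spectral method (e.g. an eigendecomposition of the graph Laplacian) can produce a class-agnostic segmentation.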