Paper ID: 2206.01096

A Dual-fusion Semantic Segmentation Framework With GAN For SAR Images

Donghui Li, Jia Liu, Fang Liu, Wenhua Zhang, Andi Zhang, Wenfei Gao, Jiao Shi

Deep learning-based semantic segmentation is one of the popular methods in remote sensing image segmentation. In this paper, a network based on the widely used encoder-decoder architecture is proposed to accomplish synthetic aperture radar (SAR) image segmentation. Given the richer representation capability of optical images, we propose to enrich SAR images with optical images generated by a generative adversarial network (GAN) trained on numerous SAR and optical images. These generated optical images serve as an expansion of the original SAR images, helping to ensure robust segmentation results. The optical images generated by the GAN are then stitched together with the corresponding real SAR images, and an attention module applied to the stitched data strengthens the representation of the objects. Experiments indicate that our method is efficient compared to other commonly used methods.
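A minimal sketch of the dual-fusion idea described in the abstract, assuming that "stitching" means channel-wise concatenation of the SAR image with the GAN-generated optical image, followed by a channel-attention block and an arbitrary encoder-decoder segmentation network. All module names, the squeeze-and-excitation style attention, and the frozen generator are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style reweighting of the stitched SAR+optical channels (assumed design)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(x)  # per-channel attention weights


class DualFusionSegmenter(nn.Module):
    """Hypothetical wrapper: SAR -> generated optical -> stitch -> attention -> encoder-decoder."""
    def __init__(self, generator: nn.Module, seg_net: nn.Module, fused_channels: int):
        super().__init__()
        self.generator = generator      # pretrained SAR-to-optical GAN generator (assumed frozen)
        self.attention = ChannelAttention(fused_channels)
        self.seg_net = seg_net          # any encoder-decoder segmentation network

    def forward(self, sar):
        with torch.no_grad():           # generator is only used for data enrichment here
            fake_optical = self.generator(sar)
        fused = torch.cat([sar, fake_optical], dim=1)   # "stitch" along the channel axis
        return self.seg_net(self.attention(fused))
```

For example, with a 1-channel SAR input and a 3-channel generated optical image, `fused_channels` would be 4; the actual channel counts, generator architecture, and segmentation backbone used by the authors are not specified in the abstract.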

Submitted: Jun 2, 2022