Remote Sensing Image Analysis
Remote sensing image analysis focuses on extracting meaningful information from images captured by satellites and aerial platforms, primarily for Earth observation. Current research emphasizes improving the accuracy and efficiency of core tasks, including semantic segmentation, object detection (especially of oriented objects), and change detection, often by leveraging deep learning models such as transformers and U-Nets alongside techniques such as prompt learning and multimodal fusion. These advances underpin applications ranging from precision agriculture and urban planning to environmental monitoring and disaster response, enabling more accurate and timely insights from remotely sensed data.
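To make the architecture theme concrete, the sketch below shows a minimal U-Net-style encoder-decoder for per-pixel semantic segmentation of multispectral tiles, written in PyTorch. It is an illustrative toy model, not the method of any paper listed here; the class name TinyUNet, the four-band input, the six output classes, and the 256x256 tile size are all assumptions chosen for the example.

```python
# Minimal sketch (assumed, not from any cited paper): a tiny U-Net-style
# encoder-decoder for semantic segmentation of multispectral remote sensing tiles.
# Channel counts, class count, and tile size are illustrative assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class TinyUNet(nn.Module):
    """One-level U-Net: encoder, bottleneck, decoder with a skip connection."""

    def __init__(self, in_channels=4, num_classes=6):  # e.g. 4 bands: R, G, B, NIR
        super().__init__()
        self.enc = conv_block(in_channels, 32)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec = conv_block(64, 32)          # 64 = upsampled 32 + skip 32
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        skip = self.enc(x)                     # full-resolution features
        x = self.bottleneck(self.pool(skip))   # downsampled context
        x = self.up(x)                         # back to full resolution
        x = self.dec(torch.cat([x, skip], dim=1))
        return self.head(x)                    # per-pixel class logits


if __name__ == "__main__":
    model = TinyUNet(in_channels=4, num_classes=6)
    tile = torch.randn(1, 4, 256, 256)         # one 256x256 multispectral tile
    logits = model(tile)
    print(logits.shape)                        # torch.Size([1, 6, 256, 256])
```

The skip connection is what makes U-Net-style models effective for this task: the decoder recovers fine spatial detail from the encoder while the bottleneck supplies broader scene context, which is why variants of this design remain a common baseline for segmentation and change-detection pipelines.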
Papers
SACANet: scene-aware class attention network for semantic segmentation of remote sensing images
Xiaowen Ma, Rui Che, Tingfeng Hong, Mengting Ma, Ziyan Zhao, Tian Feng, Wei Zhang
STNet: Spatial and Temporal feature fusion network for change detection in remote sensing images
Xiaowen Ma, Jiawei Yang, Tingfeng Hong, Mengting Ma, Ziyan Zhao, Tian Feng, Wei Zhang
Domain Adaptable Self-supervised Representation Learning on Remote Sensing Satellite Imagery
Muskaan Chopra, Prakash Chandra Chhipa, Gopal Mengi, Varun Gupta, Marcus Liwicki
CMID: A Unified Self-Supervised Learning Framework for Remote Sensing Image Understanding
Dilxat Muhtar, Xueliang Zhang, Pengfeng Xiao, Zhenshi Li, Feng Gu