Paper ID: 2310.05697

Combining recurrent and residual learning for deforestation monitoring using multitemporal SAR images

Carla Nascimento Neves, Raul Queiroz Feitosa, Mabel X. Ortega Adarme, Gilson Antonio Giraldi

With a vast expanse twice that of Western Europe, the Amazon rainforest is the largest forest on Earth and plays a crucial role in global climate regulation. Yet detecting deforestation in this region from remote sensing data poses a critical challenge, often hindered by the persistent cloud cover that obscures optical satellite data for much of the year. Addressing this need, this paper proposes three deep-learning models tailored for deforestation monitoring, utilizing multitemporal SAR (Synthetic Aperture Radar) data, motivated by its independence from atmospheric conditions. Specifically, the study introduces three novel recurrent fully convolutional network architectures, namely RRCNN-1, RRCNN-2, and RRCNN-3, crafted to enhance the accuracy of deforestation detection. Additionally, this research explores replacing bitemporal image pairs with multitemporal SAR sequences, motivated by the hypothesis that deforestation signs quickly fade in SAR images over time. A comprehensive assessment of the proposed approaches was conducted on a Sentinel-1 multitemporal sequence from a sample site in the Brazilian rainforest. The experimental analysis confirmed that analyzing a sequence of SAR images over an observation period can reveal deforestation spots undetectable in a single pair of images. Notably, the experimental results underscored the superiority of the multitemporal approach, which yielded an improvement of approximately five percent in F1-score across all tested network architectures. In particular, RRCNN-1 achieved the highest accuracy while requiring half the processing time of its closest counterpart.
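The abstract's core idea is combining recurrent and residual learning over a multitemporal SAR sequence. The sketch below is a minimal, hypothetical illustration of that combination, not the paper's actual RRCNN-1/2/3 definitions: a shared residual convolutional encoder is applied to each acquisition date, a convolutional GRU aggregates the sequence over time, and a 1x1 convolution produces per-pixel deforestation logits. All layer sizes, the two-channel (VV/VH) input, and the ConvGRU formulation are assumptions made for illustration.

```python
# Hypothetical recurrent-residual sketch for multitemporal SAR change mapping.
# Architecture details are illustrative assumptions, not the paper's RRCNN models.
import torch
import torch.nn as nn


class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection (residual learning)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        return torch.relu(x + self.body(x))


class ConvGRUCell(nn.Module):
    """Convolutional GRU cell: the recurrent part that fuses acquisition dates."""
    def __init__(self, in_ch: int, hid_ch: int):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3, padding=1)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)
        self.hid_ch = hid_ch

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


class RecurrentResidualSegmenter(nn.Module):
    """Hypothetical recurrent-residual network producing a per-pixel change map."""
    def __init__(self, in_ch: int = 2, feat: int = 32, n_classes: int = 2):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, feat, 3, padding=1)  # per-date feature extractor
        self.res = ResidualBlock(feat)
        self.rnn = ConvGRUCell(feat, feat)                # temporal fusion
        self.head = nn.Conv2d(feat, n_classes, 1)         # deforestation / no-change

    def forward(self, seq):  # seq: (B, T, C, H, W), e.g. a Sentinel-1 date sequence
        b, t, c, h, w = seq.shape
        state = seq.new_zeros(b, self.rnn.hid_ch, h, w)
        for i in range(t):  # recurrence over acquisition dates
            feats = self.res(torch.relu(self.stem(seq[:, i])))
            state = self.rnn(feats, state)
        return self.head(state)  # per-pixel class logits


if __name__ == "__main__":
    # Toy run: a 4-date sequence of dual-pol (VV, VH) 64x64 patches.
    model = RecurrentResidualSegmenter(in_ch=2)
    x = torch.randn(1, 4, 2, 64, 64)
    print(model(x).shape)  # torch.Size([1, 2, 64, 64])
```

Feeding the full sequence through the recurrent state, rather than only a pair of images, is what lets the model retain evidence of deforestation events whose SAR signature has already faded by the final acquisition.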

Submitted: Oct 9, 2023