Paper ID: 2409.09670
Unsupervised Hyperspectral and Multispectral Image Blind Fusion Based on Deep Tucker Decomposition Network with Spatial-Spectral Manifold Learning
He Wang, Yang Xu, Zebin Wu, Zhihui Wei
Hyperspectral and multispectral image fusion aims to generate a hyperspectral image with both high spectral and high spatial resolution (HR-HSI) by fusing a high-resolution multispectral image (HR-MSI) with a low-resolution hyperspectral image (LR-HSI). However, existing fusion methods face challenges such as unknown degradation parameters and incomplete exploitation of the correlation between high-dimensional structures and deep image features. To overcome these issues, this article proposes an unsupervised blind fusion method for hyperspectral and multispectral images based on Tucker decomposition and spatial-spectral manifold learning (DTDNML). We design a novel deep Tucker decomposition network that maps the LR-HSI and HR-MSI into a consistent feature space and achieves reconstruction through decoders with shared parameters. To better exploit and fuse the spatial-spectral features in the data, we design a core tensor fusion network that incorporates a spatial-spectral attention mechanism for aligning and fusing features at different scales. Furthermore, to enhance the capacity to capture global information, a Laplacian-based spatial-spectral manifold constraint is introduced in the shared decoders. Extensive experiments validate that the proposed method improves the accuracy and efficiency of hyperspectral and multispectral image fusion on different remote sensing datasets. The source code is available at this https URL.
Submitted: Sep 15, 2024
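The abstract names two concrete mechanisms: reconstruction through shared-parameter decoders in Tucker form, and a Laplacian-based spatial-spectral manifold constraint. Below is a minimal PyTorch sketch of both ideas as commonly formulated; the module name `TuckerDecoder`, the `mode_product` helper, the tensor sizes, and the stand-in Laplacian are illustrative assumptions, not the authors' released implementation (see the linked source code for that).

```python
# Minimal sketch: Tucker-style reconstruction X = G x1 U_h x2 U_w x3 U_s from a
# core tensor G, plus a graph-Laplacian manifold penalty. All shapes and names
# are hypothetical choices for illustration.
import torch
import torch.nn as nn


def mode_product(core: torch.Tensor, factor: torch.Tensor, mode: int) -> torch.Tensor:
    """Multiply a 3-D tensor by a factor matrix along `mode` (0, 1, or 2)."""
    core = torch.moveaxis(core, mode, 0)             # bring target mode to front
    front, *rest = core.shape
    out = factor @ core.reshape(front, -1)           # expand rank dim -> full dim
    out = out.reshape(factor.shape[0], *rest)
    return torch.moveaxis(out, 0, mode)


class TuckerDecoder(nn.Module):
    """Decoder that reconstructs an image cube from a shared core tensor via
    learned height, width, and spectral factor matrices (Tucker reconstruction).
    Sharing one such decoder across branches mirrors the shared-parameter idea."""

    def __init__(self, ranks, dims):
        super().__init__()
        (rh, rw, rs), (H, W, S) = ranks, dims
        self.U_h = nn.Parameter(torch.randn(H, rh) * 0.01)  # height factor
        self.U_w = nn.Parameter(torch.randn(W, rw) * 0.01)  # width factor
        self.U_s = nn.Parameter(torch.randn(S, rs) * 0.01)  # spectral factor

    def forward(self, core: torch.Tensor) -> torch.Tensor:
        x = mode_product(core, self.U_h, 0)
        x = mode_product(x, self.U_w, 1)
        return mode_product(x, self.U_s, 2)


def laplacian_penalty(x: torch.Tensor, L: torch.Tensor, mode: int) -> torch.Tensor:
    """Manifold smoothness term tr(X_(mode)^T L X_(mode)) on the mode unfolding,
    encouraging rows that are neighbors on the graph to stay similar."""
    xm = torch.moveaxis(x, mode, 0).reshape(x.shape[mode], -1)  # mode unfolding
    return torch.sum(xm * (L @ xm)) / xm.numel()


# Usage: reconstruct a 64x64x31 cube from a (16, 16, 8) core and add a spectral
# manifold term; an identity matrix stands in for a precomputed Laplacian here.
decoder = TuckerDecoder(ranks=(16, 16, 8), dims=(64, 64, 31))
core = torch.randn(16, 16, 8)
hr_hsi = decoder(core)
loss = laplacian_penalty(hr_hsi, torch.eye(31), mode=2)
```

In a blind-fusion setting, a reconstruction loss against the observed LR-HSI and HR-MSI would be combined with such a penalty (applied along spatial and/or spectral modes), with the core tensor produced by the fusion network rather than sampled randomly as in this toy example.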