Mamba in Mamba
Mamba, a selective state-space model (SSM), is being explored as an efficient alternative to Transformers for sequence modeling: its recurrence scales linearly with sequence length, avoiding the quadratic cost of self-attention. Current research focuses on adapting Mamba architectures to diverse applications, including computer vision, natural language processing, and signal processing, often benchmarking their accuracy and efficiency against established Transformer and CNN baselines. The goal is to improve the speed and scalability of deep learning models while matching or exceeding their performance, with implications for resource-constrained applications and large-scale deployments. The potential impact spans numerous fields, from medical image analysis and autonomous driving to personalized recommendations and drug discovery.
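To make the recurrence concrete, below is a minimal NumPy sketch of the selective state-space update at Mamba's core: the step size delta and the projections B and C are computed from the input at each position, which is what lets the model selectively retain or forget state. All names and shapes here (`selective_ssm_scan`, `W_delta`, `W_B`, `W_C`) are illustrative assumptions, not the reference implementation; the actual Mamba layer adds gating, a 1-D convolution, and a hardware-aware parallel scan in place of this sequential loop.

```python
import numpy as np

def selective_ssm_scan(x, A, W_delta, W_B, W_C):
    """Sequential sketch of a selective SSM recurrence (Mamba-style).

    x: (T, D) input sequence. A: (D, N) diagonal state dynamics per channel.
    W_delta (D, D), W_B (D, N), W_C (D, N): hypothetical input-dependent
    projections -- the "selective" part that distinguishes Mamba from
    earlier SSMs such as S4, whose dynamics are input-independent.
    """
    T, D = x.shape
    _, N = A.shape
    h = np.zeros((D, N))                     # hidden state, one row per channel
    y = np.zeros((T, D))
    for t in range(T):
        delta = np.logaddexp(0.0, x[t] @ W_delta)[:, None]  # softplus step size, (D, 1)
        B = x[t] @ W_B                       # input projection, (N,)
        C = x[t] @ W_C                       # output projection, (N,)
        A_bar = np.exp(delta * A)            # zero-order-hold discretisation of A
        B_bar = delta * B[None, :]           # simplified (Euler) discretisation of B
        h = A_bar * h + B_bar * x[t][:, None]  # linear state update
        y[t] = h @ C                         # read-out
    return y

rng = np.random.default_rng(0)
T, D, N = 16, 4, 8
x = rng.standard_normal((T, D))
A = -np.exp(rng.standard_normal((D, N)))     # negative real parts => stable dynamics
y = selective_ssm_scan(x, A,
                       rng.standard_normal((D, D)),
                       rng.standard_normal((D, N)),
                       rng.standard_normal((D, N)))
print(y.shape)                               # (16, 4)
```

Because the update is linear in the state, the loop above can be replaced by an associative parallel scan, which is how Mamba achieves its training-time efficiency on long sequences.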
Papers
MambaCSR: Dual-Interleaved Scanning for Compressed Image Super-Resolution With SSMs
Yulin Ren, Xin Li, Mengxi Guo, Bingchen Li, Shijie Zhao, Zhibo Chen
UNetMamba: An Efficient UNet-Like Mamba for Semantic Segmentation of High-Resolution Remote Sensing Images
Enze Zhu, Zhan Chen, Dingkai Wang, Hanru Shi, Xiaoxuan Liu, Lei Wang
MambaDS: Near-Surface Meteorological Field Downscaling with Topography Constrained Selective State Space Modeling
Zili Liu, Hao Chen, Lei Bai, Wenyuan Li, Wanli Ouyang, Zhengxia Zou, Zhenwei Shi
MUSE: Mamba is Efficient Multi-scale Learner for Text-video Retrieval
Haoran Tang, Meng Cao, Jinfa Huang, Ruyang Liu, Peng Jin, Ge Li, Xiaodan Liang
ColorMamba: Towards High-quality NIR-to-RGB Spectral Translation with Mamba
Huiyu Zhai, Guang Jin, Xingxing Yang, Guosheng Kang
MambaMIM: Pre-training Mamba with State Space Token-interpolation
Fenghe Tang, Bingkun Nian, Yingtai Li, Jie Yang, Liu Wei, S. Kevin Zhou
MambaVT: Spatio-Temporal Contextual Modeling for robust RGB-T Tracking
Simiao Lai, Chang Liu, Jiawen Zhu, Ben Kang, Yang Liu, Dong Wang, Huchuan Lu