Mamba in Mamba
Mamba is a selective state-space model proposed as an efficient alternative to Transformers for sequence modeling: its recurrence scales linearly with sequence length rather than quadratically with attention. Current research adapts Mamba architectures to diverse applications, including computer vision, natural language processing, and signal processing, often benchmarking performance and efficiency against established Transformer and CNN baselines. The goal is to improve the speed and scalability of deep learning models while matching or exceeding their accuracy, with implications for resource-constrained settings and large-scale deployments. The potential impact spans numerous fields, from medical image analysis and autonomous driving to personalized recommendation and drug discovery.
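To make the idea behind these papers concrete, the following is a toy, single-channel sketch of a selective state-space scan in the spirit of Mamba; the function name, parameterization, and shapes are illustrative assumptions, not any paper's actual implementation (real Mamba layers use multi-channel, hardware-aware parallel scans):

```python
import math

def selective_ssm_scan(xs, A, B_w, C_w, dt_w):
    """Toy sequential scan of a selective state-space model (single channel).

    Per step t (a simplified, assumed parameterization):
        dt_t   = softplus(dt_w * x_t)            # input-dependent step size
        h_t[n] = exp(dt_t * A[n]) * h_{t-1}[n] + dt_t * B_t[n] * x_t
        y_t    = sum_n C_t[n] * h_t[n]
    with B_t[n] = B_w[n] * x_t and C_t[n] = C_w[n] * x_t, i.e. the dynamics
    depend on the input -- this is the "selective" part.
    """
    N = len(A)          # state dimension
    h = [0.0] * N       # hidden state, initialized to zero
    ys = []
    for x in xs:
        dt = math.log1p(math.exp(dt_w * x))      # softplus keeps dt > 0
        B = [b * x for b in B_w]                 # input-dependent B_t
        C = [c * x for c in C_w]                 # input-dependent C_t
        # Discretized recurrence: negative A entries give a decaying state.
        h = [math.exp(dt * A[n]) * h[n] + dt * B[n] * x for n in range(N)]
        ys.append(sum(C[n] * h[n] for n in range(N)))
    return ys
```

The key property is that each step costs O(N) regardless of how long the sequence already is, which is what makes state-space models attractive for the long-sequence settings (speech, time series, 3D medical volumes) covered by the papers below.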
Papers
Mamba-ST: State Space Model for Efficient Style Transfer
Filippo Botti, Alex Ergasti, Leonardo Rossi, Tomaso Fontanini, Claudio Ferrari, Massimo Bertozzi, Andrea Prati
Leveraging Joint Spectral and Spatial Learning with MAMBA for Multichannel Speech Enhancement
Wenze Ren, Haibin Wu, Yi-Cheng Lin, Xuanjun Chen, Rong Chao, Kuo-Hsuan Hung, You-Jin Li, Wen-Yuan Ting, Hsin-Min Wang, Yu Tsao
MambaFoley: Foley Sound Generation using Selective State-Space Models
Marco Furio Colombo, Francesca Ronchini, Luca Comanducci, Fabio Antonacci
Integration of Mamba and Transformer -- MAT for Long-Short Range Time Series Forecasting with Application to Weather Dynamics
Wenqing Zhang, Junming Huang, Ruotong Wang, Changsong Wei, Wenqian Huang, Yuxin Qiao
Mamba-YOLO-World: Marrying YOLO-World with Mamba for Open-Vocabulary Detection
Haoxuan Wang, Qingdong He, Jinlong Peng, Hao Yang, Mingmin Chi, Yabiao Wang
Tri-Plane Mamba: Efficiently Adapting Segment Anything Model for 3D Medical Images
Hualiang Wang, Yiqun Lin, Xinpeng Ding, Xiaomeng Li