Mamba in Mamba
Mamba, a recently proposed selective state-space model, is being explored as an efficient alternative to Transformers for sequence modeling. Current research adapts Mamba architectures to diverse applications, including computer vision, natural language processing, and signal processing, often comparing their performance and efficiency against established methods such as Transformers and CNNs. The goal is to improve the speed and scalability of deep learning models while matching or exceeding the performance of existing approaches, which matters both for resource-constrained settings and for large-scale deployments. The potential impact spans numerous fields, from medical image analysis and autonomous driving to personalized recommendations and drug discovery.
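To make the efficiency claim concrete, below is a minimal sketch of the linear state-space recurrence that underlies models of this kind: a single pass over the sequence, so cost grows linearly with length rather than quadratically as in full self-attention. It assumes a fixed, pre-discretized system with arbitrary toy parameters (Mamba additionally makes the state-space parameters input-dependent, its "selective" mechanism, and uses hardware-aware scans); the shapes and the `ssm_scan` helper are illustrative assumptions, not any paper's implementation.

```python
# Minimal sketch (not Mamba's actual implementation): a discretized linear
# state-space recurrence h_t = A_bar h_{t-1} + B_bar x_t, y_t = C h_t,
# applied step by step over the sequence. All shapes and parameter values
# below are illustrative assumptions.
import numpy as np

def ssm_scan(x, A_bar, B_bar, C):
    """Run a linear state-space recurrence over a 1-D input sequence.

    x:      (T,) input sequence
    A_bar:  (N, N) discretized state matrix
    B_bar:  (N,) input projection
    C:      (N,) output projection
    returns (T,) output sequence
    """
    T = x.shape[0]
    N = A_bar.shape[0]
    h = np.zeros(N)
    y = np.empty(T)
    for t in range(T):                 # one O(T) pass: cost is linear in sequence length
        h = A_bar @ h + B_bar * x[t]   # update hidden state from previous state and input
        y[t] = C @ h                   # read out the current output
    return y

# Toy usage: a random, roughly stable system on a length-1000 sequence.
rng = np.random.default_rng(0)
N = 16
A_bar = 0.9 * np.eye(N) + 0.01 * rng.standard_normal((N, N))
B_bar = rng.standard_normal(N)
C = rng.standard_normal(N)
x = rng.standard_normal(1000)
print(ssm_scan(x, A_bar, B_bar, C)[:5])
```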
Papers
From Pixels to Gigapixels: Bridging Local Inductive Bias and Long-Range Dependencies with Pixel-Mamba
Zhongwei Qiu, Hanqing Chao, Tiancheng Lin, Wanxing Chang, Zijiang Yang, Wenpei Jiao, Yixuan Shen, Yunshuo Zhang, Yelin Yang, Wenbin Liu, Hui Jiang, Yun Bian, Ke Yan, Dakai Jin, Le Lu
MOL-Mamba: Enhancing Molecular Representation with Structural & Electronic Insights
Jingjing Hu, Dan Guo, Zhan Si, Deguang Liu, Yunfeng Diao, Jing Zhang, Jinxing Zhou, Meng Wang
SAM-Mamba: Mamba Guided SAM Architecture for Generalized Zero-Shot Polyp Segmentation
Tapas Kumar Dutta, Snehashis Majhi, Deepak Ranjan Nayak, Debesh Jha
DG-Mamba: Robust and Efficient Dynamic Graph Structure Learning with Selective State Space Models
Haonan Yuan, Qingyun Sun, Zhaonan Wang, Xingcheng Fu, Cheng Ji, Yongjian Wang, Bo Jin, Jianxin Li