Mamba-Based
Mamba is a state-space model architecture gaining traction for its efficient handling of long sequences, offering a compelling alternative to computationally expensive transformers. Current research focuses on adapting Mamba to a range of applications, including image classification, segmentation, and fusion; time series forecasting; and natural language processing, often integrating it with convolutional or graph neural networks to boost performance. This line of work promises to improve the efficiency and scalability of deep learning models across diverse fields, particularly in resource-constrained environments or on large datasets. Because Mamba scales linearly with sequence length, it avoids the quadratic cost of transformer self-attention, yielding faster inference and lower memory requirements.
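To make the linear-time claim concrete, below is a minimal single-channel sketch of the selective state-space recurrence at Mamba's core. The parameter names (w_delta, W_B, W_C), the scalar-input simplification, and the plain Python loop are all illustrative assumptions, not the reference implementation's API; the real model operates on multi-channel hidden states and fuses this scan into a hardware-aware parallel kernel.

```python
import numpy as np

def selective_ssm_scan(x, A, w_delta, W_B, W_C):
    """Sketch of a selective SSM scan with a diagonal state matrix.
    x: (T,) scalar input sequence; A: (N,) diagonal decay (negative);
    w_delta, W_B, W_C: illustrative projections that make the step size
    and the input/output maps depend on the current token -- the
    "selective" mechanism that lets the model keep or forget context."""
    T, N = x.shape[0], A.shape[0]
    h = np.zeros(N)          # hidden state
    y = np.empty(T)          # outputs
    for t in range(T):
        # Input-dependent (selective) parameters for this timestep.
        delta = np.log1p(np.exp(w_delta * x[t]))   # softplus keeps delta > 0
        B_t = W_B * x[t]                           # (N,) input projection
        C_t = W_C * x[t]                           # (N,) output projection
        # Zero-order-hold discretization of h' = A h + B x:
        # h_t = exp(delta * A) * h_{t-1} + delta * B_t * x_t
        h = np.exp(delta * A) * h + delta * B_t * x[t]
        y[t] = C_t @ h                             # scalar readout
    return y

# Toy usage: 16-dim state, 100-step sequence, random parameters.
rng = np.random.default_rng(0)
T, N = 100, 16
y = selective_ssm_scan(
    x=rng.standard_normal(T),
    A=-np.exp(rng.standard_normal(N)),   # negative entries => stable decay
    w_delta=0.5,
    W_B=rng.standard_normal(N),
    W_C=rng.standard_normal(N),
)
print(y.shape)  # (100,)
```

The per-step loop makes the cost explicit: one state update per token, so runtime and memory grow as O(T·N) in sequence length T, in contrast to the O(T²) pairwise interactions of transformer self-attention.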
Papers
EMMA: Empowering Multi-modal Mamba with Structural and Hierarchical Alignment
Yifei Xing, Xiangyuan Lan, Ruiping Wang, Dongmei Jiang, Wenjun Huang, Qingfang Zheng, Yaowei Wang
TIMBA: Time series Imputation with Bi-directional Mamba Blocks and Diffusion models
Javier Solís-García, Belén Vega-Márquez, Juan A. Nepomuceno, Isabel A. Nepomuceno-Chamorro