Transformer-Based
Transformer-based models use self-attention to capture long-range dependencies in sequential data, achieving state-of-the-art results in tasks ranging from natural language processing and image recognition to time series forecasting and robotic control. Current research focuses on improving efficiency (e.g., through quantization and optimized architectures), enhancing generalization, and addressing challenges such as handling long sequences and endogeneity. These advances are yielding more accurate, efficient, and robust models across numerous scientific and practical domains.
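As a brief illustration of the self-attention mechanism mentioned above, the following is a minimal sketch of scaled dot-product self-attention in NumPy. All names, shapes, and weights here are illustrative assumptions, not code from any of the listed papers; real Transformer implementations add multiple heads, masking, and learned projections inside a larger network.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X of shape (T, d_model).

    Wq, Wk, Wv project the input into query, key, and value spaces; every
    position attends to every other position, which is how long-range
    dependencies can be captured within a single layer.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # projections: (T, d_k), (T, d_k), (T, d_v)
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # pairwise attention logits, shape (T, T)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over key positions
    return weights @ V                         # context vectors, shape (T, d_v)

# Toy usage with illustrative sizes: a 5-step sequence with 8-dimensional features.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8)
```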
Papers
SkinFormer: Learning Statistical Texture Representation with Transformer for Skin Lesion Segmentation
Rongtao Xu, Changwei Wang, Jiguang Zhang, Shibiao Xu, Weiliang Meng, Xiaopeng Zhang
Integration of Mamba and Transformer -- MAT for Long-Short Range Time Series Forecasting with Application to Weather Dynamics
Wenqing Zhang, Junming Huang, Ruotong Wang, Changsong Wei, Wenqian Huang, Yuxin Qiao
Mamba or Transformer for Time Series Forecasting? Mixture of Universals (MoU) Is All You Need
Sijia Peng, Yun Xiong, Yangyong Zhu, Zhiqiang Shen
SITransformer: Shared Information-Guided Transformer for Extreme Multimodal Summarization
Sicheng Liu, Lintao Wang, Xiaogan Zhu, Xuequan Lu, Zhiyong Wang, Kun Hu
TempoFormer: A Transformer for Temporally-aware Representations in Change Detection
Talia Tseriotou, Adam Tsakalidis, Maria Liakata