Transformer-Based
Transformer-based models leverage self-attention mechanisms to capture long-range dependencies in sequential data, achieving state-of-the-art results in tasks ranging from natural language processing and image recognition to time series forecasting and robotic control. Current research focuses on improving efficiency (e.g., through quantization and optimized architectures), enhancing generalization, and addressing challenges such as long-sequence handling and endogeneity. These advances are yielding more accurate, efficient, and robust models across numerous scientific communities and practical applications.
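To make the core mechanism concrete, below is a minimal NumPy sketch of single-head scaled dot-product self-attention, the operation the summary refers to. It is illustrative only: the shapes, weight matrices, and names (`d_model`, `seq_len`, `self_attention`) are assumptions for the example and are not drawn from any of the listed papers.

```python
# Minimal sketch of single-head scaled dot-product self-attention.
# Illustrative shapes and names; not the implementation of any listed paper.
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Self-attention over a sequence x of shape (seq_len, d_model)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v      # project inputs to queries/keys/values
    scores = q @ k.T / np.sqrt(k.shape[-1])  # pairwise similarities, scaled by sqrt(d_k)
    # Softmax over keys: every position attends to every other position,
    # which is what captures long-range dependencies (at O(seq_len^2) cost,
    # the main motivation for the efficiency work mentioned above).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                       # attention-weighted sum of values

# Tiny usage example with random projection weights.
rng = np.random.default_rng(0)
seq_len, d_model = 8, 16
x = rng.standard_normal((seq_len, d_model))
w_q, w_k, w_v = (rng.standard_normal((d_model, d_model)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)  # (8, 16)
```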
Papers
DeciMamba: Exploring the Length Extrapolation Potential of Mamba
Assaf Ben-Kish, Itamar Zimerman, Shady Abu-Hussein, Nadav Cohen, Amir Globerson, Lior Wolf, Raja Giryes
Complexity of Symbolic Representation in Working Memory of Transformer Correlates with the Complexity of a Task
Alsu Sagirova, Mikhail Burtsev
Towards Infinite-Long Prefix in Transformer
Yingyu Liang, Zhenmei Shi, Zhao Song, Chiwun Yang
GROD: Enhancing Generalization of Transformer with Out-of-Distribution Detection
Yijin Zhou, Yuguang Wang
Vision Transformer Segmentation for Visual Bird Sound Denoising
Sahil Kumar, Jialu Li, Youshan Zhang
How structured are the representations in transformer-based vision encoders? An analysis of multi-object representations in vision-language models
Tarun Khajuria, Braian Olmiro Dias, Jaan Aru
Fredformer: Frequency Debiased Transformer for Time Series Forecasting
Xihao Piao, Zheng Chen, Taichi Murayama, Yasuko Matsubara, Yasushi Sakurai
Cognitively Inspired Energy-Based World Models
Alexi Gladstone, Ganesh Nanduru, Md Mofijul Islam, Aman Chadha, Jundong Li, Tariq Iqbal
Learning to Play Atari in a World of Tokens
Pranav Agarwal, Sheldon Andrews, Samira Ebrahimi Kahou
Compute-Efficient Medical Image Classification with Softmax-Free Transformers and Sequence Normalization
Firas Khader, Omar S. M. El Nahhas, Tianyu Han, Gustav Müller-Franzes, Sven Nebelung, Jakob Nikolas Kather, Daniel Truhn