Transformer-Based
Transformer-based models use self-attention to capture long-range dependencies in sequential data, and they now achieve state-of-the-art results in tasks ranging from natural language processing and image recognition to time series forecasting and robotic control. Current research focuses on improving efficiency (e.g., through quantization and optimized architectures), enhancing generalization, and addressing challenges such as long-sequence handling and endogeneity. These advances are yielding more accurate, efficient, and robust models across numerous domains.
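For concreteness, the sketch below shows the scaled dot-product self-attention at the core of these models: each output position attends to every input position in a single step, which is how long-range dependencies are captured. It is a minimal illustration in PyTorch; the function name, projection matrices, and dimensions are assumptions for the example, not drawn from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one sequence (illustrative).

    x: (seq_len, d_model) input embeddings.
    w_q, w_k, w_v: (d_model, d_k) projection matrices (hypothetical names).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v          # project to queries/keys/values
    d_k = q.size(-1)
    scores = q @ k.transpose(-2, -1) / d_k**0.5  # pairwise similarities, scaled
    weights = F.softmax(scores, dim=-1)          # attention distribution per token
    return weights @ v                           # each token mixes info from all tokens

# Example usage with arbitrary dimensions:
seq_len, d_model, d_k = 5, 16, 8
x = torch.randn(seq_len, d_model)
w_q, w_k, w_v = (torch.randn(d_model, d_k) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)  # shape: (5, 8)
```

Because the attention matrix is seq_len x seq_len, cost grows quadratically with sequence length, which motivates the efficiency work (quantization, optimized architectures, long-sequence handling) noted above.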
Papers
Fine-tuning Transformer-based Encoder for Turkish Language Understanding Tasks
Savas Yildirim
CAFCT-Net: A CNN-Transformer Hybrid Network with Contextual and Attentional Feature Fusion for Liver Tumor Segmentation
Ming Kang, Chee-Ming Ting, Fung Fung Ting, Raphaël Phan
ILBiT: Imitation Learning for Robot Using Position and Torque Information based on Bilateral Control with Transformer
Masato Kobayashi, Thanpimon Buamanee, Yuki Uranishi, Haruo Takemura