Transformer-Based
Transformer-based models leverage self-attention mechanisms to capture long-range dependencies in sequential data, achieving state-of-the-art results in tasks ranging from natural language processing and image recognition to time series forecasting and robotic control. Current research focuses on improving efficiency (e.g., through quantization and optimized architectures), enhancing generalization, and addressing challenges such as handling long sequences and endogeneity. These advances are yielding more accurate, efficient, and robust models across numerous scientific and practical domains.
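To make the core mechanism concrete, the following is a minimal sketch of single-head scaled dot-product self-attention, the building block these models share. It is an illustrative example only, not code from any of the papers below; the function name, toy dimensions, and NumPy implementation are assumptions made for clarity.

```python
import numpy as np

def scaled_dot_product_attention(X, W_q, W_k, W_v):
    """Minimal single-head self-attention over a sequence X of shape (seq_len, d_model).

    Illustrative sketch; not taken from any specific paper listed on this page.
    """
    Q = X @ W_q  # queries
    K = X @ W_k  # keys
    V = X @ W_v  # values
    d_k = Q.shape[-1]
    # Scores compare every position with every other position in one step,
    # which is how self-attention captures long-range dependencies.
    scores = Q @ K.T / np.sqrt(d_k)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output position is a weighted mixture of all value vectors.
    return weights @ V

# Toy usage: a sequence of 5 tokens with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))
W_q, W_k, W_v = (rng.standard_normal((8, 8)) for _ in range(3))
out = scaled_dot_product_attention(X, W_q, W_k, W_v)
print(out.shape)  # (5, 8)
```

Because the score matrix is quadratic in sequence length, much of the efficiency work mentioned above (quantization, optimized architectures, long-sequence handling) targets exactly this step.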
Papers
Transformer-based conditional generative adversarial network for multivariate time series generation
Abdellah Madane, Mohamed-djallel Dilmi, Florent Forest, Hanane Azzag, Mustapha Lebbah, Jerome Lacaille
Point Cloud Recognition with Position-to-Structure Attention Transformers
Zheng Ding, James Hou, Zhuowen Tu
TgDLF2.0: Theory-guided deep-learning for electrical load forecasting via Transformer and transfer learning
Jiaxin Gao, Wenbo Hu, Dongxiao Zhang, Yuntian Chen
Is More Data Better? Re-thinking the Importance of Efficiency in Abusive Language Detection with Transformers-Based Active Learning
Hannah Rose Kirk, Bertie Vidgen, Scott A. Hale
RNGDet++: Road Network Graph Detection by Transformer with Instance Segmentation and Multi-scale Features Enhancement
Zhenhua Xu, Yuxuan Liu, Yuxiang Sun, Ming Liu, Lujia Wang