Transformer-Based
Transformer-based models use self-attention mechanisms to capture long-range dependencies in sequential data, and they now achieve state-of-the-art results in tasks ranging from natural language processing and image recognition to time series forecasting and robotic control. Current research focuses on improving efficiency (e.g., through quantization and optimized architectures), strengthening generalization, and addressing challenges such as long input sequences and endogeneity. These advances are yielding more accurate, efficient, and robust models across a wide range of scientific and practical domains.
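To make the self-attention idea concrete, here is a minimal sketch of scaled dot-product attention, the core operation behind transformer models. It is framework-free NumPy for illustration only; the function name, the toy shapes, and the reuse of the input as queries, keys, and values are assumptions for this example and are not taken from any of the papers listed below (a real transformer would derive Q, K, and V from learned linear projections).

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Basic scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_k)      # (batch, seq, seq) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over the key dimension
    return weights @ V                                     # each output is a weighted mix of all values

# Toy example: a batch of one sequence with 4 tokens, embedding dimension 8.
rng = np.random.default_rng(0)
x = rng.normal(size=(1, 4, 8))
# For illustration we reuse x as queries, keys, and values; in practice these
# come from separate learned projections of the token embeddings.
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (1, 4, 8): every token attends to every other token in one step
```

Because every token attends to every other token directly, the dependency path between distant positions has length one, which is what lets transformers capture long-range structure that recurrent models handle less reliably.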
Papers
A Transformer Framework for Data Fusion and Multi-Task Learning in Smart Cities
Alexander C. DeRieux, Walid Saad, Wangda Zuo, Rachmawan Budiarto, Mochamad Donny Koerniawan, Dwi Novitasari
GPS++: An Optimised Hybrid MPNN/Transformer for Molecular Property Prediction
Dominic Masters, Josef Dean, Kerstin Klaser, Zhiyi Li, Sam Maddrell-Mander, Adam Sanders, Hatem Helal, Deniz Beker, Ladislav Rampášek, Dominique Beaini
Delving into Transformer for Incremental Semantic Segmentation
Zekai Xu, Mingyi Zhang, Jiayue Hou, Xing Gong, Chuan Wen, Chengjie Wang, Junge Zhang
Vision Transformers in Medical Imaging: A Review
Emerald U. Henry, Onyeka Emebob, Conrad Asotie Omonhinmin