Transformer-Based Models
Transformer-based models are revolutionizing various fields by leveraging self-attention mechanisms to capture long-range dependencies in sequential data, achieving state-of-the-art results in tasks ranging from natural language processing and image recognition to time series forecasting and robotic control. Current research focuses on improving efficiency (e.g., through quantization and optimized architectures), enhancing generalization capabilities, and addressing challenges like handling long sequences and endogeneity. These advancements are significantly impacting diverse scientific communities and practical applications, leading to more accurate, efficient, and robust models across numerous domains.
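The self-attention mechanism mentioned above can be sketched in a few lines: each token's output is a weighted average of all tokens' value vectors, with weights derived from query-key similarity, which is what lets the model relate positions arbitrarily far apart. This is a minimal pure-Python illustration of scaled dot-product attention; the function and variable names are illustrative, not from any particular library.

```python
import math

def softmax(row):
    # Numerically stable softmax over a list of scores.
    m = max(row)
    exps = [math.exp(s - m) for s in row]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(q, k, v):
    """Scaled dot-product attention (single head, no projections).

    q, k, v: lists of token vectors, each of shape (seq_len, d_k).
    Returns one output vector per query token.
    """
    d_k = len(q[0])
    out = []
    for qi in q:
        # Affinity of this query with every key, scaled by sqrt(d_k).
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d_k)
                  for kj in k]
        weights = softmax(scores)
        # Output is a convex combination of all value vectors, so every
        # position can attend to every other (long-range dependencies).
        out.append([sum(w * vj[d] for w, vj in zip(weights, v))
                    for d in range(len(v[0]))])
    return out
```

In a full transformer layer, `q`, `k`, and `v` come from learned linear projections of the same input sequence, and several such heads run in parallel; this sketch omits those details to isolate the attention computation itself.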
836 papers
September 22, 2023
Transformer-based Image Compression with Variable Image Quality Objectives
AMPLIFY: Attention-based Mixup for Performance Improvement and Label Smoothing in Transformer
TrTr: A Versatile Pre-Trained Large Traffic Model based on Transformer for Capturing Trajectory Diversity in Vehicle Population
Vision Transformers for Computer Go
SPION: Layer-Wise Sparse Training of Transformer via Convolutional Flood Filling