Transformer Encoders
Transformer encoders are neural network architectures that process sequential data by using self-attention to capture long-range dependencies between positions. Current research focuses on improving their efficiency on long sequences, through techniques such as progressive token-length scaling and optimized hardware acceleration, and on characterizing their expressivity and limitations. These advances are driving improvements across natural language processing, computer vision, and speech recognition, enabling more accurate and efficient models for tasks such as machine translation, image segmentation, and speaker diarization.
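As a rough illustration of the architecture described above (not drawn from any of the papers listed here), the sketch below shows a single transformer encoder layer in PyTorch: self-attention lets every token attend to every other token, followed by a position-wise feed-forward network, with residual connections and layer normalization. All dimensions and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

class EncoderLayer(nn.Module):
    """Minimal sketch of one transformer encoder layer (post-norm variant)."""
    def __init__(self, d_model=256, n_heads=8, d_ff=1024, dropout=0.1):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, dropout=dropout,
                                          batch_first=True)
        self.ff = nn.Sequential(
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)
        self.dropout = nn.Dropout(dropout)

    def forward(self, x, key_padding_mask=None):
        # Self-attention: queries, keys, and values all come from x, so each
        # position can attend directly to any other position in the sequence
        # (this is what captures long-range dependencies).
        attn_out, _ = self.attn(x, x, x, key_padding_mask=key_padding_mask)
        x = self.norm1(x + self.dropout(attn_out))
        # Position-wise feed-forward network applied to each token independently.
        x = self.norm2(x + self.dropout(self.ff(x)))
        return x

# Usage: a batch of 2 sequences, 10 tokens each, with 256-dim embeddings.
x = torch.randn(2, 10, 256)
layer = EncoderLayer()
print(layer(x).shape)  # torch.Size([2, 10, 256])
```

A full encoder stacks several such layers and adds positional information to the input embeddings; efficiency-oriented variants mentioned above typically replace or approximate the quadratic-cost self-attention step.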