Pre-Trained Transformer
Pre-trained transformer models are foundational neural networks that achieve state-of-the-art results across diverse tasks by first training on massive datasets and then fine-tuning for specific applications. Current research emphasizes efficiency, including parameter-reduction techniques such as low-rank factorization and early-exit strategies, as well as effective transfer learning across modalities (e.g., image to video, text to speech). This work is significant because it brings powerful transformer architectures to resource-constrained settings and extends their utility beyond their original training domains, with impact on fields ranging from natural language processing and computer vision to medical image analysis and even military strategy.
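To make the low-rank factorization idea concrete, here is a minimal sketch in PyTorch: a dense linear layer's pre-trained weight matrix is approximated by two smaller factors obtained via truncated SVD, which is the standard construction behind this family of parameter-reduction methods. The names LowRankLinear and factorize, and the example rank of 64, are illustrative assumptions, not drawn from any of the papers listed below.

```python
import torch
import torch.nn as nn


class LowRankLinear(nn.Module):
    """A dense layer W (out x in) approximated by two factors,
    cutting weight parameters from out*in to rank*(out + in)."""

    def __init__(self, in_features: int, out_features: int, rank: int):
        super().__init__()
        self.down = nn.Linear(in_features, rank, bias=False)  # (rank, in)
        self.up = nn.Linear(rank, out_features, bias=True)    # (out, rank)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.up(self.down(x))


def factorize(dense: nn.Linear, rank: int) -> LowRankLinear:
    """Build a LowRankLinear from a pre-trained nn.Linear via truncated SVD,
    so that W is approximated by U[:, :rank] @ diag(S[:rank]) @ Vh[:rank]."""
    U, S, Vh = torch.linalg.svd(dense.weight, full_matrices=False)
    lr = LowRankLinear(dense.in_features, dense.out_features, rank)
    scale = S[:rank].sqrt()
    lr.down.weight.data = scale.unsqueeze(1) * Vh[:rank]  # (rank, in)
    lr.up.weight.data = U[:, :rank] * scale               # (out, rank)
    if dense.bias is not None:
        lr.up.bias.data = dense.bias.data.clone()
    else:
        lr.up.bias.data.zero_()
    return lr


# Illustrative usage: a 768x768 layer shrinks from 589,824 weight
# parameters to 98,304 at rank 64.
dense = nn.Linear(768, 768)
compact = factorize(dense, rank=64)
x = torch.randn(4, 768)
print(torch.dist(dense(x), compact(x)))  # approximation error at rank 64
```

Splitting the singular values as sqrt(S) between the two factors is a common convention: it keeps both weight matrices at comparable magnitude, which tends to behave better under subsequent fine-tuning.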
Papers
Explain to me like I am five -- Sentence Simplification Using Transformers
Aman Agarwal
Momentum Calibration for Text Generation
Xingxing Zhang, Yiran Liu, Xun Wang, Pengcheng He, Yang Yu, Si-Qing Chen, Wayne Xiong, Furu Wei
Federated Learning for Inference at Anytime and Anywhere
Zicheng Liu, Da Li, Javier Fernandez-Marques, Stefanos Laskaridis, Yan Gao, Łukasz Dudziak, Stan Z. Li, Shell Xu Hu, Timothy Hospedales