Pre-Trained
Pre-trained models are a cornerstone of modern machine learning: they leverage knowledge learned from massive datasets to improve efficiency and performance on downstream tasks. Current research focuses on adapting these models to diverse modalities (e.g., vision, language, audio) and tasks, typically with transformer-based architectures and techniques such as transfer learning, parameter-efficient fine-tuning, and contrastive learning. This approach significantly reduces the need for large task-specific datasets and computational resources, accelerating progress in fields including medical image analysis, speech recognition, and natural language processing. The resulting gains in accuracy, efficiency, and generalizability have broad implications for both scientific discovery and practical applications.
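To make the transfer-learning recipe above concrete, the sketch below freezes an ImageNet pre-trained ResNet-18 from torchvision and trains only a newly attached classification head, so only a small fraction of parameters is updated for the downstream task. It is a minimal illustration, not taken from any paper listed here; the backbone choice, NUM_CLASSES, learning rate, and dummy batch are all placeholder assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained on ImageNet (weights enum requires torchvision >= 0.13).
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all pre-trained parameters so only the new head is updated.
for param in backbone.parameters():
    param.requires_grad = False

# Replace the classification head with one sized for the downstream task
# (NUM_CLASSES is a placeholder for the task at hand).
NUM_CLASSES = 10
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_CLASSES)

# Only the head's parameters are handed to the optimizer.
optimizer = torch.optim.AdamW(backbone.fc.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = nn.functional.cross_entropy(backbone(images), labels)
loss.backward()
optimizer.step()
```

The same pattern extends to parameter-efficient fine-tuning methods (e.g., adapters or low-rank updates), which likewise keep the pre-trained weights frozen and train only a small set of added parameters.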
Papers
Contrastive-Adversarial and Diffusion: Exploring pre-training and fine-tuning strategies for sulcal identification
Michail Mamalakis, Héloïse de Vareilles, Shun-Chin Jim Wu, Ingrid Agartz, Lynn Egeland Mørch-Johnsen, Jane Garrison, Jon Simons, Pietro Lio, John Suckling, Graham Murray
T2V-Turbo: Breaking the Quality Bottleneck of Video Consistency Model with Mixed Reward Feedback
Jiachen Li, Weixi Feng, Tsu-Jui Fu, Xinyi Wang, Sugato Basu, Wenhu Chen, William Yang Wang
Data-Efficient Approach to Humanoid Control via Fine-Tuning a Pre-Trained GPT on Action Data
Siddharth Padmanabhan, Kazuki Miyazawa, Takato Horii, Takayuki Nagai
Pretrained Mobility Transformer: A Foundation Model for Human Mobility
Xinhua Wu, Haoyu He, Yanchao Wang, Qi Wang
Recasting Generic Pretrained Vision Transformers As Object-Centric Scene Encoders For Manipulation Policies
Jianing Qian, Anastasios Panagopoulos, Dinesh Jayaraman
Stacking Your Transformers: A Closer Look at Model Growth for Efficient LLM Pre-Training
Wenyu Du, Tongxu Luo, Zihan Qiu, Zeyu Huang, Yikang Shen, Reynold Cheng, Yike Guo, Jie Fu
Expert-Token Resonance: Redefining MoE Routing through Affinity-Driven Active Selection
Jing Li, Zhijie Sun, Dachao Lin, Xuan He, Yi Lin, Binfan Zheng, Li Zeng, Rongqian Zhao, Xin Chen
Feature Fusion for Improved Classification: Combining Dempster-Shafer Theory and Multiple CNN Architectures
Ayyub Alzahem, Wadii Boulila, Maha Driss, Anis Koubaa
What Variables Affect Out-of-Distribution Generalization in Pretrained Models?
Md Yousuf Harun, Kyungbok Lee, Jhair Gallardo, Giri Krishnan, Christopher Kanan
WeatherFormer: A Pretrained Encoder Model for Learning Robust Weather Representations from Small Datasets
Adib Hasan, Mardavij Roozbehani, Munther Dahleh
Audio Mamba: Pretrained Audio State Space Model For Audio Tagging
Jiaju Lin, Haoxuan Hu
Safety Alignment for Vision Language Models
Zhendong Liu, Yuanbi Nie, Yingshui Tan, Xiangyu Yue, Qiushi Cui, Chongjun Wang, Xiaoyong Zhu, Bo Zheng
Adapting Large Multimodal Models to Distribution Shifts: The Role of In-Context Learning
Guanglin Zhou, Zhongyi Han, Shiming Chen, Biwei Huang, Liming Zhu, Salman Khan, Xin Gao, Lina Yao
Slicedit: Zero-Shot Video Editing With Text-to-Image Diffusion Models Using Spatio-Temporal Slices
Nathaniel Cohen, Vladimir Kulikov, Matan Kleiner, Inbar Huberman-Spiegelglas, Tomer Michaeli