Pre-Trained Models
Pre-trained models are a cornerstone of modern machine learning: by leveraging knowledge learned from massive datasets, they improve efficiency and performance on downstream tasks. Current research focuses on adapting these models to diverse modalities (e.g., vision, language, audio) and tasks, typically with transformer-based architectures and techniques such as transfer learning, parameter-efficient fine-tuning, and contrastive learning. This approach substantially reduces the need for large task-specific datasets and compute, accelerating progress in fields such as medical image analysis, speech recognition, and natural language processing. The resulting gains in accuracy, efficiency, and generalizability have broad implications for both scientific discovery and practical applications.
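As a minimal sketch of the transfer-learning workflow described above (assuming a PyTorch/torchvision setup; the ResNet-50 backbone, the 10-class head, and the dummy batch are illustrative placeholders, not details taken from any of the papers listed below), the pre-trained backbone is frozen and only a small new classification head is trained, which is a simple form of parameter-efficient fine-tuning:

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a backbone pre-trained on ImageNet (placeholder choice of model).
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)

# Freeze the pre-trained backbone so only a small number of parameters are updated.
for param in model.parameters():
    param.requires_grad = False

# Replace the classification head for a hypothetical 10-class downstream task.
num_classes = 10
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Only the new head's parameters are passed to the optimizer.
optimizer = torch.optim.AdamW(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on dummy data.
images = torch.randn(8, 3, 224, 224)          # batch of 8 RGB images
labels = torch.randint(0, num_classes, (8,))  # random placeholder labels

logits = model(images)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice, more elaborate parameter-efficient methods (e.g., adapters or low-rank updates) follow the same idea of keeping most pre-trained weights fixed while training a small set of task-specific parameters.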
Papers
DepthCues: Evaluating Monocular Depth Perception in Large Vision Models
Duolikun Danier, Mehmet Aygün, Changjian Li, Hakan Bilen, Oisin Mac Aodha
Efficient Multi-modal Large Language Models via Visual Token Grouping
Minbin Huang, Runhui Huang, Han Shi, Yimeng Chen, Chuanyang Zheng, Xiangguo Sun, Xin Jiang, Zhenguo Li, Hong Cheng
Integrating Dual Prototypes for Task-Wise Adaption in Pre-Trained Model-Based Class-Incremental Learning
Zhiming Xu, Suorong Yang, Baile Xu, Jian Zhao, Furao Shen
SatVision-TOA: A Geospatial Foundation Model for Coarse-Resolution All-Sky Remote Sensing Imagery
Caleb S. Spradlin, Jordan A. Caraballo-Vega, Jian Li, Mark L. Carroll, Jie Gong, Paul M. Montesano
Enhancing Vision-Language Model Safety through Progressive Concept-Bottleneck-Driven Alignment
Zhendong Liu, Yuanbi Nie, Yingshui Tan, Xiangyu Yue, Qiushi Cui, Chongjun Wang, Xiaoyong Zhu, Bo Zheng
A Pre-Trained Graph-Based Model for Adaptive Sequencing of Educational Documents
Jean Vassoyan (CB), Anan Schütt (UNIA), Jill-Jênn Vie (SODA), Arun-Balajiee Lekshmi-Narayanan (PITT), Elisabeth André (UNIA), Nicolas Vayatis (CB)