Foundation Model
Foundation models are large, pre-trained AI models designed to generalize across diverse tasks and datasets, offering an alternative to training a separate model for each task. Current research emphasizes adapting these models to new domains, including healthcare (e.g., medical image analysis, EEG interpretation), scientific applications (e.g., genomics, weather forecasting), and robotics, often using transformer architectures and mixtures of experts with novel gating functions. By reusing the knowledge embedded in a single pre-trained model, this approach aims to improve efficiency and accuracy across fields, streamline data analysis, and enable applications previously hindered by data scarcity or computational limits.
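To make the mixture-of-experts idea mentioned above concrete, the sketch below shows a generic top-k softmax gating function and a tiny MoE layer in plain Python/NumPy. It is an illustrative assumption about how such gating commonly works, not the method of any paper listed here; the names topk_softmax_gate, moe_layer, and w_gate are made up for this example.

import numpy as np

def topk_softmax_gate(x, w_gate, k=2):
    # x: (batch, d_model) token representations
    # w_gate: (d_model, n_experts) learned gating weights
    # Returns per-token mixture weights over the k selected experts
    # and the indices of those experts.
    logits = x @ w_gate                            # (batch, n_experts)
    idx = np.argsort(logits, axis=-1)[:, -k:]      # top-k expert ids per token
    top = np.take_along_axis(logits, idx, axis=-1)
    top = top - top.max(axis=-1, keepdims=True)    # numerically stable softmax
    w = np.exp(top)
    w /= w.sum(axis=-1, keepdims=True)
    return w, idx

def moe_layer(x, w_gate, experts, k=2):
    # Combine the outputs of each token's selected experts, weighted by the gate.
    w, idx = topk_softmax_gate(x, w_gate, k)
    out = np.zeros_like(x)
    for token in range(x.shape[0]):
        for j in range(k):
            out[token] += w[token, j] * experts[idx[token, j]](x[token])
    return out

# Tiny usage example with random gating weights and linear experts.
rng = np.random.default_rng(0)
d_model, n_experts = 8, 4
x = rng.normal(size=(3, d_model))
w_gate = rng.normal(size=(d_model, n_experts))
experts = [lambda v, W=rng.normal(size=(d_model, d_model)) * 0.1: v @ W
           for _ in range(n_experts)]
y = moe_layer(x, w_gate, experts, k=2)
print(y.shape)  # (3, 8)

Real MoE layers batch the routing and dispatch tokens to experts in parallel; the per-token loop here is kept only for readability.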
Papers
Single Parent Family: A Spectrum of Family Members from a Single Pre-Trained Foundation Model
Habib Hajimolahoseini, Mohammad Hassanpour, Foozhan Ataiefard, Boxing Chen, Yang Liu
CMMaTH: A Chinese Multi-modal Math Skill Evaluation Benchmark for Foundation Models
Zhong-Zhi Li, Ming-Liang Zhang, Fei Yin, Zhi-Long Ji, Jin-Feng Bai, Zhen-Ru Pan, Fan-Hu Zeng, Jian Xu, Jia-Xin Zhang, Cheng-Lin Liu
Rethinking harmless refusals when fine-tuning foundation models
Florin Pop, Judd Rosenblatt, Diogo Schwerz de Lucena, Michael Vaiana
Meta Large Language Model Compiler: Foundation Models of Compiler Optimization
Chris Cummins, Volker Seeker, Dejan Grubisic, Baptiste Roziere, Jonas Gehring, Gabriel Synnaeve, Hugh Leather
MCNC: Manifold Constrained Network Compression
Chayne Thrash, Ali Abbasi, Parsa Nooralinejad, Soroush Abbasi Koohpayegani, Reed Andreas, Hamed Pirsiavash, Soheil Kolouri
WV-Net: A foundation model for SAR WV-mode satellite imagery trained using contrastive self-supervised learning on 10 million images
Yannik Glaser, Justin E. Stopa, Linnea M. Wolniewicz, Ralph Foster, Doug Vandemark, Alexis Mouche, Bertrand Chapron, Peter Sadowski
Evaluating and Benchmarking Foundation Models for Earth Observation and Geospatial AI
Nikolaos Dionelis, Casper Fibaek, Luke Camilleri, Andreas Luyts, Jente Bosmans, Bertrand Le Saux
Foundational Models for Pathology and Endoscopy Images: Application for Gastric Inflammation
Hamideh Kerdegari, Kyle Higgins, Dennis Veselkov, Ivan Laponogov, Inese Polaka, Miguel Coimbra, Junior Andrea Pescino, Marcis Leja, Mario Dinis-Ribeiro, Tania Fleitas Kanonnikoff, Kirill Veselkov
Zero-shot prompt-based classification: topic labeling in times of foundation models in German Tweets
Simon Münker, Kai Kugler, Achim Rettinger
Foundation Models for ECG: Leveraging Hybrid Self-Supervised Learning for Advanced Cardiac Diagnostics
Junho Song, Jong-Hwan Jang, Byeong Tak Lee, DongGyun Hong, Joon-myoung Kwon, Yong-Yeon Jo
Learning on Transformers is Provable Low-Rank and Sparse: A One-layer Analysis
Hongkang Li, Meng Wang, Shuai Zhang, Sijia Liu, Pin-Yu Chen
The Responsible Foundation Model Development Cheatsheet: A Review of Tools & Resources
Shayne Longpre, Stella Biderman, Alon Albalak, Hailey Schoelkopf, Daniel McDuff, Sayash Kapoor, Kevin Klyman, Kyle Lo, Gabriel Ilharco, Nay San, Maribeth Rauh, Aviya Skowron, Bertie Vidgen, Laura Weidinger, Arvind Narayanan, Victor Sanh, David Adelani, Percy Liang, Rishi Bommasani, Peter Henderson, Sasha Luccioni, Yacine Jernite, Luca Soldaini
Building on Efficient Foundations: Effectively Training LLMs with Structured Feedforward Layers
Xiuying Wei, Skander Moalla, Razvan Pascanu, Caglar Gulcehre
Any360D: Towards 360 Depth Anything with Unlabeled 360 Data and Möbius Spatial Augmentation
Zidong Cao, Jinjing Zhu, Weiming Zhang, Lin Wang
Biomedical Visual Instruction Tuning with Clinician Preference Alignment
Hejie Cui, Lingjun Mao, Xin Liang, Jieyu Zhang, Hui Ren, Quanzheng Li, Xiang Li, Carl Yang