Foundation Model
Foundation models are large models pretrained on broad data and designed to generalize across diverse tasks and datasets, offering an alternative to training task-specific models from scratch. Current research focuses on adapting them to new domains, including healthcare (e.g., medical image analysis, EEG interpretation), scientific applications (e.g., genomics, weather forecasting), and robotics, often using transformer architectures and mixtures of experts with novel gating functions. By reusing the knowledge embedded in a single pretrained model, this approach can improve efficiency and accuracy across fields and enable applications previously limited by data scarcity or computational cost.
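As a rough illustration of the mixture-of-experts pattern mentioned above, the sketch below implements a layer with a learned top-k gating function in PyTorch. The expert architecture, expert count, and top-k routing are illustrative assumptions, not details drawn from any paper listed here.

```python
# Minimal sketch of a mixture-of-experts layer with learned top-k gating.
# Expert count, hidden sizes, and k are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, dim: int, num_experts: int = 4, k: int = 2):
        super().__init__()
        self.k = k
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )
        # The gate scores every expert for each input.
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.gate(x)                             # (batch, num_experts)
        topk_scores, topk_idx = scores.topk(self.k, dim=-1)
        weights = F.softmax(topk_scores, dim=-1)          # renormalize over the top-k
        out = torch.zeros_like(x)
        # Route each input through its k selected experts, weighted by the gate.
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = topk_idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(8, 64)
print(TopKMoE(dim=64)(x).shape)  # torch.Size([8, 64])
```

Only the k selected experts run per input, which is what lets such models grow total parameter count without a proportional increase in per-input compute.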
Papers
Does Data-Efficient Generalization Exacerbate Bias in Foundation Models?
Dilermando Queiroz, Anderson Carlos, Maíra Fatoretto, André Anjos, Lilian Berton, Luis Filipe Nakayama
Benchmarking foundation models as feature extractors for weakly-supervised computational pathology
Peter Neidlinger, Omar S. M. El Nahhas, Hannah Sophie Muti, Tim Lenz, Michael Hoffmeister, Hermann Brenner, Marko van Treeck, Rupert Langer, Bastian Dislich, Hans Michael Behrens, Christoph Röcken, Sebastian Foersch, Daniel Truhn, Antonio Marra, Oliver Lester Saldanha, Jakob Nikolas Kather
Exploring Selective Layer Fine-Tuning in Federated Learning
Yuchang Sun, Yuexiang Xie, Bolin Ding, Yaliang Li, Jun Zhang
Foundation Models for Music: A Survey
Yinghao Ma, Anders Øland, Anton Ragni, Bleiz MacSen Del Sette, Charalampos Saitis, Chris Donahue, Chenghua Lin, Christos Plachouras, Emmanouil Benetos, Elio Quinton, Elona Shatri, Fabio Morreale, Ge Zhang, György Fazekas, Gus Xia, Huan Zhang, Ilaria Manco, Jiawen Huang, Julien Guinot, Liwei Lin, Luca Marinelli, Max W. Y. Lam, Megha Sharma, Qiuqiang Kong, Roger B. Dannenberg, Ruibin Yuan, Shangda Wu, Shih-Lun Wu, Shuqi Dai, Shun Lei, Shiyin Kang, Simon Dixon, Wenhu Chen, Wenhao Huang, Xingjian Du, Xingwei Qu, Xu Tan, Yizhi Li, Zeyue Tian, Zhiyong Wu, Zhizheng Wu, Ziyang Ma, Ziyu Wang
ShapeMamba-EM: Fine-Tuning Foundation Model with Local Shape Descriptors and Mamba Blocks for 3D EM Image Segmentation
Ruohua Shi, Qiufan Pang, Lei Ma, Lingyu Duan, Tiejun Huang, Tingting Jiang
Re-Mix: Optimizing Data Mixtures for Large Scale Imitation Learning
Joey Hejna, Chethan Bhateja, Yichen Jian, Karl Pertsch, Dorsa Sadigh
Sliding Window Training -- Utilizing Historical Recommender Systems Data for Foundation Models
Swanand Joshi, Yesu Feng, Ko-Jen Hsiao, Zhe Zhang, Sudarshan Lamkhede
Practical token pruning for foundation models in few-shot conversational virtual assistant systems
Haode Qi, Cheng Qian, Jian Ni, Pratyush Singh, Reza Fazeli, Gengyu Wang, Zhongzheng Shu, Eric Wayne, Juergen Bross
ViLReF: An Expert Knowledge Enabled Vision-Language Retinal Foundation Model
Shengzhu Yang, Jiawei Du, Jia Guo, Weihang Zhang, Hanruo Liu, Huiqi Li, Ningli Wang
Benchmarking Large Language Models for Math Reasoning Tasks
Kathrin Seßler, Yao Rong, Emek Gözlüklü, Enkelejda Kasneci
From Glucose Patterns to Health Outcomes: A Generalizable Foundation Model for Continuous Glucose Monitor Data Analysis
Guy Lutsker, Gal Sapir, Anastasia Godneva, Smadar Shilo, Jerry R Greenfield, Dorit Samocha-Bonet, Shie Mannor, Eli Meirom, Gal Chechik, Hagai Rossman, Eran Segal
Towards Foundation Models for the Industrial Forecasting of Chemical Kinetics
Imran Nasim, João Lucas de Sousa Almeida
Fine-Tuning and Deploying Large Language Models Over Edges: Issues and Approaches
Yanjie Dong, Haijun Zhang, Chengming Li, Song Guo, Victor C. M. Leung, Xiping Hu