Foundational Model
Foundational models (FMs) are large-scale machine learning models pre-trained on massive datasets to learn generalized patterns, which lets them be adapted to diverse downstream tasks with minimal additional training. Current research emphasizes applying FMs across data modalities (text, images, tabular data, molecular structures, brain signals) and exploring efficient adaptation techniques such as parameter-efficient fine-tuning (PEFT) and prompting. This approach promises more efficient and generalizable AI systems, with impact in fields such as medical imaging, drug discovery, and manufacturing through improved accuracy and a reduced reliance on large task-specific labeled datasets.
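As a concrete illustration of the parameter-efficient adaptation mentioned above, here is a minimal LoRA-style sketch in plain PyTorch: the pre-trained weights stay frozen and only a small low-rank update is trained. The toy backbone, layer sizes, rank, and scaling factor are illustrative assumptions, not taken from any particular paper.

```python
# Minimal sketch of parameter-efficient fine-tuning via low-rank adapters (LoRA-style).
# The backbone and hyperparameters below are hypothetical placeholders.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W x + scale * B A x."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # Low-rank factors: A projects down to `rank`, B projects back up.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale


# Hypothetical backbone standing in for a large pre-trained model.
backbone = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
backbone[0] = LoRALinear(backbone[0])  # adapt only the first projection

trainable = sum(p.numel() for p in backbone.parameters() if p.requires_grad)
total = sum(p.numel() for p in backbone.parameters())
print(f"{trainable} trainable parameters out of {total}")
```

Only the adapter factors (and, in this toy setup, the small task head) receive gradients, which is the usual reason PEFT methods can adapt a large pre-trained model with a small fraction of its parameters and limited labeled data.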