Adaptation Method
Adaptation methods in machine learning focus on efficiently modifying pre-trained models for new tasks or domains while minimizing computational cost and preventing catastrophic forgetting. Current research emphasizes parameter-efficient techniques such as adapters and low-rank adaptation (LoRA), applied across architectures including large language models (LLMs), vision-language models (VLMs), and vision transformers, and often incorporates strategies to improve robustness against data corruption. These advances are crucial for deploying large models in resource-constrained environments and for improving their generalizability and reliability across diverse applications.
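To make the parameter-efficiency idea concrete, below is a minimal PyTorch sketch of a LoRA-style layer: the pre-trained weight is frozen and only a small low-rank correction is trained. The class name, rank, and scaling values here are illustrative assumptions, not taken from any specific paper listed below.

```python
# Minimal LoRA-style linear layer sketch (hypothetical names and hyperparameters).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Wraps a frozen pre-trained linear layer with a trainable low-rank update."""

    def __init__(self, base_linear: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base_linear
        # Freeze the pre-trained weights; only the low-rank factors are trained.
        for p in self.base.parameters():
            p.requires_grad = False

        in_features = base_linear.in_features
        out_features = base_linear.out_features
        # Low-rank update: W_eff = W + (alpha / rank) * B @ A,
        # with A of shape (rank, in_features) and B of shape (out_features, rank).
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base projection plus the scaled low-rank correction.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    out = layer(torch.randn(2, 768))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(out.shape, f"trainable params: {trainable} / {total}")
```

With a 768-dimensional layer and rank 8, roughly 12k of the ~600k parameters are trainable, which is the kind of reduction that makes fine-tuning feasible in resource-constrained settings.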
Papers
September 12, 2024
July 11, 2024
June 20, 2024
May 9, 2024
April 20, 2024
February 5, 2024
October 12, 2023
September 20, 2023
June 9, 2023