Adaptation Concern
The adaptation concern in machine learning focuses on efficiently tailoring large pre-trained models to specific tasks or domains without retraining the entire model. Current research heavily emphasizes low-rank adaptation (LoRA) techniques and their variants, often applied to transformer-based models such as LLMs and diffusion models, to achieve parameter efficiency and improved performance. This research area is significant because it addresses the computational cost and memory limitations of fine-tuning massive models, enabling broader application and deployment of advanced AI systems across diverse tasks and resource-constrained environments. Investigations into bias mitigation and improved adaptation strategies within these frameworks are also actively pursued.
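As a rough illustration of the LoRA idea referenced above, the sketch below wraps a frozen pre-trained linear layer with a trainable low-rank update, so only a small number of parameters are learned during adaptation. The class name LoRALinear and the hyperparameters r and alpha are illustrative assumptions, not drawn from any of the listed papers.

```python
# Minimal LoRA sketch (illustrative, not from any specific paper listed here).
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen pre-trained linear layer plus a trainable low-rank update.

    The effective weight is W + (alpha / r) * B @ A, so only r * (in + out)
    parameters are trained instead of the full in * out weight matrix.
    """

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pre-trained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_a = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank residual.
        return self.base(x) + self.scaling * (x @ self.lora_a.T @ self.lora_b.T)


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(768, 768))
    print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```

Zero-initializing the B matrix means the adapted model starts out identical to the pre-trained one, which is the usual choice for stable fine-tuning.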
Papers
Extending LLMs to New Languages: A Case Study of Llama and Persian Adaptation
Samin Mahdizadeh Sani, Pouya Sadeghi, Thuy-Trang Vu, Yadollah Yaghoobzadeh, Gholamreza Haffari
Adaptations of AI models for querying the LandMatrix database in natural language
Fatiha Ait Kbir, Jérémy Bourgoin, Rémy Decoupes, Marie Gradeler, Roberto Interdonato
Train More Parameters But Mind Their Placement: Insights into Language Adaptation with PEFT
Jenny Kunz
Scaling Combinatorial Optimization Neural Improvement Heuristics with Online Search and Adaptation
Federico Julian Camerota Verdù, Lorenzo Castelli, Luca Bortolussi
ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers
Junyan Hu, Xue Xiao, Mengqi Zhang, Xiao Chen, Zhaochun Ren, Zhumin Chen, Pengjie Ren