Contrastive Tuning

Contrastive tuning is a technique used to adapt pre-trained large language models (LLMs) and other foundation models to specific downstream tasks, particularly in low-data regimes, by contrasting desired and undesired model behaviors. Current research applies contrastive tuning to improve several aspects of model performance, including mitigating hallucinations in multimodal LLMs, enhancing few-shot class-incremental learning, and adapting masked autoencoders to new tasks more efficiently. The approach enables efficient fine-tuning of large models with limited labeled data while improving robustness and accuracy across diverse applications; a minimal sketch of the core idea follows below.
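
The sketch below illustrates one common form of the idea: for a causal LM, raise the likelihood of a desired response relative to an undesired one via a logistic loss on their log-likelihood gap. The model checkpoint, the toy training triple, and the specific loss form are illustrative assumptions, not a particular published recipe.

```python
# Minimal contrastive-tuning sketch for a causal LM (assumptions noted above).
import torch
import torch.nn.functional as F
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; any causal LM checkpoint works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)


def sequence_logprob(prompt: str, response: str) -> torch.Tensor:
    """Sum of log-probabilities the model assigns to `response` given `prompt`."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    logits = model(full_ids).logits
    # Shift so that token t is predicted from tokens < t.
    log_probs = F.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_logprobs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the response tokens (approximate split at the prompt boundary).
    response_start = prompt_ids.shape[1] - 1
    return token_logprobs[:, response_start:].sum()


# Hypothetical training triple contrasting desired and undesired behavior.
prompt = "Q: What is the capital of France?\nA:"
desired = " Paris."
undesired = " I don't know."

logp_pos = sequence_logprob(prompt, desired)
logp_neg = sequence_logprob(prompt, undesired)

# Contrastive objective: push the desired response's log-likelihood above the
# undesired one's (logistic loss on the gap).
loss = -F.logsigmoid(logp_pos - logp_neg)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice the same pattern is applied over batches of paired examples, and the contrast can instead be defined over embeddings (InfoNCE-style) rather than sequence likelihoods, depending on the task.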

Papers