Contrastive Preference Optimization
Contrastive preference optimization (CPO) is a machine learning technique that improves model performance by training models to distinguish between preferred and less-preferred outputs, using human feedback or predefined quality metrics as the preference signal. Current research applies CPO across diverse model architectures, including large language models (LLMs), diffusion models, and vision-language models, and often compares it to supervised fine-tuning and reinforcement learning methods. This approach offers a direct way to align model behavior with human preferences, yielding improvements in areas such as machine translation, image generation, and the detection of anomalous online behavior, ultimately enhancing the reliability and usability of AI systems.
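As an illustration of the idea, the sketch below implements one commonly cited form of the CPO objective in PyTorch: a reference-free, DPO-style preference term that widens the gap between the log-probabilities of the preferred and less-preferred outputs, plus a negative log-likelihood term on the preferred output. The function name `cpo_loss`, the hyperparameters `beta` and `nll_weight`, and the toy inputs are illustrative assumptions rather than the API of any particular library.

```python
import torch
import torch.nn.functional as F


def cpo_loss(logp_chosen, logp_rejected, beta=0.1, nll_weight=1.0):
    """Sketch of a contrastive preference objective (assumed form).

    logp_chosen / logp_rejected: summed log-probabilities of the preferred
    and less-preferred sequences under the current policy, shape (batch,).
    `beta` scales the preference margin; `nll_weight` scales the
    behavior-cloning term. Both are illustrative hyperparameters.
    """
    # Preference term: reward a larger log-probability margin between
    # the preferred and less-preferred outputs.
    prefer = -F.logsigmoid(beta * (logp_chosen - logp_rejected))
    # Regularization term: keep probability mass on the preferred output.
    nll = -logp_chosen
    return (prefer + nll_weight * nll).mean()


if __name__ == "__main__":
    # Toy usage with random log-probabilities standing in for model outputs.
    logp_w = torch.randn(4) - 5.0  # log p(preferred | prompt)
    logp_l = torch.randn(4) - 6.0  # log p(less-preferred | prompt)
    print(cpo_loss(logp_w, logp_l).item())
```

In practice the two log-probabilities would come from scoring paired preference data (a prompt with a chosen and a rejected completion) under the model being fine-tuned, and the loss would be minimized with a standard optimizer alongside the usual training loop.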