Preference Optimization
Preference optimization (PO) aims to align large language models (LLMs) and other AI systems with human preferences, improving the quality and safety of their outputs. Current research focuses on refining algorithms such as Direct Preference Optimization (DPO) and its variants, exploring techniques like sparse token weighting, bidirectional feedback, and uncertainty estimation to improve efficiency and robustness. This field is crucial for building safer and more beneficial AI systems, impacting both the development of more reliable models and the ethical considerations surrounding their deployment.
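Since several of the papers below build on the DPO objective, a minimal sketch of the standard pairwise DPO loss is included here for reference. It assumes per-sequence log-probabilities have already been computed for the policy and a frozen reference model; the tensor names and the `beta` default are illustrative assumptions, not taken from any particular paper in this list.

```python
# Minimal sketch of the standard pairwise DPO loss (not the method of any
# specific paper below). Inputs are per-sequence log-probabilities under the
# trainable policy and a frozen reference model; names and beta are illustrative.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Implicit reward of each response: beta * (policy logp - reference logp).
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between chosen and rejected implicit rewards
    # via a logistic (Bradley-Terry) objective.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

if __name__ == "__main__":
    # Dummy log-probabilities for a batch of 4 preference pairs.
    b = 4
    loss = dpo_loss(torch.randn(b), torch.randn(b) - 1.0,
                    torch.randn(b), torch.randn(b))
    print(loss.item())
```

The variants surveyed below modify pieces of this objective, for example reweighting it at the token level, using multiple negative samples per prompt, or scaling the margin by an uncertainty estimate.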
Papers
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization
Chia-Yu Hung, Navonil Majumder, Zhifeng Kong, Ambuj Mehrish, Rafael Valle, Bryan Catanzaro, Soujanya Poria
Plug-and-Play Training Framework for Preference Optimization
Jingyuan Ma, Rui Li, Zheng Li, Lei Sha, Zhifang Sui
Hybrid Preference Optimization for Alignment: Provably Faster Convergence Rates by Combining Offline Preferences with Online Exploration
Avinandan Bose, Zhihan Xiong, Aadirupa Saha, Simon Shaolei Du, Maryam Fazel
MPPO: Multi Pair-wise Preference Optimization for LLMs with Arbitrary Negative Samples
Shuo Xie, Fangzhi Zhu, Jiahui Wang, Lulu Wen, Wei Dai, Xiaowei Chen, Junxiong Zhu, Kai Zhou, Bo Zheng
SWEPO: Simultaneous Weighted Preference Optimization for Group Contrastive Alignment
Taneesh Gupta, Rahul Madhavan, Xuchao Zhang, Chetan Bansal, Saravan Rajmohan
Reinforcement Learning Enhanced LLMs: A Survey
Shuhe Wang, Shengyu Zhang, Jie Zhang, Runyi Hu, Xiaoya Li, Tianwei Zhang, Jiwei Li, Fei Wu, Guoyin Wang, Eduard Hovy