Preference Learning
Preference learning aims to align artificial intelligence models, particularly large language models, with human preferences by learning from human feedback on model outputs. Current research focuses on developing efficient algorithms, such as direct preference optimization and reinforcement learning from human feedback, often incorporating model architectures like diffusion models and variational autoencoders to handle complex preference structures, including intransitive preferences (e.g., a judge preferring A over B and B over C, yet C over A). This work is central to building trustworthy and beneficial AI systems: it improves model performance across tasks and helps ensure alignment with human values in applications ranging from robotics to natural language processing.
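As a concrete sketch of one such algorithm, using the standard formulation of direct preference optimization rather than the specific variants in the papers listed below: given a prompt x with a preferred response y_w and a dispreferred response y_l, DPO trains the policy \pi_\theta against a frozen reference policy \pi_{\mathrm{ref}} by minimizing

\mathcal{L}_{\mathrm{DPO}}(\theta) = -\,\mathbb{E}_{(x,\, y_w,\, y_l)}\!\left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)} \right) \right],

where \sigma is the logistic function and \beta controls how far the policy may drift from the reference. Optimizing this objective directly on preference pairs avoids fitting an explicit reward model, which is what distinguishes it from classic reinforcement learning from human feedback pipelines.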
Papers
Understanding the Logic of Direct Preference Alignment through Logic
Kyle Richardson, Vivek Srikumar, Ashish Sabharwal
Towards Intrinsic Self-Correction Enhancement in Monte Carlo Tree Search Boosted Reasoning via Iterative Preference Learning
Huchen Jiang, Yangyang Ma, Chaofan Ding, Kexin Luan, Xinhan Di
Self-Adaptive Paraphrasing and Preference Learning for Improved Claim Verifiability
Amelie Wührl, Roman Klinger
SPaR: Self-Play with Tree-Search Refinement to Improve Instruction-Following in Large Language Models
Jiale Cheng, Xiao Liu, Cunxiang Wang, Xiaotao Gu, Yida Lu, Dan Zhang, Yuxiao Dong, Jie Tang, Hongning Wang, Minlie Huang
Decompose and Leverage Preferences from Expert Models for Improving Trustworthiness of MLLMs
Rui Cao, Yuming Jiang, Michael Schlichtkrull, Andreas Vlachos
DSTC: Direct Preference Learning with Only Self-Generated Tests and Code to Improve Code LMs
Zhihan Liu, Shenao Zhang, Yongfei Liu, Boyi Liu, Yingxiang Yang, Zhaoran Wang