LLM Alignment
LLM alignment seeks to bring large language models' behavior in line with human values and preferences, mitigating harmful outputs such as biased content, misinformation, and compliance with unsafe instructions. Current research emphasizes developing more efficient and robust alignment techniques, including methods like Direct Preference Optimization (DPO) and Proximal Policy Optimization (PPO), often incorporating personalized preferences and addressing the unreliability of human feedback. This field is crucial for ensuring the safe and beneficial deployment of LLMs, shaping both the development of more trustworthy AI systems and the broader societal impact of advanced language technologies.
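For readers unfamiliar with the preference-optimization methods mentioned above, the sketch below shows the standard DPO objective from Rafailov et al. (2023): the policy is pushed to assign a higher log-likelihood ratio (relative to a frozen reference model) to the preferred response than to the rejected one. This is a minimal, illustrative implementation, not code from either paper listed here; the function and argument names are assumptions made for clarity.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """Standard DPO loss (illustrative sketch).

    Each argument is a 1-D tensor of summed log-probabilities of the
    chosen / rejected responses under the trainable policy or the
    frozen reference model; beta controls deviation from the reference.
    """
    # Log-ratios of policy to reference for preferred and dispreferred responses.
    chosen_logratios = policy_chosen_logps - ref_chosen_logps
    rejected_logratios = policy_rejected_logps - ref_rejected_logps

    # DPO objective: -log sigmoid(beta * (chosen_logratio - rejected_logratio)).
    logits = beta * (chosen_logratios - rejected_logratios)
    return -F.logsigmoid(logits).mean()
```

Methods such as AlphaPO modify the shape of this implicit reward; PPO-based alignment instead optimizes an explicit learned reward model with an on-policy reinforcement learning loop.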
Papers
AlphaPO -- Reward shape matters for LLM alignment
Aman Gupta, Shao Tang, Qingquan Song, Sirou Zhu, Jiwoo Hong, Ankan Saha, Viral Gupta, Noah Lee, Eunki Kim, Jason Zhu, Natesh Pillai, S. Sathiya Keerthi
Align-Pro: A Principled Approach to Prompt Optimization for LLM Alignment
Prashant Trivedi, Souradip Chakraborty, Avinash Reddy, Vaneet Aggarwal, Amrit Singh Bedi, George K. Atia