Conditional Language Model
Conditional language models (CLMs) aim to generate text that satisfies specific conditions or constraints, improving the control and relevance of generated outputs. Current research focuses on enhancing CLM control through techniques such as activation steering and distribution matching, on mitigating bias, and on improving the robustness of CLMs to out-of-distribution inputs. These advances matter for applications ranging from safer text generation and improved text-to-image synthesis to more effective semi-supervised learning and stronger natural language understanding across domains.
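One theme above is using the model's own sequence likelihood to decide whether an output is trustworthy enough to emit, abstaining otherwise (selective generation). The toy sketch below illustrates that idea only; the conditional probability table, function names, and threshold are all hypothetical and not taken from either paper.

```python
import math

# Hypothetical toy "conditional model": for each condition (source string),
# a table of next-token probabilities. Real CLMs compute these with a
# neural network; the numbers here are illustrative only.
COND_PROBS = {
    "greeting": {"hello": 0.6, "hi": 0.3, "yo": 0.1},
    "farewell": {"bye": 0.7, "later": 0.2, "yo": 0.1},
}

def sequence_log_likelihood(condition, tokens):
    """Sum of log p(token | condition); -inf if any token is unseen."""
    probs = COND_PROBS.get(condition, {})
    total = 0.0
    for tok in tokens:
        p = probs.get(tok, 0.0)
        if p == 0.0:
            return float("-inf")
        total += math.log(p)
    return total

def selective_generate(condition, candidates, threshold=-2.0):
    """Pick the candidate with the best length-normalized log-likelihood,
    but abstain (return None) if even the best score falls below the
    threshold -- e.g. when the condition looks out-of-distribution."""
    best, best_score = None, float("-inf")
    for cand in candidates:
        score = sequence_log_likelihood(condition, cand) / max(len(cand), 1)
        if score > best_score:
            best, best_score = cand, score
    return best if best_score >= threshold else None

# Confident case: "hello" has log-prob ~ -0.51, above the threshold.
print(selective_generate("greeting", [["hello"], ["yo"]]))
# Low-confidence case: "yo" has log-prob ~ -2.30, below -2.0, so abstain.
print(selective_generate("greeting", [["yo"]]))
```

The key design choice is length normalization: without it, longer sequences would be penalized simply for having more factors in the product, conflating length with uncertainty.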
Papers
Calibrating Sequence Likelihood Improves Conditional Language Generation
Yao Zhao, Misha Khalman, Rishabh Joshi, Shashi Narayan, Mohammad Saleh, Peter J. Liu
Out-of-Distribution Detection and Selective Generation for Conditional Language Models
Jie Ren, Jiaming Luo, Yao Zhao, Kundan Krishna, Mohammad Saleh, Balaji Lakshminarayanan, Peter J. Liu