LLM Output
Large language model (LLM) output research focuses on improving the reliability and consistency of LLM-generated text, and on aligning it with user intent and factual accuracy. Current efforts concentrate on enhancing decoding strategies, for example through game-theoretic formulations and attention score manipulation, and on developing methods that control output format and mitigate issues such as verbosity and bias through aggregation and calibration. These advances are crucial for increasing the trustworthiness and practical applicability of LLMs across diverse fields, from translation and code generation to healthcare and finance.
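As a concrete illustration of output-format control, the sketch below masks next-token logits so that only tokens consistent with a target format can be sampled, which is the core idea behind constrained decoding. It is a minimal, self-contained toy: the vocabulary, the stand-in `fake_logits` model, and the digit-string constraint are all hypothetical and are not drawn from the papers listed here.

```python
import math
import random

# Toy sketch of format-constrained decoding: at each step we mask the
# (hypothetical) model's next-token logits so that only tokens keeping the
# output a valid digit string remain sampleable. A real system would apply
# the same mask to an actual LLM's logits at every decoding step.

VOCAB = ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9", "cat", "<eos>"]

def fake_logits(prefix):
    # Stand-in for a real model call; returns arbitrary per-token scores.
    rng = random.Random(len(prefix))
    return [rng.uniform(-1.0, 1.0) for _ in VOCAB]

def allowed(token, prefix):
    # Format constraint: digits only, with <eos> permitted once the
    # output is non-empty.
    return token.isdigit() or (token == "<eos>" and bool(prefix))

def constrained_sample(prefix):
    logits = fake_logits(prefix)
    # Disallowed tokens get -inf, so their probability becomes exactly 0.
    masked = [l if allowed(t, prefix) else float("-inf")
              for t, l in zip(VOCAB, logits)]
    m = max(masked)
    exps = [math.exp(l - m) for l in masked]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(VOCAB, weights=probs)[0]

def generate(max_len=8):
    out = []
    for _ in range(max_len):
        tok = constrained_sample(out)
        if tok == "<eos>":
            break
        out.append(tok)
    return "".join(out)

print(generate())  # e.g. "4719" -- never "cat", by construction
```

In practice the mask is derived from a grammar or regular expression rather than a hand-written predicate, but the mechanism is the same: zero out the probability of any continuation that would leave the valid output space.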
Papers
Planning In Natural Language Improves LLM Search For Code Generation
Evan Wang, Federico Cassano, Catherine Wu, Yunfeng Bai, Will Song, Vaskar Nath, Ziwen Han, Sean Hendryx, Summer Yue, Hugh Zhang
Sketch: A Toolkit for Streamlining LLM Operations
Xin Jiang, Xiang Li, Wenjia Ma, Xuezhi Fang, Yiqun Yao, Naitong Yu, Xuying Meng, Peng Han, Jing Li, Aixin Sun, Yequan Wang