LLM Output

Large language model (LLM) output research focuses on improving the reliability and consistency of LLM-generated text and its alignment with user intent and factual accuracy. Current efforts concentrate on enhancing decoding strategies, for example through game-theoretic approaches and attention-score manipulation, and on controlling output format while mitigating issues such as verbosity and bias via aggregation and calibration. These advances are crucial for increasing the trustworthiness and practical applicability of LLMs across diverse fields, from translation and code generation to healthcare and finance.
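
Many of the decoding strategies surveyed here come down to how the next-token distribution is shaped before a token is drawn. As a minimal, self-contained sketch (the logits and temperature values below are illustrative, not taken from any particular paper), temperature-scaled sampling works like this:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature: values below 1.0 sharpen the
    # distribution (more deterministic output), values above 1.0
    # flatten it (more diverse output).
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(logits, temperature=1.0, rng=random):
    # Draw one token index from the temperature-adjusted distribution.
    probs = softmax(logits, temperature)
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# Hypothetical next-token scores for a three-token vocabulary.
logits = [2.0, 1.0, 0.1]
print(softmax(logits, temperature=0.5))
print(softmax(logits, temperature=1.5))
```

Lowering the temperature concentrates probability mass on the highest-scoring token, which is one simple lever behind the consistency-versus-diversity trade-off these papers study.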

Papers