Generative Model Output
Generative model output research focuses on improving the quality, reliability, and control of outputs from AI systems, particularly large language models (LLMs) and generative transformers. Current research emphasizes evaluating and mitigating biases in outputs, developing more accurate and nuanced evaluation metrics beyond existing automated assessments, and exploring methods to control and constrain generation, such as through retrieval-augmented generation or inference-time interventions. This work is crucial for advancing the responsible development and deployment of generative AI across diverse applications, from scientific idea assessment to creative content generation and even physical object production.
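To make the "control and constrain generation" direction concrete, below is a minimal, illustrative sketch of a retrieval-augmented generation loop: relevant passages are retrieved for a query and prepended to the prompt so the generator is grounded in that evidence. The word-overlap retriever and the generate_fn callable are assumptions for illustration only, not the method of any specific paper listed on this page.

# Minimal RAG sketch: retrieve supporting passages, build an evidence-grounded
# prompt, and delegate generation to a caller-supplied function.
from typing import Callable, List


def retrieve(query: str, corpus: List[str], k: int = 2) -> List[str]:
    """Rank corpus passages by word overlap with the query (toy retriever)."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_words & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]


def rag_answer(query: str, corpus: List[str], generate_fn: Callable[[str], str]) -> str:
    """Constrain generation by prepending retrieved context to the prompt."""
    passages = retrieve(query, corpus)
    context = "\n".join(f"- {p}" for p in passages)
    prompt = (
        "Answer the question using only the context below.\n"
        f"Context:\n{context}\n"
        f"Question: {query}\nAnswer:"
    )
    return generate_fn(prompt)


if __name__ == "__main__":
    docs = [
        "Retrieval-augmented generation grounds model outputs in retrieved documents.",
        "Inference-time interventions steer activations without retraining the model.",
        "Automated metrics often miss nuanced errors in generated text.",
    ]
    # Echo the prompt instead of calling a real model, keeping the sketch self-contained.
    print(rag_answer("How can generation be grounded in evidence?", docs, generate_fn=lambda p: p))

In practice, generate_fn would wrap an actual LLM call, and the toy retriever would be replaced by a dense or lexical retriever; the point of the sketch is only the structure of grounding generation in retrieved evidence.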