Generative Model Output

Generative model output research focuses on improving the quality, reliability, and controllability of outputs from AI systems, particularly large language models (LLMs) and generative transformers. Current research emphasizes evaluating and mitigating biases in outputs, developing evaluation metrics that capture nuances missed by existing automated assessments, and controlling or constraining generation through methods such as retrieval-augmented generation and inference-time interventions. This work is crucial for the responsible development and deployment of generative AI across diverse applications, from scientific idea assessment to creative content generation and even physical object production.
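Of the control methods mentioned above, retrieval-augmented generation is the most mechanical to illustrate: relevant passages are retrieved from a corpus and prepended to the prompt so the generator's output is grounded in them. The sketch below is purely illustrative, not any specific system's API; the toy corpus, the word-overlap scoring, and the prompt template are all assumptions standing in for a real retriever and LLM.

```python
# Minimal RAG sketch: retrieve supporting passages, then build a grounded
# prompt. A real system would use dense embeddings and an actual LLM;
# here, word overlap stands in for retrieval scoring.

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query; return the top k."""
    q_words = set(query.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, corpus):
    """Prepend retrieved passages so generation is constrained by them."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Retrieval-augmented generation grounds model outputs in retrieved text.",
    "Inference-time interventions steer activations without retraining.",
    "Automated metrics often miss nuanced failures in generated text.",
]

prompt = build_prompt("How does retrieval-augmented generation work?", corpus)
print(prompt)
```

The key design point is that control happens at the input side: the generator never changes, so the same mechanism works with any off-the-shelf model.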

Papers