Natural Language Feedback
Natural language feedback is an increasingly important signal for training and refining machine learning models: human-written corrections and critiques are incorporated to improve model accuracy and reliability. Current research applies this feedback in several ways, including iteratively adjusting prompts for diffusion models, fine-tuning large language models by directly predicting the feedback they would receive, and guiding the design of entire systems through analysis of aggregated feedback. The approach promises more robust and human-centered AI systems across applications ranging from mathematical verification to interactive semantic parsing and search.
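The common pattern behind many of these methods is a refinement loop: generate an output, collect natural language feedback on it, fold the feedback back into the prompt, and regenerate. A minimal sketch of that loop is below; `generate` is a hypothetical stand-in for a real model call, and the feedback function is a toy verifier, not any specific system from the papers.

```python
def generate(prompt: str) -> str:
    # Hypothetical model stub: returns a bare answer, and a worked-out
    # answer only when the prompt carries corrective feedback asking for steps.
    if "show your steps" in prompt:
        return "12 * 4 = 12 + 12 + 12 + 12 = 48"
    return "48"

def refine(question: str, feedback_fn, max_rounds: int = 3) -> str:
    """Generate, collect natural language feedback, and regenerate."""
    prompt = question
    answer = generate(prompt)
    for _ in range(max_rounds):
        feedback = feedback_fn(answer)
        if feedback is None:  # no complaints: accept the answer
            return answer
        # Fold the feedback into the next prompt and try again.
        prompt = f"{question}\nFeedback on previous answer: {feedback}"
        answer = generate(prompt)
    return answer

# Usage: a verifier that rejects bare answers and asks for worked steps.
answer = refine(
    "What is 12 * 4?",
    lambda a: None if "=" in a else "Please show your steps.",
)
```

Real systems replace the stub with an actual model call and source the feedback from humans, a critic model, or a verifier; the loop structure stays the same.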