Self-Feedback
Self-feedback, the process by which a system evaluates and improves its own output, is a rapidly developing area of research, particularly for large language models (LLMs). Current work applies self-feedback to improve reasoning, reduce hallucinations, and make instruction following more reliable, often through actor-critic methods, self-training frameworks, and iterative self-evaluation loops. These advances matter because they reduce reliance on expensive human feedback, improve model accuracy and robustness, and point toward more reliable and efficient AI systems across diverse applications.
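To make the iterative self-evaluation loop concrete, here is a minimal sketch in Python. It is not drawn from any of the papers below: the `self_refine` name, the prompt templates, and the `NO ISSUES` stop phrase are illustrative assumptions, and `generate` stands in for any text-in/text-out model call.

```python
from typing import Callable

def self_refine(
    generate: Callable[[str], str],  # any LLM wrapper: prompt in, text out
    task: str,
    max_rounds: int = 3,
    stop_phrase: str = "NO ISSUES",  # assumed convention for "done"
) -> str:
    """Draft an answer, then repeatedly critique and revise it with the
    same model. Prompts and stopping rule are illustrative, not from a
    specific paper."""
    draft = generate(f"Task: {task}\nAnswer:")
    for _ in range(max_rounds):
        # Self-feedback step: the model critiques its own draft.
        feedback = generate(
            f"Task: {task}\nDraft answer: {draft}\n"
            f"List concrete problems with the draft, "
            f"or reply '{stop_phrase}' if there are none."
        )
        if stop_phrase in feedback:
            break
        # Refinement step: the model revises using its own critique.
        draft = generate(
            f"Task: {task}\nDraft answer: {draft}\n"
            f"Feedback: {feedback}\nRevised answer:"
        )
    return draft
```

The same loop structure underlies many of the approaches surveyed here; what varies is where the feedback comes from (the model itself, a separate critic, or a verifier) and how the stopping condition is defined.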
Papers
Mastering the ABCDs of Complex Questions: Answer-Based Claim Decomposition for Fine-grained Self-Evaluation
Nishant Balepur, Jie Huang, Samraj Moorjani, Hari Sundaram, Kevin Chen-Chuan Chang
Have Large Language Models Developed a Personality?: Applicability of Self-Assessment Tests in Measuring Personality in LLMs
Xiaoyang Song, Akshat Gupta, Kiyan Mohebbizadeh, Shujie Hu, Anant Singh