Inconsistent Responses
Inconsistent responses from AI models, particularly large language models (LLMs), are a significant obstacle to their reliable deployment: the same underlying question, phrased slightly differently, can elicit contradictory answers. Current research identifies and mitigates these inconsistencies through techniques such as checking response consistency across paraphrased or otherwise similar inputs, refining training to align responses with human expectations, and filtering unreliable information sources from the model's knowledge base. Addressing this issue is crucial for the trustworthiness and dependability of AI systems across applications ranging from question answering and knowledge editing to clinical decision support and drug discovery.
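A minimal sketch of the first technique, consistency checking across similar inputs: sample the model on several paraphrases of one question and score pairwise agreement between the answers. The `query_model` helper here is a hypothetical stand-in for whatever LLM API is under test, and token overlap is a deliberately crude agreement proxy; published work typically scores agreement with embedding similarity or entailment models instead.

```python
from itertools import combinations

def normalize(text: str) -> str:
    """Lowercase and drop punctuation so trivially different
    phrasings of the same answer compare as equal tokens."""
    return "".join(ch for ch in text.lower() if ch.isalnum() or ch.isspace()).strip()

def overlap(a: str, b: str) -> float:
    """Token-overlap coefficient: |A & B| / min(|A|, |B|).
    A crude lexical proxy for semantic agreement; real systems
    would use embedding similarity or an NLI model here."""
    ta, tb = set(normalize(a).split()), set(normalize(b).split())
    if not ta or not tb:
        return float(ta == tb)
    return len(ta & tb) / min(len(ta), len(tb))

def consistency_score(responses: list[str]) -> float:
    """Mean pairwise overlap across all responses to paraphrases
    of one question; 1.0 means every paraphrase agreed."""
    if len(responses) < 2:
        return 1.0
    pairs = list(combinations(responses, 2))
    return sum(overlap(a, b) for a, b in pairs) / len(pairs)

# `query_model` is hypothetical; each paraphrase asks the same
# underlying question, so a reliable model should agree with itself.
paraphrases = [
    "What year did the Berlin Wall fall?",
    "In which year was the Berlin Wall torn down?",
    "When did the Berlin Wall come down?",
]
# responses = [query_model(p) for p in paraphrases]
responses = ["1989", "The wall fell in 1989.", "It came down in 1989."]
print(f"consistency = {consistency_score(responses):.2f}")  # -> 0.80
```

A low score on a batch of paraphrase sets flags questions where the model's knowledge is unstable, which is the usual trigger for the mitigation techniques described above.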