Abstract Plausibility
Abstract plausibility research evaluates and improves the believability of outputs generated by AI models, particularly large language models (LLMs). Current efforts concentrate on probabilistic frameworks for assessing plausibility, on incorporating external knowledge bases, and on refining evaluation metrics such as log-likelihood scores and adaptations of BLEU. This work is crucial for enhancing the trustworthiness and reliability of AI systems across applications ranging from the interpretability of model explanations to the safety of autonomous vehicles.
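As a concrete illustration of log-likelihood-based plausibility scoring, the sketch below ranks candidate sentences by their mean per-token log-likelihood under a causal language model. It is a minimal sketch under stated assumptions: the model choice (GPT-2 via Hugging Face transformers), the length normalization, and the `plausibility_score` helper are illustrative, not a method drawn from any particular paper surveyed here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative sketch: score text plausibility via mean per-token
# log-likelihood under a small causal LM. Model ("gpt2") and the
# length normalization are assumptions for demonstration only.

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def plausibility_score(text: str) -> float:
    """Return the mean per-token log-likelihood of `text`.

    Higher (less negative) scores indicate outputs the model itself
    considers more plausible; normalizing by token count avoids
    systematically penalizing longer candidates.
    """
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing labels makes the model return the mean
        # cross-entropy (negative log-likelihood) over tokens.
        outputs = model(**inputs, labels=inputs["input_ids"])
    return -outputs.loss.item()

candidates = [
    "The car stopped because the traffic light turned red.",
    "The car stopped because the moon tasted purple.",
]
for text in candidates:
    print(f"{plausibility_score(text):8.3f}  {text}")
```

The semantically coherent sentence receives a higher score, which is the basic signal log-likelihood metrics exploit; more elaborate approaches layer external knowledge bases or calibrated probabilistic frameworks on top of this raw model confidence.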