NLI Model
Natural Language Inference (NLI) models determine the logical relationship (entailment, contradiction, or neutral) between a premise and a hypothesis. Current research emphasizes improving model robustness and addressing biases, particularly through techniques such as chain-of-thought prompting, continual learning, and causal effect estimation, which are used to probe how these models reason. This work is crucial for enhancing the reliability and trustworthiness of NLP systems across diverse applications, including healthcare and data analysis, where accurate and explainable inferences are paramount.
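To make the premise/hypothesis setup concrete, here is a minimal sketch of running pairwise NLI with an off-the-shelf model. It assumes the Hugging Face transformers library and the publicly available roberta-large-mnli checkpoint; the checkpoint name and the classify helper are illustrative choices, and any MNLI-style sequence-classification model would slot in the same way.

```python
# Minimal NLI sketch (assumed setup: transformers + the roberta-large-mnli checkpoint).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "roberta-large-mnli"  # assumed checkpoint; any MNLI-style model works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def classify(premise: str, hypothesis: str) -> dict:
    """Return label probabilities (entailment / neutral / contradiction) for one pair."""
    # NLI models take the premise and hypothesis as a single paired input.
    inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=-1).squeeze(0)
    # Read the label order from the model config rather than hard-coding it.
    return {model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)}

print(classify(
    premise="A nurse is reviewing the patient's lab results.",
    hypothesis="Someone is looking at medical data.",
))  # expected to assign most probability mass to the entailment label
```

The direction matters: swapping premise and hypothesis generally changes the prediction, which is why NLI is framed over an ordered pair rather than an unordered sentence pair.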
Papers
The papers collected under this topic were published between December 19, 2022 and October 21, 2024.