Plausible Clarification
Plausible clarification research aims to improve human-computer interaction by addressing ambiguity and uncertainty in communication, particularly in AI systems and human-human dialogue. Current work explores methods for generating clarifying questions, disambiguating user intent (e.g., via conformal prediction or tree-based approaches), and improving the accuracy of intent recognition in dialogue systems. These advances matter both for building more robust and user-friendly AI systems and for deepening our understanding of human communication and knowledge representation.
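To make the conformal-prediction angle concrete, here is a minimal sketch of one common pattern: calibrate a threshold on held-out classifier scores, form a prediction *set* of plausible intents at inference time, and trigger a clarifying question whenever the set contains more than one intent. All names, data, and parameters below are hypothetical illustrations, not drawn from any specific paper listed on this page.

```python
import numpy as np

def conformal_threshold(cal_scores: np.ndarray, alpha: float = 0.1) -> float:
    """Split-conformal calibration. cal_scores are nonconformity scores
    (here: 1 - p(true intent)) computed on held-out labeled utterances."""
    n = len(cal_scores)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return float(np.quantile(cal_scores, level, method="higher"))

def intent_set(probs: np.ndarray, qhat: float) -> np.ndarray:
    """Prediction set: every intent whose nonconformity 1 - p is <= qhat.
    By the conformal guarantee, it covers the true intent with
    probability >= 1 - alpha (marginally, over calibration/test draws)."""
    return np.where(1.0 - probs <= qhat)[0]

# Stand-in calibration scores; a real system would compute these from the
# intent classifier's probabilities on (utterance, gold intent) pairs.
rng = np.random.default_rng(0)
cal_scores = 1.0 - rng.uniform(0.3, 1.0, size=500)
qhat = conformal_threshold(cal_scores, alpha=0.1)

probs = np.array([0.46, 0.42, 0.12])  # softmax over 3 hypothetical intents
candidates = intent_set(probs, qhat)
if len(candidates) > 1:
    print(f"ambiguous ({candidates.tolist()}): ask a clarifying question")
else:
    print(f"confident: resolve as intent {candidates[0]}")
```

The design point is that set size, rather than a hand-tuned confidence cutoff, decides when to ask: a singleton set means the system can act, while a multi-intent set enumerates exactly the alternatives a clarifying question should distinguish.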