Cognitive Plausibility
Cognitive plausibility in artificial intelligence measures how closely AI models, particularly large language models (LLMs) and other machine learning systems, mimic human cognitive processes and behavior. Current research evaluates plausibility in two main ways: comparing model outputs with human performance on tasks such as sentence plausibility judgment and lexical decision, and examining how well a model's internal representations align with known aspects of human cognition (e.g., processing effort, emotional responses to art). Such work is crucial for improving AI transparency and interpretability and, ultimately, for building more robust, human-centered AI systems.