Cognitive Plausibility

Cognitive plausibility in artificial intelligence measures how closely AI models, particularly large language models (LLMs) and other machine learning systems, mirror human cognitive processes and behavior. Current research evaluates plausibility along two main lines: comparing model outputs with human performance on behavioral tasks such as sentence plausibility judgment and lexical decision, and examining how well a model's internal representations align with known aspects of human cognition (e.g., processing effort or emotional responses to art). This work is important for improving AI transparency and interpretability and, ultimately, for building more robust, human-centered AI systems.
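As a concrete illustration of the output-comparison approach, the sketch below correlates per-sentence model scores with human plausibility ratings using Spearman rank correlation. All numbers are made-up placeholders, not real experimental data, and the correlation is implemented from scratch only to keep the example self-contained; in practice one would use a library routine such as `scipy.stats.spearmanr`.

```python
# Sketch: assessing cognitive plausibility by correlating a model's
# sentence scores with human plausibility judgments.
# All data below are illustrative placeholders.

def rankdata(values):
    """Assign 1-based average ranks to values (ties share the mean rank)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # Extend j over any run of tied values.
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-sentence model log-probabilities (higher = more
# plausible to the model) and human plausibility ratings (1-7 scale).
model_scores = [-12.3, -25.1, -14.8, -31.0, -11.2]
human_ratings = [6.1, 2.4, 6.5, 1.8, 5.7]

rho = spearman(model_scores, human_ratings)
print(f"Spearman rho between model and human judgments: {rho:.2f}")
```

A higher rank correlation indicates that sentences the model assigns high probability tend to be the ones humans also judge plausible, which is one common operationalization of behavioral alignment.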

Papers