Epistemic Space
Epistemic space research explores how knowledge and belief are represented and revised, particularly within artificial intelligence systems. Current work develops formal frameworks for belief change, probes what large language models actually "know" and believe, and improves the explainability and reliability of AI through methods such as disentangled representation learning and constrained factor graphs. These efforts matter for building more trustworthy and robust AI systems, for advancing our understanding of human cognition, and for addressing ethical concerns about AI's impact on society.
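To make the idea of a formal framework for belief change concrete, here is a minimal sketch of belief revision over a propositional belief base. The representation (literals as signed strings) and the revision rule (retract the direct contradiction before adding the new belief) are simplifying assumptions for illustration, not the method of any paper listed below or a full AGM construction.

```python
# Minimal belief-base revision sketch (illustrative assumptions only):
# beliefs are propositional literals such as "p" or "~p".

def negate(literal: str) -> str:
    """Return the negation of a literal, e.g. 'p' <-> '~p'."""
    return literal[1:] if literal.startswith("~") else "~" + literal

def expand(beliefs: set[str], new: str) -> set[str]:
    """Expansion: add a belief without checking for consistency."""
    return beliefs | {new}

def revise(beliefs: set[str], new: str) -> set[str]:
    """Revision: retract the directly contradicting literal, then add
    the new belief, so the base stays consistent and the new belief
    is accepted (the 'success' property)."""
    return (beliefs - {negate(new)}) | {new}

if __name__ == "__main__":
    base = {"p", "q"}
    print(expand(base, "r"))    # {'p', 'q', 'r'}
    print(revise(base, "~p"))   # {'q', '~p'}: 'p' retracted, '~p' adopted
```

Even this toy version shows the distinction the literature draws between expansion (which can leave the base inconsistent) and revision (which resolves the conflict in favor of the incoming belief).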
Papers
Belief in the Machine: Investigating Epistemological Blind Spots of Language Models
Mirac Suzgun, Tayfun Gur, Federico Bianchi, Daniel E. Ho, Thomas Icard, Dan Jurafsky, James Zou
Are LLM-Judges Robust to Expressions of Uncertainty? Investigating the effect of Epistemic Markers on LLM-based Evaluation
Dongryeol Lee, Yerin Hwang, Yongil Kim, Joonsuk Park, Kyomin Jung