Commonsense Reasoning Task
Commonsense reasoning tasks aim to equip artificial intelligence with the ability to understand and reason about everyday situations, a crucial step towards creating more human-like AI. Current research focuses on evaluating and improving the commonsense reasoning capabilities of large language models (LLMs) and multimodal LLMs using various benchmarks and prompting techniques, including chain-of-thought prompting, contrastive prompting, and methods that leverage knowledge graphs or tree-based preference learning. These efforts are significant because advancements in commonsense reasoning are essential for building more robust and reliable AI systems across numerous applications, from question answering to decision-making in complex scenarios. The field is actively exploring ways to move beyond superficial statistical correlations towards genuine understanding and reasoning.
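As a concrete illustration of the chain-of-thought prompting mentioned above, the sketch below formats a CommonsenseQA-style multiple-choice question so that a model is asked to reason step by step before answering. The example question, the answer choices, and the `query_model` stub are illustrative placeholders only; they are not drawn from any of the papers listed below, and the actual LLM call is left abstract.

```python
# Minimal sketch of chain-of-thought prompting for a commonsense
# multiple-choice question. Only prompt construction is shown; the
# model call is a placeholder.

def build_cot_prompt(question: str, choices: list[str]) -> str:
    """Format the question and choices, then ask the model to reason
    step by step before committing to a final answer letter."""
    options = "\n".join(
        f"({chr(ord('A') + i)}) {choice}" for i, choice in enumerate(choices)
    )
    return (
        f"Question: {question}\n"
        f"Answer choices:\n{options}\n"
        "Let's think step by step, then give the final answer as a single letter."
    )


def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with your own."""
    raise NotImplementedError


if __name__ == "__main__":
    prompt = build_cot_prompt(
        "Where would you put a dirty dish right after dinner?",
        ["cupboard", "dishwasher", "restaurant", "attic"],
    )
    print(prompt)  # inspect the prompt; pass it to query_model(...) in practice
```

A direct (non-chain-of-thought) prompt would omit the "think step by step" instruction and ask for the answer letter immediately; comparing the two variants is a common way the benchmarks below probe whether reasoning steps actually improve accuracy.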
Papers
ByteSized32: A Corpus and Challenge Task for Generating Task-Specific World Models Expressed as Text Games
Ruoyao Wang, Graham Todd, Eric Yuan, Ziang Xiao, Marc-Alexandre Côté, Peter Jansen
Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners
Xiaojuan Tang, Zilong Zheng, Jiaqi Li, Fanxu Meng, Song-Chun Zhu, Yitao Liang, Muhan Zhang
JECC: Commonsense Reasoning Tasks Derived from Interactive Fictions
Mo Yu, Yi Gu, Xiaoxiao Guo, Yufei Feng, Xiaodan Zhu, Michael Greenspan, Murray Campbell, Chuang Gan
SafeText: A Benchmark for Exploring Physical Safety in Language Models
Sharon Levy, Emily Allaway, Melanie Subbiah, Lydia Chilton, Desmond Patton, Kathleen McKeown, William Yang Wang
PseudoReasoner: Leveraging Pseudo Labels for Commonsense Knowledge Base Population
Tianqing Fang, Quyet V. Do, Hongming Zhang, Yangqiu Song, Ginny Y. Wong, Simon See
MICO: A Multi-alternative Contrastive Learning Framework for Commonsense Knowledge Representation
Ying Su, Zihao Wang, Tianqing Fang, Hongming Zhang, Yangqiu Song, Tong Zhang