Reasoning Process
Reasoning processes in artificial intelligence, particularly within large language models (LLMs), are a central focus of current research that aims to understand and improve how these models solve complex problems. This work spans prompting techniques such as chain-of-thought (CoT) and tree-of-thoughts (ToT), model architectures designed to strengthen reasoning capabilities, and evaluation methods that assess not only the accuracy of final answers but also the validity and reliability of the intermediate reasoning steps. Improving LLMs' reasoning abilities is crucial for building more trustworthy and robust AI systems, with applications ranging from the legal and medical domains to scientific discovery.
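To make the CoT idea concrete, here is a minimal sketch of few-shot chain-of-thought prompting: the model is shown a worked example that spells out its reasoning before the answer, and the final answer is extracted from the completion. The `query_llm` function is a hypothetical placeholder, not a real API; swap in whatever client you actually use.

```python
# Minimal chain-of-thought (CoT) prompting sketch, assuming a generic
# text-completion interface. `query_llm` is a hypothetical stand-in for
# a real model call (OpenAI, Anthropic, a local model, etc.).

COT_PROMPT = """Q: A cafeteria had 23 apples. They used 20 and bought 6 more. How many apples do they have?
A: Let's think step by step.
The cafeteria started with 23 apples.
After using 20, they had 23 - 20 = 3 apples.
After buying 6 more, they had 3 + 6 = 9 apples.
The answer is 9.

Q: {question}
A: Let's think step by step.
"""

def query_llm(prompt: str) -> str:
    # Placeholder: replace with a real model call. Returns a canned
    # completion so the sketch runs end to end.
    return ("Roger started with 5 balls.\n"
            "2 cans of 3 balls each is 2 * 3 = 6 balls.\n"
            "5 + 6 = 11 balls.\n"
            "The answer is 11.")

def answer_with_cot(question: str) -> str:
    """Prompt with a worked example so the model emits its reasoning
    steps before the final answer, then extract that answer."""
    completion = query_llm(COT_PROMPT.format(question=question))
    for line in completion.splitlines():
        if line.startswith("The answer is"):
            return line.removeprefix("The answer is").strip(" .")
    return completion.strip()  # fall back to the raw completion

if __name__ == "__main__":
    q = ("Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
         "How many balls does he have now?")
    print(answer_with_cot(q))  # -> 11
```

The point of the few-shot example is that the model imitates the step-by-step format, which tends to improve accuracy on multi-step problems; evaluating whether those intermediate steps are themselves valid is exactly the open problem the papers below address.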
Papers
Assessing the Reasoning Abilities of ChatGPT in the Context of Claim Verification
John Dougrez-Lewis, Mahmud Elahi Akhter, Yulan He, Maria Liakata
Python is Not Always the Best Choice: Embracing Multilingual Program of Thoughts
Xianzhen Luo, Qingfu Zhu, Zhiming Zhang, Libo Qin, Xuanyu Zhang, Qing Yang, Dongliang Xu, Wanxiang Che
Enhancing Numerical Reasoning with the Guidance of Reliable Reasoning Processes
Dingzirui Wang, Longxu Dou, Xuanliang Zhang, Qingfu Zhu, Wanxiang Che