Critique Ability
Critique ability in large language models (LLMs) refers to their capacity to identify and correct errors in their own reasoning and generated outputs. Current research emphasizes benchmarking this ability across diverse tasks, using metrics beyond simple accuracy to assess aspects such as reasoning steps, constraint satisfaction, and handling of complex instructions, often employing techniques like chain-of-thought prompting and self-critique mechanisms. This work is crucial for improving LLM reliability and trustworthiness, with impact ranging from automated reasoning and code generation to more nuanced applications requiring robust and explainable AI.
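To make the self-critique idea concrete, the sketch below shows one common pattern: the model drafts a chain-of-thought answer, critiques its own output, and revises until the critique passes or a round limit is reached. This is an illustrative assumption rather than the method of any paper listed here; `call_llm` is a hypothetical stub standing in for whatever model API is actually used.

```python
# Minimal sketch of a generate -> self-critique -> revise loop (assumed pattern).

def call_llm(prompt: str) -> str:
    """Hypothetical stub: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError("Plug in your model client here.")

def self_critique(question: str, max_rounds: int = 2) -> str:
    # Initial chain-of-thought answer.
    answer = call_llm(f"Answer step by step:\n{question}")
    for _ in range(max_rounds):
        # Ask the model to review its own reasoning and constraint handling.
        critique = call_llm(
            "Review the following answer for reasoning errors or unmet "
            "constraints. Reply 'OK' if it is correct.\n\n"
            f"Question: {question}\nAnswer: {answer}"
        )
        if critique.strip().upper().startswith("OK"):
            break
        # Revise the answer using the critique as feedback.
        answer = call_llm(
            f"Question: {question}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nWrite a corrected answer."
        )
    return answer
```

Benchmarks in this area typically score not just the final answer but whether the critique step correctly flags (or wrongly accepts) flawed intermediate reasoning.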
Papers
Granite-Function Calling Model: Introducing Function Calling Abilities via Multi-task Learning of Granular Tasks
Ibrahim Abdelaziz, Kinjal Basu, Mayank Agarwal, Sadhana Kumaravel, Matthew Stallone, Rameswar Panda, Yara Rizk, GP Bhargav, Maxwell Crouse, Chulaka Gunasekara, Shajith Ikbal, Sachin Joshi, Hima Karanam, Vineet Kumar, Asim Munawar, Sumit Neelam, Dinesh Raghu, Udit Sharma, Adriana Meza Soria, Dheeraj Sreedhar, Praveen Venkateswaran, Merve Unuvar, David Cox, Salim Roukos, Luis Lastras, Pavan Kapanipathi
STBench: Assessing the Ability of Large Language Models in Spatio-Temporal Analysis
Wenbin Li, Di Yao, Ruibo Zhao, Wenjie Chen, Zijie Xu, Chengxue Luo, Chang Gong, Quanliang Jing, Haining Tan, Jingping Bi