Code Benchmark
Code benchmarks are standardized evaluations that assess the code generation and reasoning capabilities of large language models (LLMs). Current research focuses on creating more comprehensive benchmarks that address limitations of existing datasets, such as language bias, limited task diversity, and the lack of evaluation of code efficiency and robustness beyond simple functional correctness. These efforts involve developing automated benchmark construction pipelines and novel evaluation metrics, often incorporating execution-based verification and multi-dimensional assessments. Improved benchmarks are crucial for advancing LLM development and ensuring the reliability of AI-generated code in real-world applications.
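To illustrate what execution-based verification typically involves, the minimal sketch below runs a model-generated completion against unit tests in a separate process and reports functional correctness. The task, candidate solution, and tests are invented for the example; real benchmark harnesses add sandboxing, resource limits, and aggregate metrics such as pass@k.

```python
import subprocess
import sys
import tempfile
import textwrap

def run_candidate(candidate_code: str, test_code: str, timeout: float = 5.0) -> bool:
    """Execute a generated solution plus its unit tests in a subprocess.

    Returns True if the tests pass within the timeout, False otherwise.
    A separate process gives basic isolation; production harnesses add
    stronger sandboxing and resource limits.
    """
    program = candidate_code + "\n\n" + test_code
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

# Hypothetical model completion and execution-based tests for a toy task.
candidate = textwrap.dedent("""
    def add(a, b):
        return a + b
""")
tests = textwrap.dedent("""
    assert add(1, 2) == 3
    assert add(-1, 1) == 0
""")

print("functionally correct:", run_candidate(candidate, tests))
```

In practice, a benchmark samples many completions per problem and reports the fraction of problems with at least one passing solution (pass@k); multi-dimensional benchmarks additionally measure properties such as runtime efficiency or robustness to perturbed inputs.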
Papers
DOMAINEVAL: An Auto-Constructed Benchmark for Multi-Domain Code Generation
Qiming Zhu, Jialun Cao, Yaojie Lu, Hongyu Lin, Xianpei Han, Le Sun, Shing-Chi Cheung
CRUXEval-X: A Benchmark for Multilingual Code Reasoning, Understanding and Execution
Ruiyang Xu, Jialun Cao, Yaojie Lu, Hongyu Lin, Xianpei Han, Ben He, Shing-Chi Cheung, Le Sun
SciCode: A Research Coding Benchmark Curated by Scientists
Minyang Tian, Luyu Gao, Shizhuo Dylan Zhang, Xinan Chen, Cunwei Fan, Xuefei Guo, Roland Haas, Pan Ji, Kittithat Krongchon, Yao Li, Shengyan Liu, Di Luo, Yutao Ma, Hao Tong, Kha Trinh, Chenyu Tian, Zihan Wang, Bohao Wu, Yanyu Xiong, Shengzhu Yin, Minhui Zhu, Kilian Lieret, Yanxin Lu, Genglin Liu, Yufeng Du, Tianhua Tao, Ofir Press, Jamie Callan, Eliu Huerta, Hao Peng