Theorem Proving
Theorem proving, the automated generation and verification of formal mathematical proofs, seeks to harness computational power for mathematical discovery and verification. Current research centers on large language models (LLMs) operating within proof assistants such as Lean, Isabelle, and Coq, aiming to improve proof-generation accuracy through techniques like subgoal-based learning, data augmentation (including synthetic data generation), and refined prompt engineering, often combined with retrieval-augmented methods and multi-agent systems. The field is significant for its potential to automate complex mathematical reasoning, accelerate scientific discovery, and strengthen the reliability of software verification.
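To make the setting concrete, below is a minimal illustrative Lean 4 proof of the kind these systems are asked to generate and that the proof assistant mechanically verifies. It is a textbook example, not drawn from any of the listed papers: the tactic proof shows the subgoal structure (base case and inductive step) that subgoal-based approaches exploit.

```lean
-- Prove that 0 + n = n for natural numbers. This is not definitional in
-- Lean 4, because Nat.add recurses on its *second* argument, so a proof
-- by induction on n is needed.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                          -- base case: 0 + 0 = 0 holds by computation
  | succ k ih => rw [Nat.add_succ, ih]   -- inductive step: rewrite with the hypothesis
```

A prover model emits the tactic script after `:= by`; Lean's kernel then checks each step, accepting the proof only if every goal is closed, which is what makes generated proofs trustworthy regardless of how they were produced.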
Papers
Pantograph: A Machine-to-Machine Interaction Interface for Advanced Theorem Proving, High Level Reasoning, and Data Extraction in Lean 4
Leni Aniva, Chuyue Sun, Brando Miranda, Clark Barrett, Sanmi Koyejo
Automated Proof Generation for Rust Code via Self-Evolution
Tianyu Chen, Shuai Lu, Shan Lu, Yeyun Gong, Chenyuan Yang, Xuheng Li, Md Rakib Hossain Misu, Hao Yu, Nan Duan, Peng Cheng, Fan Yang, Shuvendu K Lahiri, Tao Xie, Lidong Zhou
Alchemy: Amplifying Theorem-Proving Capability through Symbolic Mutation
Shaonan Wu, Shuai Lu, Yeyun Gong, Nan Duan, Ping Wei
InternLM2.5-StepProver: Advancing Automated Theorem Proving via Expert Iteration on Large-Scale LEAN Problems
Zijian Wu, Suozhi Huang, Zhejian Zhou, Huaiyuan Ying, Jiayu Wang, Dahua Lin, Kai Chen
Proving Theorems Recursively
Haiming Wang, Huajian Xin, Zhengying Liu, Wenda Li, Yinya Huang, Jianqiao Lu, Zhicheng Yang, Jing Tang, Jian Yin, Zhenguo Li, Xiaodan Liang
DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data
Huajian Xin, Daya Guo, Zhihong Shao, Zhizhou Ren, Qihao Zhu, Bo Liu, Chong Ruan, Wenda Li, Xiaodan Liang