Proof Generation
Proof generation aims to automatically construct formal mathematical proofs, reducing the substantial manual effort that formal verification demands and deepening our understanding of automated reasoning. Current research relies heavily on large language models (LLMs), typically through generate-then-repair strategies or iterative backward reasoning, sometimes augmented by Monte Carlo tree search or reinforcement learning from proof-assistant feedback. These advances are improving LLMs' ability to produce correct and efficient proofs, with applications in software verification and the potential to accelerate mathematical discovery by automating parts of the proof process.
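To make the generate-then-repair idea concrete, here is a minimal sketch of such a loop. The three callables (`generate`, `check`, `repair`) are assumptions for illustration, not any specific paper's API: `generate` drafts a candidate proof from a statement, `check` stands in for a proof-assistant wrapper returning success plus error messages, and `repair` asks the model to revise a failing proof given those messages.

```python
from typing import Callable, Optional, Tuple

def prove(
    statement: str,
    generate: Callable[[str], str],
    check: Callable[[str, str], Tuple[bool, str]],
    repair: Callable[[str, str, str], str],
    max_repairs: int = 3,
) -> Optional[str]:
    """Return a machine-checked proof of `statement`, or None on failure."""
    proof = generate(statement)                    # initial LLM candidate
    for _ in range(max_repairs):
        ok, errors = check(statement, proof)       # proof-assistant feedback
        if ok:
            return proof                           # verified: done
        proof = repair(statement, proof, errors)   # feed errors back to the model
    return None                                    # repair budget exhausted
```

Systems differ in how this loop is driven: some repair a single candidate as above, while others search over many candidates at once, for example with Monte Carlo tree search, or train the generator with reinforcement learning on the checker's verdicts.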
Papers
Proving Theorems Recursively
Haiming Wang, Huajian Xin, Zhengying Liu, Wenda Li, Yinya Huang, Jianqiao Lu, Zhicheng Yang, Jing Tang, Jian Yin, Zhenguo Li, Xiaodan Liang
DeepSeek-Prover: Advancing Theorem Proving in LLMs through Large-Scale Synthetic Data
Huajian Xin, Daya Guo, Zhihong Shao, Zhizhou Ren, Qihao Zhu, Bo Liu, Chong Ruan, Wenda Li, Xiaodan Liang