Proof Generation

Proof generation focuses on automatically constructing formal mathematical proofs, aiming to reduce the substantial manual effort that formal verification requires and to deepen our understanding of automated reasoning. Current research relies heavily on large language models (LLMs), often employing generate-then-repair strategies or iterative backward reasoning, sometimes augmented by Monte Carlo tree search or reinforcement learning from proof assistant feedback. These advances improve the ability of LLMs to produce correct and efficient proofs, with impact on software verification and the potential to accelerate mathematical discovery by automating parts of the proof process.
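
The generate-then-repair strategy mentioned above can be sketched as a simple loop: sample a candidate proof, check it with a proof assistant, and on failure feed the verifier's error messages back to the model for a repair attempt. The sketch below is illustrative only; it assumes a Lean 4 toolchain (`lean` executable) on PATH and an `llm` callable standing in for any proof-generation model, and the function names are hypothetical rather than any particular system's API.

```python
import subprocess
import tempfile
from pathlib import Path


def check_proof(lean_source: str) -> tuple[bool, str]:
    """Run the Lean compiler on a candidate proof and return (success, error log).

    Assumes a `lean` executable is on PATH; any verifier with a pass/fail
    exit code and textual error output would work the same way.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".lean", delete=False) as f:
        f.write(lean_source)
        path = f.name
    result = subprocess.run(["lean", path], capture_output=True, text=True)
    Path(path).unlink(missing_ok=True)
    return result.returncode == 0, result.stderr + result.stdout


def generate_then_repair(theorem: str, llm, max_rounds: int = 4) -> str | None:
    """Generate-then-repair loop over proof assistant feedback.

    `llm` is a placeholder for any callable mapping a prompt string to text.
    """
    candidate = llm(f"Write a Lean 4 proof for the theorem:\n{theorem}\n")
    for _ in range(max_rounds):
        ok, log = check_proof(candidate)
        if ok:
            return candidate  # proof accepted by the proof assistant
        # Repair round: show the model its own attempt plus the error log.
        candidate = llm(
            f"The following Lean 4 proof fails to compile:\n{candidate}\n"
            f"Compiler errors:\n{log}\n"
            "Produce a corrected proof."
        )
    return None  # give up after max_rounds failed repair attempts
```

Search-based variants replace the single repair call with several sampled repairs per round (e.g. ranked by the model's own scores or explored with Monte Carlo tree search), but the verify-and-feed-back skeleton stays the same.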

Papers