Paper ID: 2503.14258 • Published Mar 18, 2025
JuDGE: Benchmarking Judgment Document Generation for Chinese Legal System
This paper introduces JuDGE (Judgment Document Generation Evaluation), a
novel benchmark for evaluating the performance of judgment document generation
in the Chinese legal system. We define the task as generating a complete legal
judgment document from the given factual description of the case. To facilitate
this benchmark, we construct a comprehensive dataset consisting of factual
descriptions from real legal cases, paired with their corresponding full
judgment documents, which serve as the ground truth for evaluating the quality
of generated documents. This dataset is further augmented by two external legal
corpora that provide additional legal knowledge for the task: one comprising
statutes and regulations, and the other consisting of a large collection of
past judgment documents. In collaboration with legal professionals, we
establish a comprehensive automated evaluation framework to assess the quality
of generated judgment documents across various dimensions. We evaluate various
baseline approaches, including few-shot in-context learning, fine-tuning, and a
multi-source retrieval-augmented generation (RAG) approach, using both general
and legal-domain LLMs. The experimental results demonstrate that, while RAG
approaches can effectively improve performance on this task, there is still
substantial room for further improvement. All code and datasets are
available at: this https URL
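
The abstract only names the multi-source RAG baseline, so the following is a minimal illustrative sketch of what such a pipeline could look like: retrieve from a statute corpus and a past-judgment corpus, then prompt an LLM with both alongside the case facts. The TF-IDF retriever, prompt wording, and the `llm_generate` stub are assumptions for illustration, not the authors' implementation.

```python
# Illustrative multi-source RAG sketch for judgment document generation.
# Corpus contents, prompt wording, and the llm_generate stub are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
    """Return the k corpus documents most similar to the query (TF-IDF cosine)."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(corpus + [query])
    sims = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    top = sims.argsort()[::-1][:k]
    return [corpus[i] for i in top]


def build_prompt(facts: str, statutes: list[str], precedents: list[str]) -> str:
    """Assemble a generation prompt from the case facts and both retrieved sources."""
    return (
        "Relevant statutes:\n" + "\n".join(statutes) + "\n\n"
        "Similar past judgments:\n" + "\n".join(precedents) + "\n\n"
        "Factual description of the case:\n" + facts + "\n\n"
        "Draft the complete judgment document:"
    )


# Hypothetical usage: `statute_corpus`, `judgment_corpus`, and `llm_generate`
# stand in for the benchmark's two external corpora and a general or
# legal-domain LLM.
# statutes = retrieve(facts, statute_corpus)
# precedents = retrieve(facts, judgment_corpus)
# judgment = llm_generate(build_prompt(facts, statutes, precedents))
```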
Figures & Tables
Unlock access to paper figures and tables to enhance your research experience.