Legal Reasoning
Legal reasoning research focuses on developing computational models that can understand and emulate human legal reasoning, with the aim of improving efficiency and fairness in legal processes. Current work relies heavily on large language models (LLMs), often enhanced with retrieval-augmented generation (RAG) or fine-tuned on specialized legal datasets, to perform tasks such as legal judgment prediction, question answering, and argumentation analysis. These advances matter because they could automate time-consuming legal tasks, improve access to justice, and make legal decision-making more transparent and consistent.
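To make the RAG-for-legal-QA idea concrete, here is a minimal sketch of the pattern: retrieve the most relevant legal passages for a question, then prompt a language model to answer using only that retrieved context. The corpus, the bag-of-words retriever, and the call_llm placeholder are illustrative assumptions, not the method of any specific paper listed below; production systems would use dense embeddings, a vector index, and a hosted or fine-tuned legal model.

```python
# Minimal sketch of retrieval-augmented generation (RAG) for legal QA.
# Hypothetical corpus and helpers; real systems use dense retrievers and an actual LLM.
import math
from collections import Counter

CORPUS = [
    "A contract requires offer, acceptance, and consideration to be enforceable.",
    "A claim of negligence requires duty, breach, causation, and damages.",
    "Statutes of limitations bar claims filed after the prescribed period.",
]

def vectorize(text: str) -> Counter:
    """Crude bag-of-words vector; stands in for a dense embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the question."""
    q = vectorize(question)
    ranked = sorted(CORPUS, key=lambda doc: cosine(q, vectorize(doc)), reverse=True)
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a (possibly legal-domain fine-tuned) LLM."""
    return f"[model answer grounded in prompt of {len(prompt)} characters]"

def answer(question: str) -> str:
    """Build a context-grounded prompt from retrieved passages and query the model."""
    context = "\n".join(retrieve(question))
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("What elements must a plaintiff prove for negligence?"))
```

Grounding answers in retrieved authority, rather than in the model's parametric memory alone, is also what benchmarks such as the groundedness evaluation in the first paper below aim to measure.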
Papers
Measuring the Groundedness of Legal Question-Answering Systems
Dietrich Trautmann, Natalia Ostapuk, Quentin Grail, Adrian Alan Pol, Guglielmo Bonifazi, Shang Gao, Martin Gajek
Developing a Pragmatic Benchmark for Assessing Korean Legal Language Understanding in Large Language Models
Yeeun Kim, Young Rok Choi, Eunkyung Choi, Jinhwan Choi, Hai Jin Park, Wonseok Hwang
FLawN-T5: An Empirical Examination of Effective Instruction-Tuning Data Mixtures for Legal Reasoning
Joel Niklaus, Lucia Zheng, Arya D. McCarthy, Christopher Hahn, Brian M. Rosen, Peter Henderson, Daniel E. Ho, Garrett Honke, Percy Liang, Christopher Manning
Team UTSA-NLP at SemEval 2024 Task 5: Prompt Ensembling for Argument Reasoning in Civil Procedures with GPT4
Dan Schumacher, Anthony Rios