Complex Reasoning
Complex reasoning in artificial intelligence focuses on developing models capable of multi-step logical inference and problem-solving, mirroring human cognitive abilities. Current research emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often incorporating multi-modal data (e.g., visual and textual information). These advances matter because they improve the reliability and applicability of AI systems across diverse fields, including autonomous driving, robotics, and scientific discovery, by enabling more robust and accurate decision-making in complex scenarios.
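As a concrete illustration of the first technique mentioned above, here is a minimal sketch of chain-of-thought prompting. It assumes the OpenAI Python client; the model name, system instruction, and example problem are illustrative choices, not part of any specific paper's method.

```python
# Minimal chain-of-thought (CoT) prompting sketch, assuming the
# OpenAI Python client (openai >= 1.0). The model name and the
# example question are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "A train travels 60 km in 45 minutes. What is its average speed in km/h?"

# Asking the model to reason step by step elicits intermediate
# reasoning before the final answer -- the core idea of CoT prompting.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": "Reason step by step, then state the final answer on its own line.",
        },
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

In practice, the elicited intermediate steps can then be inspected, scored, or sampled multiple times, which is the starting point for many of the test-time scaling and reasoning-verification methods listed below.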
552 papers
Papers
May 23, 2025
Bridging Supervised Learning and Reinforcement Learning in Math Reasoning
CXReasonBench: A Benchmark for Evaluating Structured Diagnostic Reasoning in Chest X-rays
Reward Model Generalization for Compute-Aware Test-Time Reasoning
Stepwise Reasoning Checkpoint Analysis: A Test Time Scaling Method to Enhance LLMs' Reasoning
Don't Overthink it. Preferring Shorter Thinking Chains for Improved LLM Reasoning
Rethinking the Sampling Criteria in Reinforcement Learning for LLM Reasoning: A Competence-Difficulty Alignment Perspective
HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning
Controlled Agentic Planning & Reasoning for Mechanism Synthesis
Reasoning Meets Personalization: Unleashing the Potential of Large Reasoning Model for Personalized Generation
On the Design of KL-Regularized Policy Gradient Algorithms for LLM Reasoning
CReSt: A Comprehensive Benchmark for Retrieval-Augmented Generation with Complex Reasoning over Structured Documents
From Reasoning to Generalization: Knowledge-Augmented LLMs for ARC Benchmark
MARCO: Meta-Reflection with Cross-Referencing for Code Reasoning
Self-Training Large Language Models with Confident Reasoning
Language Matters: How Do Multilingual Input and Reasoning Paths Affect Large Reasoning Models?
Misaligning Reasoning with Answers -- A Framework for Assessing LLM CoT Robustness