Complex Reasoning
Complex reasoning in artificial intelligence focuses on developing models capable of multi-step logical inference and problem-solving, mirroring human cognitive abilities. Current research emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often incorporating multi-modal data (e.g., visual and textual information). These advances enhance the reliability and applicability of AI systems across diverse fields, including autonomous driving, robotics, and scientific discovery, by enabling more robust and accurate decision-making in complex scenarios.
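Chain-of-thought prompting, mentioned above, can be illustrated with a minimal sketch: a few-shot prompt containing a worked, step-by-step example, plus a helper to pull the final answer out of a completion. The prompt text, function names, and the sample completion below are illustrative assumptions, not from any specific paper; the model call itself is left abstract.

```python
# Minimal sketch of chain-of-thought (CoT) prompting for a generic
# text-completion LLM. The few-shot example and helper names are
# hypothetical, chosen only for illustration.

FEW_SHOT = """\
Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. How many balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.
"""

def build_cot_prompt(question: str) -> str:
    """Prepend a worked example so the model imitates step-by-step reasoning."""
    return f"{FEW_SHOT}\nQ: {question}\nA:"

def extract_answer(completion: str) -> str:
    """Extract the final answer from a completion ending '... The answer is X.'"""
    tail = completion.rsplit("The answer is", 1)[-1]
    return tail.strip().rstrip(".")

prompt = build_cot_prompt("A bakery sells 4 boxes of 6 muffins. How many muffins in total?")
# A hypothetical model completion, shown here instead of a real API call:
completion = "4 boxes of 6 muffins is 4 * 6 = 24. The answer is 24."
print(extract_answer(completion))  # prints "24"
```

The point of the pattern is that the worked example steers the model toward emitting intermediate reasoning before its answer, which empirically improves multi-step accuracy; the answer-extraction step then makes the output machine-checkable.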
552 papers
Papers - Page 26
June 27 / 26 / 23 / 20 / 19, 2024

- Neuro-symbolic Training for Reasoning over Spatial Language
- Can LLMs Reason in the Wild with Programs?
- BEACON: Balancing Convenience and Nutrition in Meals With Long-Term Group Recommendations and Reasoning on Multimodal Recipes
- Bridging Law and Data: Augmenting Reasoning via a Semi-Structured Dataset with IRAC methodology
June 18, 2024

- Benchmarking Multi-Image Understanding in Vision and Language Models: Perception, Knowledge, Reasoning, and Multi-Hop Reasoning
- DART-Math: Difficulty-Aware Rejection Tuning for Mathematical Problem-Solving
- Retrieval Meets Reasoning: Dynamic In-Context Editing for Long-Text Understanding
- Discussion Graph Semantics of First-Order Logic with Equality for Reasoning about Discussion and Argumentation