Abstraction Ability
Abstraction ability, the capacity to identify underlying principles from concrete examples and apply them to new situations, is a key research area in artificial intelligence, particularly for large language models (LLMs). Current work develops benchmarks and training methods to improve LLMs' abstraction capabilities, often through instruction tuning and techniques such as concept bottleneck models that extract and use abstract concepts from data. These efforts aim to make AI systems more generalizable and robust across diverse tasks, moving toward more human-like reasoning in a wide range of applications.
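To make the concept-bottleneck idea concrete, here is a minimal sketch in plain Python. All concept names, features, and decision rules are hypothetical illustrations, not drawn from any specific paper: the defining property shown is that every prediction must flow through named, human-interpretable concept scores rather than raw inputs.

```python
def concept_scores(features):
    """Map raw features to interpretable concept activations.

    The concepts below ("has_wings", "lays_eggs") are hypothetical,
    chosen only to illustrate the bottleneck structure.
    """
    return {
        "has_wings": 1.0 if features.get("wing_span", 0) > 0 else 0.0,
        "lays_eggs": 1.0 if features.get("egg_count", 0) > 0 else 0.0,
    }

def predict(features):
    """Predict a label using ONLY the concept bottleneck.

    The raw features are never consulted directly here, so the
    decision can be inspected and debugged at the concept level.
    """
    c = concept_scores(features)
    return "bird" if c["has_wings"] and c["lays_eggs"] else "other"
```

Because the label depends only on the concept layer, a user can intervene on an individual concept (for example, correcting a mistaken `has_wings` score) and immediately see how the prediction changes, which is the interpretability benefit these models are designed around.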