Core Knowledge
Core knowledge research focuses on identifying and transferring the essential information within complex systems, with the goal of improving efficiency and performance across applications. Current efforts leverage large language models and other deep learning architectures to extract and transfer this core knowledge, often using techniques such as knowledge distillation and instruction tuning to enhance smaller models or to improve the reasoning capabilities of larger ones. This research matters because efficient transfer of core knowledge can yield more effective and resource-efficient AI systems, with impact ranging from image processing and natural language understanding to optimization problems in engineering and scheduling. Developing robust methods for identifying and transferring core knowledge is therefore crucial for advancing the capabilities and trustworthiness of AI.
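To make one of these transfer techniques concrete, here is a minimal knowledge-distillation sketch in PyTorch. It is illustrative only: the TeacherNet/StudentNet classes, layer sizes, temperature, and loss weighting are assumptions for the example, not details drawn from the papers listed below.

```python
# Minimal knowledge-distillation sketch (illustrative; model sizes,
# temperature, and loss weighting are assumed, not from the listed papers).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TeacherNet(nn.Module):
    """Larger model whose soft predictions carry the 'core knowledge'."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(784, 512), nn.ReLU(), nn.Linear(512, 10))
    def forward(self, x):
        return self.net(x)

class StudentNet(nn.Module):
    """Smaller model trained to match the teacher's output distribution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(784, 64), nn.ReLU(), nn.Linear(64, 10))
    def forward(self, x):
        return self.net(x)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: KL divergence between temperature-scaled distributions,
    # with the standard T^2 scaling so gradients stay comparable across T.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

teacher, student = TeacherNet().eval(), StudentNet()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

# One illustrative training step on random data standing in for a real batch.
x = torch.randn(32, 784)
labels = torch.randint(0, 10, (32,))
with torch.no_grad():
    teacher_logits = teacher(x)
loss = distillation_loss(student(x), teacher_logits, labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The temperature T smooths the teacher's output distribution so the student can learn from the relative probabilities the teacher assigns to incorrect classes, which is where much of the transferred "dark knowledge" resides.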
Papers
Towards KAB2S: Learning Key Knowledge from Single-Objective Problems to Multi-Objective Problem
Wendi Xu, Xianpeng Wang, Qingxin Guo, Xiangman Song, Ren Zhao, Guodong Zhao, Yang Yang, Te Xu, Dakuo He
ETO Meets Scheduling: Learning Key Knowledge from Single-Objective Problems to Multi-Objective Problem
Wendi Xu, Xianpeng Wang