Knowledge Injection
Knowledge injection aims to enhance large language models (LLMs) by incorporating external knowledge, improving their performance on specific tasks and reducing hallucinations. Current research focuses on optimizing knowledge-injection strategies, including selective injection into specific LLM layers (e.g., prioritizing shallow layers), retrieval-augmented generation (RAG) and knowledge graphs, and various fine-tuning and prompt-engineering techniques. The field is significant because it addresses LLMs' limitations in domain-specific knowledge and factual accuracy, improving performance in applications ranging from medical diagnosis to financial analysis.
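As a concrete illustration of the retrieval-augmented strategy mentioned above, the sketch below injects retrieved passages into a prompt before generation. It is a minimal, self-contained toy: the document store, the keyword-overlap retriever, and the prompt template are illustrative assumptions, not the method of any paper listed here; a real system would use a vector index and an actual LLM call.

```python
# Minimal sketch of retrieval-augmented knowledge injection (illustrative only).
from typing import List, Tuple

# Toy external knowledge base; in practice this would be a vector index over
# domain documents (e.g., clinical guidelines or financial filings).
KNOWLEDGE_BASE = [
    "Metformin is a first-line medication for type 2 diabetes.",
    "EBITDA measures earnings before interest, taxes, depreciation, and amortization.",
    "Knowledge graphs encode entities and their relations as typed edges.",
]

def retrieve(query: str, k: int = 2) -> List[str]:
    """Rank documents by word overlap with the query (stand-in for a real retriever)."""
    q_tokens = set(query.lower().split())
    scored: List[Tuple[int, str]] = [
        (len(q_tokens & set(doc.lower().split())), doc) for doc in KNOWLEDGE_BASE
    ]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:k] if score > 0]

def build_prompt(query: str) -> str:
    """Inject retrieved passages into the prompt so the model grounds its answer on them."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query))
    return (
        "Answer using only the context below.\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    prompt = build_prompt("What is the first-line medication for type 2 diabetes?")
    print(prompt)  # This augmented prompt would then be passed to the LLM of your choice.
```

Layer-selective injection and fine-tuning-based approaches modify the model itself rather than the prompt, so they are not captured by this sketch; the example only shows the inference-time (RAG-style) end of the spectrum.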
Papers
FEDKIM: Adaptive Federated Knowledge Injection into Medical Foundation Models
Xiaochen Wang, Jiaqi Wang, Houping Xiao, Jinghui Chen, Fenglong Ma
FEDMEKI: A Benchmark for Scaling Medical Foundation Models via Federated Knowledge Injection
Jiaqi Wang, Xiaochen Wang, Lingjuan Lyu, Jinghui Chen, Fenglong Ma