Prior Knowledge
Prior knowledge, meaning pre-existing information and learned experience, is crucial for efficient learning and decision-making in fields ranging from robotics to machine learning. Current research focuses on integrating prior knowledge into models through diverse methods: incorporating learned priors into variational autoencoders, leveraging large language models to provide contextual information, and designing architectures that explicitly encode domain-specific knowledge (e.g., anatomical constraints in 3D hand reconstruction). This research is significant because effectively utilizing prior knowledge improves model performance, reduces data requirements, enhances robustness to noise and domain shifts, and yields more explainable and reliable AI systems across numerous applications.
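To make one of these methods concrete, here is a minimal NumPy sketch of the idea behind learned priors in variational autoencoders: the fixed N(0, I) prior in the KL term of the ELBO is replaced by a prior whose mean and variance are trainable parameters. All numbers below are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """Closed-form KL(q || p) between diagonal Gaussians, summed over dimensions."""
    var_q, var_p = np.exp(logvar_q), np.exp(logvar_p)
    return 0.5 * np.sum(
        logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0
    )

# Encoder output q(z|x) for one example (illustrative values).
mu_q, logvar_q = np.array([0.5, -0.2]), np.array([-1.0, -1.0])

# Standard VAE: prior fixed at N(0, I), i.e. zero mean and zero log-variance.
kl_standard = kl_diag_gaussians(mu_q, logvar_q, np.zeros(2), np.zeros(2))

# Learned prior: mu_p and logvar_p are trainable parameters fitted to the
# data, so the KL penalty pulls posteriors toward a data-informed region.
mu_p, logvar_p = np.array([0.4, -0.1]), np.array([-0.8, -0.9])
kl_learned = kl_diag_gaussians(mu_q, logvar_q, mu_p, logvar_p)
```

A prior that sits closer to the aggregate posterior incurs a smaller KL penalty here, which is the intuition behind fitting the prior's parameters jointly with the encoder rather than fixing them in advance.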
Papers
Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning
Fuseini Mumuni, Alhassan Mumuni
Bayesian Diffusion Models for 3D Shape Reconstruction
Haiyang Xu, Yu Lei, Zeyuan Chen, Xiang Zhang, Yue Zhao, Yilin Wang, Zhuowen Tu
Advancing Generalizable Remote Physiological Measurement through the Integration of Explicit and Implicit Prior Knowledge
Yuting Zhang, Hao Lu, Xin Liu, Yingcong Chen, Kaishun Wu
Prompting Explicit and Implicit Knowledge for Multi-hop Question Answering Based on Human Reading Process
Guangming Huang, Yunfei Long, Cunjin Luo, Jiaxing Shen, Xia Sun
RSAM-Seg: A SAM-based Approach with Prior Knowledge Integration for Remote Sensing Image Semantic Segmentation
Jie Zhang, Xubing Yang, Rui Jiang, Wei Shao, Li Zhang