Human Prior

Human priors, the knowledge and inductive biases people bring to a task, are increasingly recognized as crucial for improving the efficiency and performance of machine learning models, particularly in complex tasks involving human-centric data such as images and language. Current research focuses on integrating these priors into a range of model architectures, from large language models (LLMs) and diffusion models for image generation to robotic control systems, often through techniques such as prior-guided Bayesian optimization and the incorporation of human feedback. This work aims to create more efficient, human-like, and robust AI systems by leveraging pre-existing human understanding, with impact on fields ranging from computer vision and natural language processing to human-robot interaction.
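
One concrete way a human prior can enter Bayesian optimization is by reweighting the acquisition function with a hand-specified belief over where the optimum lies, letting observed data gradually override that belief. The sketch below is a minimal, illustrative example of this idea (not the method of any particular paper listed here); the objective, prior shape, and decay parameter `beta` are all assumptions chosen for the demo.

```python
# Minimal sketch: expected improvement weighted by a human prior over the optimum,
# with the prior's influence decaying as observations accumulate.
# All function names, parameters, and the toy objective are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def rbf_kernel(a, b, length=0.2, var=1.0):
    d = a[:, None] - b[None, :]
    return var * np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-4):
    """Posterior mean and std of a zero-mean GP with an RBF kernel."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf_kernel(x_train, x_query)
    K_ss = rbf_kernel(x_query, x_query)
    K_inv = np.linalg.inv(K)
    mu = K_s.T @ K_inv @ y_train
    cov = K_ss - K_s.T @ K_inv @ K_s
    return mu, np.sqrt(np.clip(np.diag(cov), 1e-12, None))

def expected_improvement(mu, sigma, best_y):
    """EI for minimization: expected amount by which a query beats the best value so far."""
    z = (best_y - mu) / sigma
    return (best_y - mu) * norm.cdf(z) + sigma * norm.pdf(z)

def human_prior(x, belief_center=0.7, belief_width=0.15):
    """Unnormalized density encoding a human belief that the optimum is near belief_center."""
    return np.exp(-0.5 * ((x - belief_center) / belief_width) ** 2)

def objective(x):
    # Unknown to the optimizer; used here only to simulate evaluations.
    return np.sin(10 * x) + (x - 0.7) ** 2

rng = np.random.default_rng(0)
x_grid = np.linspace(0.0, 1.0, 500)
x_train = rng.uniform(0.0, 1.0, 3)
y_train = objective(x_train)

beta = 5.0  # assumed hyperparameter: how slowly the human prior's influence fades
for step in range(10):
    mu, sigma = gp_posterior(x_train, y_train, x_grid)
    ei = expected_improvement(mu, sigma, y_train.min())
    # Prior-weighted acquisition: raise the prior to beta / (#observations) so that
    # the data eventually dominates if the human belief turns out to be wrong.
    acq = ei * human_prior(x_grid) ** (beta / len(x_train))
    x_next = x_grid[np.argmax(acq)]
    x_train = np.append(x_train, x_next)
    y_train = np.append(y_train, objective(x_next))

print(f"best x ~ {x_train[np.argmin(y_train)]:.3f}, best value ~ {y_train.min():.3f}")
```

A misleading prior only slows the search rather than breaking it, since the exponent `beta / len(x_train)` shrinks toward zero and the acquisition reverts to plain expected improvement as evaluations accumulate.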

Papers