Human Guidance
Human guidance in machine learning aims to improve model performance and reliability by incorporating human expertise into stages of the learning process ranging from training data augmentation to inference-time control. Current research explores diverse guidance strategies, including incorporating human feedback into diffusion models, leveraging pretrained encoders for feature extraction and guidance, and designing teacher-student architectures for knowledge transfer. These advances enhance model accuracy, efficiency, and interpretability in applications such as medical image analysis, text-to-image generation, and robotics.
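As a concrete illustration of one strategy mentioned above, the sketch below shows a minimal teacher-student (knowledge distillation) training step in PyTorch. The network sizes, temperature, and loss weighting are arbitrary choices for illustration and are not drawn from any of the papers listed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical tiny teacher and student networks, for illustration only.
teacher = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 10))
student = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
teacher.eval()  # the teacher is frozen; only the student is trained

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's output distribution
alpha = 0.5        # balance between distillation loss and hard-label loss

def distillation_step(x, y):
    """One training step: match the teacher's soft targets and the true labels."""
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)

    # KL divergence between softened teacher and student distributions
    soft_loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    hard_loss = F.cross_entropy(student_logits, y)
    loss = alpha * soft_loss + (1 - alpha) * hard_loss

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch to show usage
x = torch.randn(8, 16)
y = torch.randint(0, 10, (8,))
print(distillation_step(x, y))
```

The temperature-scaled KL term transfers the teacher's "dark knowledge" (relative probabilities over wrong classes) to the student, while the cross-entropy term keeps the student anchored to the ground-truth labels.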
Papers
Opinion-Guided Reinforcement Learning
Kyanna Dagenais, Istvan David
GTA: Generative Trajectory Augmentation with Guidance for Offline Reinforcement Learning
Jaewoo Lee, Sujin Yun, Taeyoung Yun, Jinkyoo Park
PivotMesh: Generic 3D Mesh Generation via Pivot Vertices Guidance
Haohan Weng, Yikai Wang, Tong Zhang, C. L. Philip Chen, Jun Zhu