Training Framework
Training frameworks encompass the design and implementation of efficient, effective methods for training machine learning models. Current research emphasizes optimizing training for diverse scenarios, including resource-constrained environments (federated learning, knowledge distillation), privacy-preserving settings (secure split learning), and heterogeneous data (multimodal recommendation, cross-task learning). These advances aim to improve model accuracy, reduce training time and cost, and address critical concerns such as data privacy and security, with impact across fields ranging from healthcare (robotic surgery) to computer vision (NeRF) and natural language processing.
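To make one of the named techniques concrete, the following is a minimal knowledge-distillation sketch, assuming a PyTorch setup; the function name, temperature, and weighting are illustrative choices, not taken from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft-target term: KL divergence between temperature-scaled
    # student and teacher distributions (scaled by T^2, as is conventional).
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard-target term: ordinary cross-entropy against ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Illustrative usage with random tensors standing in for real model outputs.
student_logits = torch.randn(8, 10)
teacher_logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
```

The mixed loss lets a compact student model learn both from ground-truth labels and from the teacher's softened output distribution, which is what makes distillation attractive in resource-constrained training.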
Papers
Coaching a Robotic Sonographer: Learning Robotic Ultrasound with Sparse Expert's Feedback
Deepak Raina, Mythra V. Balakuntala, Byung Wook Kim, Juan Wachs, Richard Voyles
$S^2$NeRF: Privacy-preserving Training Framework for NeRF
Bokang Zhang, Yanglin Zhang, Zhikun Zhang, Jinglan Yang, Lingying Huang, Junfeng Wu