Centralized Training
Centralized training is a machine learning paradigm that leverages centralized data processing during model training while keeping execution decentralized, preserving data privacy and scalability. Current research addresses challenges such as data heterogeneity and communication efficiency in federated learning settings, often through techniques like model aggregation with client clustering or diffusion models for data synthesis. The approach enables collaborative model training across distributed datasets while mitigating privacy concerns, with applications ranging from medical image analysis and code generation to multi-agent reinforcement learning and network anomaly detection.
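The model aggregation mentioned above can be sketched as a weighted average of client parameters, in the style of federated averaging. This is a minimal illustration, not the method of any paper listed below; the function and variable names (aggregate, client_params, client_sizes) are hypothetical.

```python
import numpy as np

def aggregate(client_params, client_sizes):
    """Weighted average of client parameter vectors, weighted by
    each client's local dataset size (a FedAvg-style sketch)."""
    sizes = np.asarray(client_sizes, dtype=float)
    weights = sizes / sizes.sum()      # normalize weights to sum to 1
    stacked = np.stack(client_params)  # shape: (n_clients, n_params)
    return weights @ stacked           # weighted sum across clients

# Example: three clients with unequal data volumes.
params = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 20, 70]
global_params = aggregate(params, sizes)  # clients with more data count more
```

In practice, a central server would broadcast `global_params` back to clients for the next local training round; client clustering would group similar clients and run an aggregation of this kind per cluster.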
Papers
Optimally Solving Simultaneous-Move Dec-POMDPs: The Sequential Central Planning Approach
Johan Peralez, Aurélien Delage, Jacopo Castellini, Rafael F. Cunha, Jilles S. Dibangoye
Improving the Classification Effect of Clinical Images of Diseases for Multi-Source Privacy Protection
Tian Bowen, Xu Zhengyang, Yin Zhihao, Wang Jingying, Yue Yutao