Heterogeneous Setting
Heterogeneous settings in machine learning concern the challenges arising from diverse data distributions and resource limitations across distributed clients or devices. Current research focuses on developing robust federated learning algorithms, often incorporating neural tangent kernels, personalized models, and techniques such as asynchronous SGD or federated averaging in random subspaces, to improve model accuracy and efficiency despite non-i.i.d. data. These advances are crucial for enabling privacy-preserving collaborative learning in applications such as medical imaging, IoT networks, and recommendation systems, where data is inherently decentralized and varied. The ultimate goal is to build effective and efficient machine learning models that leverage the power of distributed data without compromising privacy or performance.
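To make the federated averaging idea mentioned above concrete, here is a minimal sketch of the FedAvg loop on synthetic non-i.i.d. clients: each client runs a few local gradient steps on its own data, and the server averages the resulting models weighted by client size. The function names, toy linear-regression objective, and data are purely illustrative assumptions, not taken from any of the papers listed below.

```python
import numpy as np

def local_sgd(w, X, y, lr=0.05, epochs=5):
    # One client's local update: full-batch gradient steps on a
    # least-squares objective (1/2n) * ||X w - y||^2.
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg(clients, rounds=50, dim=2):
    # Server loop: broadcast the global model, collect local updates,
    # and average them weighted by each client's dataset size.
    w = np.zeros(dim)
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    for _ in range(rounds):
        updates = np.stack([local_sgd(w, X, y) for X, y in clients])
        w = (sizes / sizes.sum()) @ updates
    return w

# Two clients with different (non-i.i.d.) input distributions but a
# shared underlying linear model y = x @ [1.0, -2.0].
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for shift in (0.0, 3.0):  # each client samples inputs around a different mean
    X = rng.normal(shift, 1.0, size=(50, 2))
    clients.append((X, X @ true_w))

w = fedavg(clients)
```

Because the clients here share the same noiseless optimum, the averaged model converges to `true_w`; with genuinely conflicting local objectives, plain FedAvg can drift, which is what motivates the personalized and kernel-based variants mentioned above.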
Papers
Heterogeneous Image-based Classification Using Distributional Data Analysis
Alec Reinhardt, Newsha Nikzad, Raven J. Hollis, Galia Jacobson, Millicent A. Roach, Mohamed Badawy, Peter Chul Park, Laura Beretta, Prasun K Jalal, David T. Fuentes, Eugene J. Koay, Suprateek Kundu
Advancing Graph Neural Networks with HL-HGAT: A Hodge-Laplacian and Attention Mechanism Approach for Heterogeneous Graph-Structured Data
Jinghan Huang, Qiufeng Chen, Yijun Bian, Pengli Zhu, Nanguang Chen, Moo K. Chung, Anqi Qiu