Heterogeneous Setting
Heterogeneous settings in machine learning address the challenges arising from diverse data distributions and resource limitations across distributed clients or devices. Current research focuses on developing robust federated learning algorithms, often incorporating neural tangent kernels, personalized models, and techniques like asynchronous SGD or federated averaging in random subspaces, to improve model accuracy and efficiency despite non-i.i.d. client data. These advances are crucial for enabling privacy-preserving collaborative learning in applications such as medical imaging, IoT networks, and recommendation systems, where data is inherently decentralized and varied. The ultimate goal is to build effective and efficient machine learning models that leverage the power of distributed data without compromising privacy or performance.
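To make the federated averaging (FedAvg) idea mentioned above concrete, here is a minimal NumPy sketch: each client holds data drawn from a shifted distribution (simulating heterogeneity), runs a few local gradient steps on a shared linear model, and a server aggregates the local weights by dataset size. All client distributions, hyperparameters, and the linear-regression task are illustrative assumptions, not taken from any specific paper listed here.

```python
# Minimal FedAvg sketch under heterogeneous (non-i.i.d.) client data.
# The model, data shifts, and hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth linear model

def make_client(shift, n=50):
    # Each client's features are centered at a different location,
    # simulating distribution shift across devices.
    X = rng.normal(loc=shift, scale=1.0, size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

clients = [make_client(s) for s in (-2.0, 0.0, 3.0)]

def local_update(w, X, y, lr=0.01, epochs=5):
    # A few epochs of full-batch gradient descent on the local
    # least-squares loss, starting from the current global model.
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(50):  # communication rounds
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    sizes = [len(y) for _, y in clients]
    # Server step: average local models, weighted by client data size.
    w_global = np.average(local_ws, axis=0, weights=sizes)

print(w_global)  # approaches true_w despite the per-client shifts
```

Because every client's data is generated from the same underlying model, the weighted average converges near `true_w`; with genuinely conflicting local optima, multiple local steps introduce client drift, which is one of the issues the algorithms surveyed above try to mitigate.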
Papers
Federated Learning under Periodic Client Participation and Heterogeneous Data: A New Communication-Efficient Algorithm and Analysis
Michael Crawshaw, Mingrui Liu
EMOS: Embodiment-aware Heterogeneous Multi-robot Operating System with LLM Agents
Junting Chen, Checheng Yu, Xunzhe Zhou, Tianqi Xu, Yao Mu, Mengkang Hu, Wenqi Shao, Yikai Wang, Guohao Li, Lin Shao