Learning
Abstract
Learning, in the context of these papers, encompasses a broad range of research focused on improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data like tabular formats, and on leveraging low-rank adaptations for efficient fine-tuning of large language and other foundation models. These advancements are significant because they address key challenges in data efficiency, computational cost, and the generalization capabilities of machine learning systems, impacting fields ranging from personalized medicine to autonomous robotics.
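Since low-rank adaptation recurs across these papers as the main route to efficient fine-tuning, a minimal sketch may help fix the idea. The snippet below is a hypothetical NumPy illustration (not taken from any listed paper): the frozen weight matrix `W` is left untouched, and only two small factors `B` and `A` of rank `r` are trained, so the adapter adds `B @ A` to the forward pass while training a small fraction of the original parameter count. All names and sizes here are illustrative assumptions.

```python
import numpy as np

# Hypothetical LoRA-style low-rank adaptation sketch: instead of updating a
# full d_out x d_in weight matrix W, train only two small factors
# B (d_out x r) and A (r x d_in) with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_in, d_out, r = 512, 512, 8

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init
                                            # so the adapter starts as a no-op

def adapted_forward(x):
    """Forward pass with the low-rank update W + B @ A applied on the fly."""
    return W @ x + B @ (A @ x)

# Parameter savings: full fine-tuning vs. training only the adapters.
full_params = W.size            # 512 * 512 = 262144
lora_params = A.size + B.size   # 2 * 8 * 512 = 8192, ~3% of full_params
print(full_params, lora_params)
```

Because `B` is initialized to zero, the adapted model initially reproduces the pretrained model exactly; gradient updates then flow only through `A` and `B`, which is where the data- and compute-efficiency gains the abstract mentions come from.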
Papers
A Similarity Measure Between Functions with Applications to Statistical Learning and Optimization
Chengpiao Huang, Kaizheng Wang
Privacy-Preserving Model and Preprocessing Verification for Machine Learning
Wenbiao Li, Anisa Halimi, Xiaoqian Jiang, Jaideep Vaidya, Erman Ayday
Dynamic Pricing in High-Speed Railways Using Multi-Agent Reinforcement Learning
Enrique Adrian Villarrubia-Martin, Luis Rodriguez-Benitez, David Muñoz-Valero, Giovanni Montana, Luis Jimenez-Linares
SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning
Phillip Rieger, Alessandro Pegoraro, Kavita Kumari, Tigist Abera, Jonathan Knauer, Ahmad-Reza Sadeghi
Enhancing Path Planning Performance through Image Representation Learning of High-Dimensional Configuration Spaces
Jorge Ocampo Jimenez, Wael Suleiman
Language-Inspired Relation Transfer for Few-shot Class-Incremental Learning
Yifan Zhao, Jia Li, Zeyin Song, Yonghong Tian
Annealing Machine-assisted Learning of Graph Neural Network for Combinatorial Optimization
Pablo Loyola, Kento Hasegawa, Andres Hoyos-Idobro, Kazuo Ono, Toyotaro Suzumura, Yu Hirate, Masanao Yamaoka
TAMER: A Test-Time Adaptive MoE-Driven Framework for EHR Representation Learning
Yinghao Zhu, Xiaochen Zheng, Ahmed Allam, Michael Krauthammer
Knowledge Transfer in Model-Based Reinforcement Learning Agents for Efficient Multi-Task Learning
Dmytro Kuzmenko, Nadiya Shvai
Distributed Learning and Inference Systems: A Networking Perspective
Hesham G. Moussa, Arashmid Akhavain, S. Maryam Hosseini, Bill McCormick
Optimizing Estonian TV Subtitles with Semi-supervised Learning and LLMs
Artem Fedorchenko, Tanel Alumäe
Discovering Hidden Visual Concepts Beyond Linguistic Input in Infant Learning
Xueyi Ke, Satoshi Tsutsui, Yayun Zhang, Bihan Wen
TAPFed: Threshold Secure Aggregation for Privacy-Preserving Federated Learning
Runhua Xu, Bo Li, Chao Li, James B.D. Joshi, Shuai Ma, Jianxin Li
Continuous Knowledge-Preserving Decomposition for Few-Shot Continual Learning
Xiaojie Li, Yibo Yang, Jianlong Wu, David A. Clifton, Yue Yu, Bernard Ghanem, Min Zhang
Deep Transfer $Q$-Learning for Offline Non-Stationary Reinforcement Learning
Jinhang Chai, Elynn Chen, Jianqing Fan
Towards a Problem-Oriented Domain Adaptation Framework for Machine Learning
Philipp Spitzer, Dominik Martin, Laurin Eichberger, Niklas Kühl
Lossless Privacy-Preserving Aggregation for Decentralized Federated Learning
Xiaoye Miao, Bin Li, Yangyang Wu, Meng Xi, Xinkui Zhao, Jianwei Yin
VerifBFL: Leveraging zk-SNARKs for A Verifiable Blockchained Federated Learning
Ahmed Ayoub Bellachia, Mouhamed Amine Bouchiha, Yacine Ghamri-Doudane, Mourad Rabah
TADFormer: Task-Adaptive Dynamic Transformer for Efficient Multi-Task Learning
Seungmin Baek, Soyul Lee, Hayeon Jo, Hyesong Choi, Dongbo Min