Learning Abstract
Learning, in the context of these papers, encompasses a broad range of research focused on improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data like tabular formats, and on leveraging low-rank adaptations for efficient fine-tuning of large language and other foundation models. These advancements are significant because they address key challenges in data efficiency, computational cost, and the generalization capabilities of machine learning systems, impacting fields ranging from personalized medicine to autonomous robotics.
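To make the low-rank adaptation idea mentioned above concrete, the following is a minimal, illustrative sketch in PyTorch of a LoRA-style wrapper around a linear layer. It is not taken from any specific paper listed here; the names (LoRALinear, rank, alpha) and hyperparameter values are assumptions chosen for demonstration.

    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        """Frozen linear layer plus a trainable low-rank update: W x + (alpha/r) * B A x."""
        def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = base
            for p in self.base.parameters():  # freeze the pretrained weights
                p.requires_grad = False
            in_f, out_f = base.in_features, base.out_features
            self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # down-projection
            self.B = nn.Parameter(torch.zeros(out_f, rank))        # up-projection, zero-initialized
            self.scale = alpha / rank

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Frozen path plus the scaled low-rank correction.
            return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())

    # Usage: only A and B receive gradients, so fine-tuning updates far fewer parameters
    # than retraining the full weight matrix.
    layer = LoRALinear(nn.Linear(768, 768), rank=8)
    out = layer(torch.randn(4, 768))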
Papers
A Variance Minimization Approach to Temporal-Difference Learning
Xingguo Chen, Yu Gong, Shangdong Yang, Wenhao Wang
Layer-Wise Feature Metric of Semantic-Pixel Matching for Few-Shot Learning
Hao Tang, Junhao Lu, Guoheng Huang, Ming Li, Xuhang Chen, Guo Zhong, Zhengguang Tan, Zinuo Li
Activation Map Compression through Tensor Decomposition for Deep Learning
Le-Trung Nguyen, Aël Quélennec, Enzo Tartaglione, Samuel Tardieu, Van-Tam Nguyen
Predicting Stroke through Retinal Graphs and Multimodal Self-supervised Learning
Yuqing Huang, Bastian Wittmann, Olga Demler, Bjoern Menze, Neda Davoudi
Towards Active Flow Control Strategies Through Deep Reinforcement Learning
Ricard Montalà, Bernat Font, Pol Suárez, Jean Rabault, Oriol Lehmkuhl, Ivette Rodriguez
Bridging the Gap between Learning and Inference for Diffusion-Based Molecule Generation
Peidong Liu, Wenbo Zhang, Xue Zhe, Jiancheng Lv, Xianggen Liu
Learning in Budgeted Auctions with Spacing Objectives
Giannis Fikioris, Robert Kleinberg, Yoav Kolumbus, Raunak Kumar, Yishay Mansour, Éva Tardos
Learning from Demonstration with Hierarchical Policy Abstractions Toward High-Performance and Courteous Autonomous Racing
Chanyoung Chung, Hyunki Seong, David Hyunchul Shim
Boosting the Efficiency of Metaheuristics Through Opposition-Based Learning in Optimum Locating of Control Systems in Tall Buildings
Salar Farahmand-Tabar, Sina Shirgir
Constrained Latent Action Policies for Model-Based Offline Reinforcement Learning
Marvin Alles, Philip Becker-Ehmck, Patrick van der Smagt, Maximilian Karl
Hypercube Policy Regularization Framework for Offline Reinforcement Learning
Yi Shen, Hanyan Huang
FedDP: Privacy-preserving method based on federated learning for histopathology image segmentation
Liangrui Pan, Mao Huang, Lian Wang, Pinle Qin, Shaoliang Peng
Quantum Diffusion Models for Few-Shot Learning
Ruhan Wang, Ye Wang, Jing Liu, Toshiaki Koike-Akino
Calibrating for the Future: Enhancing Calorimeter Longevity with Deep Learning
S. Ali, A.S. Ryzhikov, D.A. Derkach, F.D. Ratnikov, V.O. Bocharnikov
Overcoming label shift in targeted federated learning
Edvin Listo Zec, Adam Breitholtz, Fredrik D. Johansson
UnityGraph: Unified Learning of Spatio-temporal features for Multi-person Motion Prediction
Kehua Qu, Rui Ding, Jin Tang
Imagined Potential Games: A Framework for Simulating, Learning and Evaluating Interactive Behaviors
Lingfeng Sun, Yixiao Wang, Pin-Yun Hung, Changhao Wang, Xiang Zhang, Zhuo Xu, Masayoshi Tomizuka
SEGMN: A Structure-Enhanced Graph Matching Network for Graph Similarity Learning
Wenjun Wang, Jiacheng Lu, Kejia Chen, Zheng Liu, Shilong Sang