Learning
Abstract
Learning, in the context of these papers, encompasses a broad range of research focused on improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data such as tabular formats, and on leveraging low-rank adaptations for efficient fine-tuning of large language models and other foundation models. These advances matter because they address key challenges in data efficiency, computational cost, and generalization, with impact in fields ranging from personalized medicine to autonomous robotics.
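To make the low-rank adaptation idea mentioned above concrete, here is a minimal NumPy sketch (an illustrative assumption, not taken from any listed paper): a frozen weight matrix W is augmented with a trainable low-rank product B @ A, so only r*(d_in + d_out) parameters are tuned instead of d_in*d_out.

```python
import numpy as np

# Minimal low-rank adaptation (LoRA-style) sketch.
# The pretrained weight W stays frozen; only A and B are trained.
rng = np.random.default_rng(0)
d_out, d_in, r = 64, 64, 4  # rank r is much smaller than the layer dimensions

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

x = rng.standard_normal(d_in)
# Adapted forward pass: y = (W + B @ A) @ x.
# With B initialized to zero, the adapted model starts identical to the base model.
y = W @ x + B @ (A @ x)

lora_params = r * (d_in + d_out)  # 512 trainable parameters
full_params = d_in * d_out        # 4096 parameters for full fine-tuning
print(lora_params, full_params)
```

The zero initialization of B is the standard trick that makes fine-tuning start from the pretrained behavior; the parameter counts printed at the end illustrate why this family of methods reduces the cost of adapting large models.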
Papers
Learning Multi-Agent Collaborative Manipulation for Long-Horizon Quadrupedal Pushing
Chuye Hong, Yuming Feng, Yaru Niu, Shiqi Liu, Yuxiang Yang, Wenhao Yu, Tingnan Zhang, Jie Tan, Ding Zhao
Multi-Objective Algorithms for Learning Open-Ended Robotic Problems
Martin Robert, Simon Brodeur, Francois Ferland
Computable Model-Independent Bounds for Adversarial Quantum Machine Learning
Bacui Li, Tansu Alpcan, Chandra Thapa, Udaya Parampalli
Learning from Feedback: Semantic Enhancement for Object SLAM Using Foundation Models
Jungseok Hong, Ran Choi, John J. Leonard
Using Diffusion Models as Generative Replay in Continual Federated Learning -- What will Happen?
Yongsheng Mei, Liangqi Yuan, Dong-Jun Han, Kevin S. Chan, Christopher G. Brinton, Tian Lan
Barriers to Complexity-Theoretic Proofs that Achieving AGI Using Machine Learning is Intractable
Michael Guerzhoy
SamRobNODDI: Q-Space Sampling-Augmented Continuous Representation Learning for Robust and Generalized NODDI
Taohui Xiao, Jian Cheng, Wenxin Fan, Enqing Dong, Hairong Zheng, Shanshan Wang
A Variance Minimization Approach to Temporal-Difference Learning
Xingguo Chen, Yu Gong, Shangdong Yang, Wenhao Wang
Layer-Wise Feature Metric of Semantic-Pixel Matching for Few-Shot Learning
Hao Tang, Junhao Lu, Guoheng Huang, Ming Li, Xuhang Chen, Guo Zhong, Zhengguang Tan, Zinuo Li
Activation Map Compression through Tensor Decomposition for Deep Learning
Le-Trung Nguyen, Aël Quélennec, Enzo Tartaglione, Samuel Tardieu, Van-Tam Nguyen
Predicting Stroke through Retinal Graphs and Multimodal Self-supervised Learning
Yuqing Huang, Bastian Wittmann, Olga Demler, Bjoern Menze, Neda Davoudi
Towards Active Flow Control Strategies Through Deep Reinforcement Learning
Ricard Montalà, Bernat Font, Pol Suárez, Jean Rabault, Oriol Lehmkuhl, Ivette Rodriguez
Bridging the Gap between Learning and Inference for Diffusion-Based Molecule Generation
Peidong Liu, Wenbo Zhang, Xue Zhe, Jiancheng Lv, Xianggen Liu
Learning in Budgeted Auctions with Spacing Objectives
Giannis Fikioris, Robert Kleinberg, Yoav Kolumbus, Raunak Kumar, Yishay Mansour, Éva Tardos
Learning from Demonstration with Hierarchical Policy Abstractions Toward High-Performance and Courteous Autonomous Racing
Chanyoung Chung, Hyunki Seong, David Hyunchul Shim
Boosting the Efficiency of Metaheuristics Through Opposition-Based Learning in Optimum Locating of Control Systems in Tall Buildings
Salar Farahmand-Tabar, Sina Shirgir
Constrained Latent Action Policies for Model-Based Offline Reinforcement Learning
Marvin Alles, Philip Becker-Ehmck, Patrick van der Smagt, Maximilian Karl
Hypercube Policy Regularization Framework for Offline Reinforcement Learning
Yi Shen, Hanyan Huang