Learning
Abstract
Learning, in the context of these papers, encompasses a broad range of research focused on improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data like tabular formats, and on leveraging low-rank adaptations for efficient fine-tuning of large language and other foundation models. These advancements are significant because they address key challenges in data efficiency, computational cost, and the generalization capabilities of machine learning systems, impacting fields ranging from personalized medicine to autonomous robotics.
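The low-rank adaptation idea mentioned above can be summarized concisely: instead of updating a full pretrained weight matrix, a small trainable low-rank update is added alongside the frozen weights. The following is a minimal, illustrative PyTorch sketch of that pattern; the class name, rank, and scaling hyperparameters are assumptions for demonstration, not code from any of the listed papers.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Linear layer with a trainable low-rank update added to a frozen base
    weight (a minimal LoRA-style sketch; names and defaults are illustrative)."""

    def __init__(self, in_features, out_features, rank=8, alpha=16.0):
        super().__init__()
        # Frozen pretrained projection: only the low-rank factors are trained.
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad_(False)
        # Low-rank factors: A projects down to `rank`, B projects back up.
        self.lora_a = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Output = frozen base path + scaled low-rank correction.
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Usage sketch: swap such layers into a frozen model and train only the
# low-rank factors, which greatly reduces the number of trainable parameters.
layer = LoRALinear(768, 768, rank=8)
out = layer(torch.randn(4, 768))
print(out.shape)  # torch.Size([4, 768])
```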
Papers
Learning to Rank Patches for Unbiased Image Redundancy Reduction
Yang Luo, Zhineng Chen, Peng Zhou, Zuxuan Wu, Xieping Gao, Yu-Gang Jiang
Learning to Generate Conditional Tri-plane for 3D-aware Expression Controllable Portrait Animation
Taekyung Ki, Dongchan Min, Gyeongsu Chae
Learning to Plan for Language Modeling from Unlabeled Data
Nathan Cornille, Marie-Francine Moens, Florian Mai
A Theory for Length Generalization in Learning to Reason
Changnan Xiao, Bing Liu
Learning with Unreliability: Fast Few-shot Voxel Radiance Fields with Relative Geometric Consistency
Yingjie Xu, Bangzhen Liu, Hao Tang, Bailin Deng, Shengfeng He
Boosting Few-Shot Learning with Disentangled Self-Supervised Learning and Meta-Learning for Medical Image Classification
Eva Pachetti, Sotirios A. Tsaftaris, Sara Colantonio
Sharing the Cost of Success: A Game for Evaluating and Learning Collaborative Multi-Agent Instruction Giving and Following Policies
Philipp Sadler, Sherzod Hakimov, David Schlangen
Learning to Visually Localize Sound Sources from Mixtures without Prior Source Knowledge
Dongjin Kim, Sung Jin Um, Sangmin Lee, Jung Uk Kim
Advancing Extrapolative Predictions of Material Properties through Learning to Learn
Kohei Noda, Araki Wakiuchi, Yoshihiro Hayashi, Ryo Yoshida
Learning To Guide Human Decision Makers With Vision-Language Models
Debodeep Banerjee, Stefano Teso, Burcu Sayin, Andrea Passerini
Learning from Reduced Labels for Long-Tailed Data
Meng Wei, Zhongnian Li, Yong Zhou, Xinzheng Xu
Learning Action-based Representations Using Invariance
Max Rudolph, Caleb Chuck, Kevin Black, Misha Lvovsky, Scott Niekum, Amy Zhang