Learning
Abstract
Learning, in the context of these papers, encompasses a broad range of research focused on improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data like tabular formats, and on leveraging low-rank adaptations for efficient fine-tuning of large language and other foundation models. These advancements are significant because they address key challenges in data efficiency, computational cost, and the generalization capabilities of machine learning systems, impacting fields ranging from personalized medicine to autonomous robotics.
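To make the "low-rank adaptations for efficient fine-tuning" idea concrete, below is a minimal, illustrative sketch of a LoRA-style adapter, assuming PyTorch. It is not drawn from any of the listed papers; the class name `LoRALinear` and the `rank`/`alpha` parameters are hypothetical choices used only to show how a frozen layer can be augmented with a small trainable low-rank update.

```python
# Illustrative sketch (not from any listed paper): a frozen linear layer plus a
# trainable low-rank update, in the spirit of LoRA-style fine-tuning.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Computes base(x) + (x A^T B^T) * scale, training only A and B."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        # Low-rank factors: A maps inputs to a small rank-dim space, B maps back up.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The low-rank path adds far fewer trainable parameters than full fine-tuning.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scale


if __name__ == "__main__":
    layer = LoRALinear(nn.Linear(512, 512), rank=8)
    out = layer(torch.randn(4, 512))
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    print(out.shape, trainable)  # only the 2 * 8 * 512 low-rank parameters are trainable
```

Because `lora_B` is initialized to zeros, the adapted layer starts out identical to the frozen base model, and only the small low-rank factors are updated during fine-tuning, which is where the computational and memory savings come from.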
Papers
Factored Task and Motion Planning with Combined Optimization, Sampling and Learning
Joaquim Ortiz-Haro
Learning From Simplicial Data Based on Random Walks and 1D Convolutions
Florian Frantzen, Michael T. Schaub
Learning to Plan and Generate Text with Citations
Constanza Fierro, Reinald Kim Amplayo, Fantine Huot, Nicola De Cao, Joshua Maynez, Shashi Narayan, Mirella Lapata
Learning from Demonstration Framework for Multi-Robot Systems Using Interaction Keypoints and Soft Actor-Critic Methods
Vishnunandan L. N. Venkatesh, Byung-Cheol Min
Is Meta-training Really Necessary for Molecular Few-Shot Learning?
Philippe Formont, Hugo Jeannin, Pablo Piantanida, Ismail Ben Ayed
Automatic Derivation of an Optimal Task Frame for Learning and Controlling Contact-Rich Tasks
Ali Mousavi Mohammadi, Maxim Vochten, Erwin Aertbeliën, Joris De Schutter
Contextual Embedding Learning to Enhance 2D Networks for Volumetric Image Segmentation
Zhuoyuan Wang, Dong Sun, Xiangyun Zeng, Ruodai Wu, Yi Wang
Learning to Control Camera Exposure via Reinforcement Learning
Kyunghyun Lee, Ukcheol Shin, Byeong-Uk Lee
Learning to Rank Patches for Unbiased Image Redundancy Reduction
Yang Luo, Zhineng Chen, Peng Zhou, Zuxuan Wu, Xieping Gao, Yu-Gang Jiang
Learning to Generate Conditional Tri-plane for 3D-aware Expression Controllable Portrait Animation
Taekyung Ki, Dongchan Min, Gyeongsu Chae
Learning to Plan for Language Modeling from Unlabeled Data
Nathan Cornille, Marie-Francine Moens, Florian Mai
A Theory for Length Generalization in Learning to Reason
Changnan Xiao, Bing Liu