Learning Abstract
Learning, in the context of these papers, encompasses a broad range of research focused on improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data like tabular formats, and on leveraging low-rank adaptations for efficient fine-tuning of large language and other foundation models. These advancements are significant because they address key challenges in data efficiency, computational cost, and the generalization capabilities of machine learning systems, impacting fields ranging from personalized medicine to autonomous robotics.
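To make the low-rank adaptation idea mentioned above concrete, here is a minimal sketch, assuming a PyTorch setting: a frozen pretrained linear layer is augmented with a trainable low-rank update W + (alpha/r) * B A, so only the small matrices A and B are fine-tuned. The class name LoRALinear, the rank, and the scaling choice are illustrative assumptions, not taken from any paper listed below.

# Minimal low-rank adaptation sketch (illustrative only; names and defaults are hypothetical).
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():   # keep pretrained weights frozen
            p.requires_grad = False
        in_f, out_f = base.in_features, base.out_features
        self.A = nn.Parameter(torch.randn(rank, in_f) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_f, rank))        # up-projection, zero-initialized
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = base(x) + scale * x A^T B^T
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap an existing layer and train only A and B.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
y = layer(torch.randn(4, 768))

Because B starts at zero, the wrapped layer initially reproduces the pretrained output, and the number of trainable parameters scales with the chosen rank rather than the full weight matrix.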
Papers
Improving the Generalization of Unseen Crowd Behaviors for Reinforcement Learning based Local Motion Planners
Wen Zheng Terence Ng, Jianda Chen, Sinno Jialin Pan, Tianwei Zhang
When to Trust Your Data: Enhancing Dyna-Style Model-Based Reinforcement Learning With Data Filter
Yansong Li, Zeyu Dong, Ertai Luo, Yu Wu, Shuo Wu, Shuo Han
Data-adaptive Differentially Private Prompt Synthesis for In-Context Learning
Fengyu Gao, Ruida Zhou, Tianhao Wang, Cong Shen, Jing Yang
Learning with Importance Weighted Variational Inference: Asymptotics for Gradient Estimators of the VR-IWAE Bound
Kamélia Daudel, François Roueff
LocoMotion: Learning Motion-Focused Video-Language Representations
Hazel Doughty, Fida Mohammad Thoker, Cees G. M. Snoek
Learning Goal-oriented Bimanual Dough Rolling Using Dynamic Heterogeneous Graph Based on Human Demonstration
Junjia Liu, Chenzui Li, Shixiong Wang, Zhipeng Dong, Sylvain Calinon, Miao Li, Fei Chen
Advanced Persistent Threats (APT) Attribution Using Deep Reinforcement Learning
Animesh Singh Basnet, Mohamed Chahine Ghanem, Dipo Dunsin, Wiktor Sowinski-Mydlarz
Towards More Effective Table-to-Text Generation: Assessing In-Context Learning and Self-Evaluation with Open-Source Models
Sahar Iravani, Tim O. F. Conrad
Reducing Labeling Costs in Sentiment Analysis via Semi-Supervised Learning
Minoo Jafarlou, Mario M. Kubek
Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning
Jingyang Li, Jiachun Pan, Vincent Y. F. Tan, Kim-Chuan Toh, Pan Zhou
Cross-Modal Few-Shot Learning: a Generative Transfer Learning Framework
Zhengwei Yang, Yuke Li, Qiang Sun, Basura Fernando, Heng Huang, Zheng Wang
Neural networks that overcome classic challenges through practice
Kazuki Irie, Brenden M. Lake
Inverse Problems and Data Assimilation: A Machine Learning Approach
Eviatar Bach, Ricardo Baptista, Daniel Sanz-Alonso, Andrew Stuart
Learning via Surrogate PAC-Bayes
Antoine Picard-Weibel, Roman Moscoviz, Benjamin Guedj (UCL, UCL-CS, Inria, Inria-London, MODAL)
KNN Transformer with Pyramid Prompts for Few-Shot Learning
Wenhao Li, Qiangchang Wang, Peng Zhao, Yilong Yin
Physical Consistency Bridges Heterogeneous Data in Molecular Multi-Task Learning
Yuxuan Ren, Dihan Zheng, Chang Liu, Peiran Jin, Yu Shi, Lin Huang, Jiyan He, Shengjie Luo, Tao Qin, Tie-Yan Liu
Real-time Monitoring of Lower Limb Movement Resistance Based on Deep Learning
Buren Batu, Yuanmeng Liu, Tianyi Lyu
Learning to Rank for Multiple Retrieval-Augmented Models through Iterative Utility Maximization
Alireza Salemi, Hamed Zamani
Learning Pattern-Specific Experts for Time Series Forecasting Under Patch-level Distribution Shift
Yanru Sun, Zongxia Xie, Emadeldeen Eldele, Dongyue Chen, Qinghua Hu, Min Wu
SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning
Hojoon Lee, Dongyoon Hwang, Donghu Kim, Hyunseung Kim, Jun Jet Tai, Kaushik Subramanian, Peter R. Wurman, Jaegul Choo, Peter Stone, Takuma Seno