Learning
Abstract
Learning, in the context of these papers, encompasses a broad range of research aimed at improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data such as tabular formats, and on leveraging low-rank adaptations for efficient fine-tuning of large language models and other foundation models. These advances matter because they address key challenges in data efficiency, computational cost, and generalization, with impact on fields ranging from personalized medicine to autonomous robotics.
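The low-rank adaptation idea referenced above can be summarized briefly: a pretrained weight matrix is frozen, and only a small trainable low-rank correction is learned on top of it. The sketch below is a generic PyTorch illustration of that mechanism, not code from any of the listed papers; the class name LoRALinear and the rank and scaling defaults are illustrative assumptions.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer augmented with a trainable low-rank update."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # the pretrained weights stay fixed

        in_features, out_features = base.in_features, base.out_features
        # Low-rank factors: only rank * (in_features + out_features) extra parameters are trained.
        self.lora_A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen output plus the scaled low-rank correction x @ A^T @ B^T.
        return self.base(x) + self.scaling * (x @ self.lora_A.T @ self.lora_B.T)


# Example: adapt a single projection layer of an otherwise frozen model.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
x = torch.randn(4, 768)
print(layer(x).shape)  # torch.Size([4, 768])

Because lora_B is initialized to zero, the adapted layer starts out identical to the frozen base layer, and only the small factors are updated during fine-tuning.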
Papers
Learning to Stabilize Unknown LTI Systems on a Single Trajectory under Stochastic Noise
Ziyi Zhang, Yorie Nakahira, Guannan Qu
Learning to Clarify: Multi-turn Conversations with Action-Based Contrastive Self-Training
Maximillian Chen, Ruoxi Sun, Sercan Ö. Arık, Tomas Pfister
Learning to Estimate System Specifications in Linear Temporal Logic using Transformers and Mamba
İlker Işık, Ebru Aydin Gol, Ramazan Gokberk Cinbis
An iterated learning model of language change that mixes supervised and unsupervised learning
Jack Bunyan, Seth Bullock, Conor Houghton
Learning on Large Graphs using Intersecting Communities
Ben Finkelshtein, İsmail İlkan Ceylan, Michael Bronstein, Ron Levie
Provably Efficient Interactive-Grounded Learning with Personalized Reward
Mengxiao Zhang, Yuheng Zhang, Haipeng Luo, Paul Mineiro
Sign is Not a Remedy: Multiset-to-Multiset Message Passing for Learning on Heterophilic Graphs
Langzhang Liang, Sunwoo Kim, Kijung Shin, Zenglin Xu, Shirui Pan, Yuan Qi
Identifying while Learning for Document Event Causality Identification
Cheng Liu, Wei Xiang, Bang Wang
Searching for internal symbols underlying deep learning
Jung H. Lee, Sujith Vijayan
SPAM: Stochastic Proximal Point Method with Momentum Variance Reduction for Non-convex Cross-Device Federated Learning
Avetik Karagulyan, Egor Shulgin, Abdurakhmon Sadiev, Peter Richtárik
Learning to Discuss Strategically: A Case Study on One Night Ultimate Werewolf
Xuanfa Jin, Ziyan Wang, Yali Du, Meng Fang, Haifeng Zhang, Jun Wang
Multimodal Cross-Domain Few-Shot Learning for Egocentric Action Recognition
Masashi Hatano, Ryo Hachiuma, Ryo Fujii, Hideo Saito
Learning from Random Demonstrations: Offline Reinforcement Learning with Importance-Sampled Diffusion Models
Zeyu Fang, Tian Lan
Learning to Recover from Plan Execution Errors during Robot Manipulation: A Neuro-symbolic Approach
Namasivayam Kalithasan, Arnav Tuli, Vishal Bindal, Himanshu Gaurav Singh, Parag Singla, Rohan Paul
Learning Mixture-of-Experts for General-Purpose Black-Box Discrete Optimization
Shengcai Liu, Zhiyuan Wang, Yew-Soon Ong, Xin Yao, Ke Tang
Learning to Continually Learn with the Bayesian Principle
Soochan Lee, Hyeonseong Jeon, Jaehyeon Son, Gunhee Kim
Vim-F: Visual State Space Model Benefiting from Learning in the Frequency Domain
Juntao Zhang, Kun Bian, Peng Cheng, Wenbo An, Jianning Liu, Jun Zhou