Learning Abstract
Learning, in the context of these papers, encompasses a broad range of research focused on improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data like tabular formats, and on leveraging low-rank adaptations for efficient fine-tuning of large language and other foundation models. These advancements are significant because they address key challenges in data efficiency, computational cost, and the generalization capabilities of machine learning systems, impacting fields ranging from personalized medicine to autonomous robotics.
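The low-rank adaptation idea mentioned above can be sketched in a few lines. This is a minimal, illustrative example (not any specific paper's method): a frozen linear layer's weight W is augmented with a trainable rank-r correction B @ A, so only the small matrices A and B are updated during fine-tuning. All names and shapes here are assumptions chosen for the sketch.

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Forward pass through a linear layer with a low-rank update.

    x: (batch, d_in) inputs
    W: (d_out, d_in) frozen pretrained weight
    A: (r, d_in)  trainable down-projection
    B: (d_out, r) trainable up-projection (zero-initialized)
    """
    r = A.shape[0]
    delta = (alpha / r) * (B @ A)  # rank-r correction to W
    return x @ (W + delta).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.normal(size=(d_out, d_in))      # frozen base weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable
B = np.zeros((d_out, r))                # trainable, zero-init
x = rng.normal(size=(3, d_in))

# With B initialized to zero, the adapted layer reproduces the base layer,
# so fine-tuning starts from the pretrained model's behavior.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

The appeal is the parameter count: A and B together hold r * (d_in + d_out) values instead of d_in * d_out, which is what makes fine-tuning large foundation models tractable.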
Papers
SmartPretrain: Model-Agnostic and Dataset-Agnostic Representation Learning for Motion Prediction
Yang Zhou, Hao Shao, Letian Wang, Steven L. Waslander, Hongsheng Li, Yu Liu
Score Neural Operator: A Generative Model for Learning and Generalizing Across Multiple Probability Distributions
Xinyu Liao, Aoyang Qin, Jacob Seidman, Junqi Wang, Wei Wang, Paris Perdikaris
Slow Convergence of Interacting Kalman Filters in Word-of-Mouth Social Learning
Vikram Krishnamurthy, Cristian Rojas
Learning to Balance: Diverse Normalization for Cloth-Changing Person Re-Identification
Hongjun Wang, Jiyuan Chen, Zhengwei Yin, Xuan Song, Yinqiang Zheng
Learning Cortico-Muscular Dependence through Orthonormal Decomposition of Density Ratios
Shihan Ma, Bo Hu, Tianyu Jia, Alexander Kenneth Clarke, Blanka Zicher, Arnault H. Caillet, Dario Farina, Jose C. Principe
Learning to steer with Brownian noise
Stefan Ankirchner, Sören Christensen, Jan Kallsen, Philip Le Borne, Stefan Perko
Inverse Entropic Optimal Transport Solves Semi-supervised Learning via Data Likelihood Maximization
Mikhail Persiianov, Arip Asadulaev, Nikita Andreev, Nikita Starodubcev, Dmitry Baranchuk, Anastasis Kratsios, Evgeny Burnaev, Alexander Korotin
Learning from Offline Foundation Features with Tensor Augmentations
Emir Konuk, Christos Matsoukas, Moein Sorkhei, Phitchapha Lertsiravaramet, Kevin Smith
SGW-based Multi-Task Learning in Vision Tasks
Ruiyuan Zhang, Yuyao Chen, Yuchi Huo, Jiaxiang Liu, Dianbing Xi, Jie Liu, Chao Wu
Learning K-U-Net with constant complexity: An Application to time series forecasting
Jiang You, Arben Cela, René Natowicz, Jacob Ouanounou, Patrick Siarry
Distributed Learning with Discretely Observed Functional Data
Jiading Liu, Lei Shi