Learning
Abstract
Learning, in the context of these papers, encompasses a broad range of research aimed at improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data such as tabular formats, and on leveraging low-rank adaptations for efficient fine-tuning of large language models and other foundation models. These advances matter because they address key challenges in data efficiency, computational cost, and generalization, with impact on fields ranging from personalized medicine to autonomous robotics.
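To make the low-rank adaptation theme above concrete, the following is a minimal PyTorch sketch of the general idea: a frozen pretrained linear layer augmented with a trainable low-rank update. The class name, rank, and scaling values are illustrative assumptions and are not taken from any of the listed papers.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative sketch: frozen base layer plus a trainable low-rank update (W + B A)."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Only these two small factors are trained during fine-tuning.
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Base output plus the scaled low-rank correction.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

# Example: wrap an existing projection; optimizing only lora_A and lora_B
# keeps the number of trainable parameters small.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
out = layer(torch.randn(2, 768))
```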
Papers
Explainable AI in Handwriting Detection for Dyslexia Using Transfer Learning
Mahmoud Robaa, Mazen Balat, Rewaa Awaad, Esraa Omar, Salah A. Aly
Fine-Tuning Pre-trained Language Models for Robust Causal Representation Learning
Jialin Yu, Yuxiang Zhou, Yulan He, Nevin L. Zhang, Ricardo Silva
Learning autonomous driving from aerial imagery
Varun Murali, Guy Rosman, Sertac Karaman, Daniela Rus
Diffusing States and Matching Scores: A New Framework for Imitation Learning
Runzhe Wu, Yiding Chen, Gokul Swamy, Kianté Brantley, Wen Sun
Enhancing Text Generation in Joint NLG/NLU Learning Through Curriculum Learning, Semi-Supervised Training, and Advanced Optimization Techniques
Rahimanuddin Shaik, Katikela Sreeharsha Kishore
Learning to Route with Confidence Tokens
Yu-Neng Chuang, Helen Zhou, Prathusha Kameswara Sarma, Parikshit Gopalan, John Boccio, Sara Bolouki, Xia Hu
From PINNs to PIKANs: Recent Advances in Physics-Informed Machine Learning
Juan Diego Toscano, Vivek Oommen, Alan John Varghese, Zongren Zou, Nazanin Ahmadi Daryakenari, Chenxi Wu, George Em Karniadakis
Investigating Effective Speaker Property Privacy Protection in Federated Learning for Speech Emotion Recognition
Chao Tan, Sheng Li, Yang Cao, Zhao Ren, Tanja Schultz
Geometry-Aware Generative Autoencoders for Warped Riemannian Metric Learning and Generative Modeling on Data Manifolds
Xingzhi Sun, Danqi Liao, Kincaid MacDonald, Yanlei Zhang, Chen Liu, Guillaume Huguet, Guy Wolf, Ian Adelstein, Tim G. J. Rudner, Smita Krishnaswamy
Learning to Predict Usage Options of Product Reviews with LLM-Generated Labels
Leo Kohlenberg, Leonard Horns, Frederic Sadrieh, Nils Kiele, Matthis Clausen, Konstantin Ketterer, Avetis Navasardyan, Tamara Czinczoll, Gerard de Melo, Ralf Herbrich
Improving the Generalization of Unseen Crowd Behaviors for Reinforcement Learning based Local Motion Planners
Wen Zheng Terence Ng, Jianda Chen, Sinno Jialin Pan, Tianwei Zhang
When to Trust Your Data: Enhancing Dyna-Style Model-Based Reinforcement Learning With Data Filter
Yansong Li, Zeyu Dong, Ertai Luo, Yu Wu, Shuo Wu, Shuo Han
Data-adaptive Differentially Private Prompt Synthesis for In-Context Learning
Fengyu Gao, Ruida Zhou, Tianhao Wang, Cong Shen, Jing Yang
Learning with Importance Weighted Variational Inference: Asymptotics for Gradient Estimators of the VR-IWAE Bound
Kamélia Daudel, François Roueff
LocoMotion: Learning Motion-Focused Video-Language Representations
Hazel Doughty, Fida Mohammad Thoker, Cees G. M. Snoek
Learning Goal-oriented Bimanual Dough Rolling Using Dynamic Heterogeneous Graph Based on Human Demonstration
Junjia Liu, Chenzui Li, Shixiong Wang, Zhipeng Dong, Sylvain Calinon, Miao Li, Fei Chen
Advanced Persistent Threats (APT) Attribution Using Deep Reinforcement Learning
Animesh Singh Basnet, Mohamed Chahine Ghanem, Dipo Dunsin, Wiktor Sowinski-Mydlarz
Towards More Effective Table-to-Text Generation: Assessing In-Context Learning and Self-Evaluation with Open-Source Models
Sahar Iravani, Tim O. F. Conrad
Reducing Labeling Costs in Sentiment Analysis via Semi-Supervised Learning
Minoo Jafarlou, Mario M. Kubek
Towards Understanding Why FixMatch Generalizes Better Than Supervised Learning
Jingyang Li, Jiachun Pan, Vincent Y. F. Tan, Kim-Chuan Toh, Pan Zhou