Learning Abstract
Learning, in the context of these papers, encompasses a broad range of research focused on improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data like tabular formats, and on leveraging low-rank adaptations for efficient fine-tuning of large language and other foundation models. These advancements are significant because they address key challenges in data efficiency, computational cost, and the generalization capabilities of machine learning systems, impacting fields ranging from personalized medicine to autonomous robotics.
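The low-rank adaptation idea mentioned above can be sketched in a few lines. This is a generic illustration (dimensions, names, and initialization are illustrative assumptions, not drawn from any of the listed papers): a frozen weight matrix `W` is augmented with a trainable update `B @ A` of rank `r`, so only `r * (d_in + d_out)` parameters are trained instead of `d_in * d_out`.

```python
import numpy as np

# Sketch of low-rank adaptation (LoRA-style): instead of updating a full
# weight matrix W (d_out x d_in), train two small factors A (r x d_in) and
# B (d_out x r); the effective weight is W + (alpha / r) * B @ A.
rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 16, 8, 2, 4.0

W = rng.standard_normal((d_out, d_in))       # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection (zero init)

def forward(x):
    """Adapted forward pass: base weight plus the scaled low-rank update."""
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((4, d_in))
# With B initialized to zero, the adapted model matches the base model exactly,
# so fine-tuning starts from the pretrained behavior.
print(np.allclose(forward(x), x @ W.T))  # True

# Trainable parameters: r*(d_in + d_out) = 48 here, versus d_in*d_out = 128
# for full fine-tuning of this layer.
print(r * (d_in + d_out), d_in * d_out)
```

The zero initialization of `B` is the standard trick that makes the adapted layer a no-op before training begins; the savings grow quickly with layer size, since the low-rank cost scales with `d_in + d_out` rather than their product.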
Papers
Interacting Large Language Model Agents. Interpretable Models and Social Learning
Adit Jain, Vikram Krishnamurthy
Covariance-based Space Regularization for Few-shot Class Incremental Learning
Yijie Hu, Guanyu Yang, Zhaorui Tan, Xiaowei Huang, Kaizhu Huang, Qiu-Feng Wang
LEARNER: Learning Granular Labels from Coarse Labels using Contrastive Learning
Gautam Gare, Jana Armouti, Nikhil Madaan, Rohan Panda, Tom Fox, Laura Hutchins, Amita Krishnan, Ricardo Rodriguez, Bennett DeBoisblanc, Deva Ramanan, John Galeotti
Learning in Markov Games with Adaptive Adversaries: Policy Regret, Fundamental Barriers, and Efficient Algorithms
Thanh Nguyen-Tang, Raman Arora
Learning to Look Around: Enhancing Teleoperation and Learning with a Human-like Actuated Neck
Bipasha Sen, Michelle Wang, Nandini Thakur, Aditya Agarwal, Pulkit Agrawal
Enhancing Adaptive Mixed-Criticality Scheduling with Deep Reinforcement Learning
Bruno Mendes (1), Pedro F. Souto (1, 2), Pedro C. Diniz (2) ((1) Department of Informatics Engineering (DEI), Faculty of Engineering of the University of Porto (FEUP); (2) CISTER Research Centre)
Active Preference-based Learning for Multi-dimensional Personalization
Minhyeon Oh, Seungjoon Lee, Jungseul Ok
CLIP-RT: Learning Language-Conditioned Robotic Policies from Natural Language Supervision
Gi-Cheon Kang, Junghyun Kim, Kyuhwan Shim, Jun Ki Lee, Byoung-Tak Zhang
Adapting While Learning: Grounding LLMs for Scientific Problems with Intelligent Tool Usage Adaptation
Bohan Lyu, Yadi Cao, Duncan Watson-Parris, Leon Bergen, Taylor Berg-Kirkpatrick, Rose Yu
Learning to Rank Salient Content for Query-focused Summarization
Sajad Sotudeh, Nazli Goharian
C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning
Yeachan Kim, Junho Kim, Wing-Lam Mok, Jun-Hyung Park, SangKeun Lee
Space for Improvement: Navigating the Design Space for Federated Learning in Satellite Constellations
Grace Kim, Luca Powell, Filip Svoboda, Nicholas Lane
Compositional Automata Embeddings for Goal-Conditioned Reinforcement Learning
Beyazit Yalcinkaya, Niklas Lauffer, Marcell Vazquez-Chanlatte, Sanjit A. Seshia
Progressive Safeguards for Safe and Model-Agnostic Reinforcement Learning
Nabil Omi, Hosein Hasanbeig, Hiteshi Sharma, Sriram K. Rajamani, Siddhartha Sen
Interactive proofs for verifying (quantum) learning and testing
Matthias C. Caro, Jens Eisert, Marcel Hinsche, Marios Ioannou, Alexander Nietner, Ryan Sweke
A Non-Monolithic Policy Approach of Offline-to-Online Reinforcement Learning
JaeYoon Kim, Junyu Xuan, Christy Liang, Farookh Hussain
VecCity: A Taxonomy-guided Library for Map Entity Representation Learning
Wentao Zhang, Jingyuan Wang, Yifan Yang, Leong Hou U