Learning Abstract
Learning, in the context of these papers, encompasses a broad range of research focused on improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data such as tabular formats, and on leveraging low-rank adaptation for efficient fine-tuning of large language models and other foundation models. These advances matter because they address key challenges in data efficiency, computational cost, and generalization, with impact on fields ranging from personalized medicine to autonomous robotics.
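Since the abstract highlights low-rank adaptation as a recurring theme, a minimal sketch of the idea follows, assuming PyTorch; the class name LoRALinear and the hyperparameters r and alpha are illustrative choices, not taken from any paper listed below.

```python
# Minimal sketch of low-rank adaptation (LoRA-style) applied to a frozen linear layer.
# LoRALinear, r, and alpha are illustrative assumptions, not from any listed paper.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pretrained weights
        # Trainable low-rank factors: delta_W = B @ A, with rank r << min(d_in, d_out)
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update; B starts at zero,
        # so the wrapped layer initially behaves exactly like the base layer.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

Only the factors A and B receive gradients, so the trainable parameter count per layer drops from in_features × out_features to r × (in_features + out_features), which is where the fine-tuning efficiency comes from.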
Papers
RA-RLHF: Provably Efficient Risk-Aware Reinforcement Learning from Human Feedback
Yujie Zhao, Jose Efraim Aguilar Escamilla, Weyl Lu, Huazheng Wang
LEAF: Learning and Evaluation Augmented by Fact-Checking to Improve Factualness in Large Language Models
Hieu Tran, Junda Wang, Yujan Ting, Weijing Huang, Terrence Chen
Kernel-Based Function Approximation for Average Reward Reinforcement Learning: An Optimist No-Regret Algorithm
Sattar Vakili, Julia Olkhovskaya
Return Augmented Decision Transformer for Off-Dynamics Reinforcement Learning
Ruhan Wang, Yu Yang, Zhishuai Liu, Dongruo Zhou, Pan Xu
Learning and Transferring Sparse Contextual Bigrams with Linear Transformers
Yunwei Ren, Zixuan Wang, Jason D. Lee
eDOC: Explainable Decoding Out-of-domain Cell Types with Evidential Learning
Chaochen Wu, Meiyun Zuo, Lei Xie
Learning for Deformable Linear Object Insertion Leveraging Flexibility Estimation from Visual Cues
Mingen Li, Changhyun Choi
On the Optimality of Dilated Entropy and Lower Bounds for Online Learning in Extensive-Form Games
Zhiyuan Fan, Christian Kroer, Gabriele Farina
Keypoint Abstraction using Large Models for Object-Relative Imitation Learning
Xiaolin Fang, Bo-Ruei Huang, Jiayuan Mao, Jasmine Shone, Joshua B. Tenenbaum, Tomás Lozano-Pérez, Leslie Pack Kaelbling
Planning and Learning in Risk-Aware Restless Multi-Arm Bandit Problem
Nima Akbarzadeh, Erick Delage, Yossiri Adulyasak
Explainable Behavior Cloning: Teaching Large Language Model Agents through Learning by Demonstration
Yanchu Guan, Dong Wang, Yan Wang, Haiqing Wang, Renen Sun, Chenyi Zhuang, Jinjie Gu, Zhixuan Chu
DOA-Aware Audio-Visual Self-Supervised Learning for Sound Event Localization and Detection
Yoto Fujita, Yoshiaki Bando, Keisuke Imoto, Masaki Onishi, Kazuyoshi Yoshii
SoftCTRL: Soft conservative KL-control of Transformer Reinforcement Learning for Autonomous Driving
Minh Tri Huynh, Duc Dung Nguyen
Calibrating Practical Privacy Risks for Differentially Private Machine Learning
Yuechun Gu, Keke Chen
Learning and Unlearning of Fabricated Knowledge in Language Models
Chen Sun, Nolan Andrew Miller, Andrey Zhmoginov, Max Vladymyrov, Mark Sandler
BF-Meta: Secure Blockchain-enhanced Privacy-preserving Federated Learning for Metaverse
Wenbo Liu, Handi Chen, Edith C.H. Ngai
Comment on "Is Complexity an Illusion?"
Gabriel Simmons
Revisiting Multi-Granularity Representation via Group Contrastive Learning for Unsupervised Vehicle Re-identification
Zhigang Chang, Shibao Zheng
SeriesGAN: Time Series Generation via Adversarial and Autoregressive Learning
MohammadReza EskandariNasab, Shah Muhammad Hamdi, Soukaina Filali Boubrahimi
A Unified Solution to Diverse Heterogeneities in One-shot Federated Learning
Jun Bai, Yiliao Song, Di Wu, Atul Sajjanhar, Yong Xiang, Wei Zhou, Xiaohui Tao, Yan Li