Learning Abstract
Learning, in the context of these papers, spans research on improving the efficiency, robustness, and adaptability of machine learning models across diverse applications. Current efforts concentrate on developing novel self-supervised learning techniques, particularly for structured data such as tabular formats, and on leveraging low-rank adaptation for efficient fine-tuning of large language models and other foundation models. These advances matter because they address key challenges in data efficiency, computational cost, and generalization, with impact on fields ranging from personalized medicine to autonomous robotics.
Papers
A Unified Theory of Exact Inference and Learning in Exponential Family Latent Variable Models
Sacha Sokoloski
PEFSL: A deployment Pipeline for Embedded Few-Shot Learning on a FPGA SoC
Lucas Grativol Ribeiro, Lubin Gauthier, Mathieu Leonardon, Jérémy Morlier, Antoine Lavrard-Meyer, Guillaume Muller, Virginie Fresse, Matthieu Arzel
Learning to Communicate Functional States with Nonverbal Expressions for Improved Human-Robot Collaboration
Liam Roy, Dana Kulic, Elizabeth Croft
Learning general Gaussian mixtures with efficient score matching
Sitan Chen, Vasilis Kontonis, Kulin Shah
Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks
Fanghui Liu, Leello Dadi, Volkan Cevher
A Framework for Learning and Reusing Robotic Skills
Brendan Hertel, Nhu Tran, Meriem Elkoudi, Reza Azadeh
Meta-Transfer Derm-Diagnosis: Exploring Few-Shot Learning and Transfer Learning for Skin Disease Classification in Long-Tail Distribution
Zeynep Özdemir, Hacer Yalim Keles, Ömer Özgür Tanrıöver
Learning to Beat ByteRL: Exploitability of Collectible Card Game Agents
Radovan Haluska, Martin Schmid
Neural Assembler: Learning to Generate Fine-Grained Robotic Assembly Instructions from Multi-View Images
Hongyu Yan, Yadong Mu