Formality Transfer
Formality transfer, in its broadest sense, aims to leverage knowledge learned in one context (the "source") to improve performance in a related but different context (the "target"). Current research adapts this idea across diverse domains, using techniques such as transfer learning in neural networks (including transformers and convolutional networks), multi-armed bandit algorithms, and knowledge distillation. This line of work matters because it addresses data scarcity: when labeled target-domain data are limited, reusing source-domain knowledge improves efficiency and performance in areas such as financial prediction, robotic manipulation, and natural language processing.
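The core mechanic shared by these techniques, reusing a representation learned on a source task while training only a small task-specific component on the target task, can be illustrated with a minimal sketch. Everything below is illustrative and not taken from any of the listed papers: the "pretrained" extractor is a stand-in (a fixed random projection), and the target task is a toy 2-d classification problem.

```python
import math
import random

random.seed(0)

# "Source-pretrained" feature extractor. In real transfer learning these
# weights come from training on a large source task; here a fixed random
# projection stands in for them (an assumption for illustration only).
W_src = [[random.gauss(0, 1) for _ in range(2)] for _ in range(4)]

def features(x):
    # Frozen extractor: map a 2-d input to 4 features; never updated.
    return [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in W_src]

# Tiny target task: label points by whether x0 + x1 > 0.
points = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(40)]
data = [(p, 1.0 if p[0] + p[1] > 0 else 0.0) for p in points]

# Only the new task head (4 weights + bias) is trained on the target data.
head = [0.0] * 4
bias = 0.0

def predict(x):
    z = sum(h * f for h, f in zip(head, features(x))) + bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic output in (0, 1)

lr = 0.5
for _ in range(200):                   # gradient descent on the head only
    for x, y in data:
        g = predict(x) - y             # d(log-loss)/dz for logistic loss
        f = features(x)
        for i in range(4):
            head[i] -= lr * g * f[i]
        bias -= lr * g

acc = sum((predict(x) > 0.5) == (y > 0.5) for x, y in data) / len(data)
print(f"target-task accuracy: {acc:.2f}")
```

The design point is the freeze/train split: the source knowledge (here `W_src`) is held fixed, so the target task needs only enough data to fit a few head parameters rather than the full model, which is exactly the data-scarcity argument made above.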
Papers
Strong but simple: A Baseline for Domain Generalized Dense Perception by CLIP-based Transfer Learning
Christoph Hümmer, Manuel Schwonberg, Liangwei Zhou, Hu Cao, Alois Knoll, Hanno Gottschalk
Foundations for Transfer in Reinforcement Learning: A Taxonomy of Knowledge Modalities
Markus Wulfmeier, Arunkumar Byravan, Sarah Bechtle, Karol Hausman, Nicolas Heess
Transfer of Reinforcement Learning-Based Controllers from Model- to Hardware-in-the-Loop
Mario Picerno, Lucas Koch, Kevin Badalian, Marius Wegener, Joschka Schaub, Charles Robert Koch, Jakob Andert
Bridging the Human-AI Knowledge Gap: Concept Discovery and Transfer in AlphaZero
Lisa Schut, Nenad Tomasev, Tom McGrath, Demis Hassabis, Ulrich Paquet, Been Kim