Formality Transfer
Formality transfer, in its broadest sense, aims to leverage knowledge learned in one context (the "source") to improve performance in a related but different context (the "target"). Current research focuses on adapting this concept across diverse domains, employing techniques like transfer learning within neural networks (including transformers and convolutional neural networks), multi-armed bandit algorithms, and knowledge distillation. This research is significant because it addresses the challenge of data scarcity in many applications, improving efficiency and performance in areas such as financial prediction, robotic manipulation, and natural language processing.
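To make the source-to-target idea concrete, here is a minimal transfer-learning sketch: an ImageNet-pretrained ResNet-18 (the "source") is adapted to a small, hypothetical 5-class target task by freezing the backbone and training only a new classification head. It assumes PyTorch and torchvision and is an illustration of the general technique, not a method from any of the papers listed below.

```python
# Minimal transfer-learning sketch (assumes PyTorch/torchvision are installed).
# Source knowledge: ImageNet weights. Target: a hypothetical 5-class task with
# scarce data, so only a new classification head is trained.
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 5  # hypothetical target task size

# Load the source model with pretrained ImageNet weights.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained backbone so its weights are reused, not relearned.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a head sized for the target task.
model.fc = nn.Linear(model.fc.in_features, NUM_TARGET_CLASSES)

# Optimize only the parameters that still require gradients (the new head).
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch standing in for target data.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, NUM_TARGET_CLASSES, (8,))

model.train()
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```

Freezing the backbone is the simplest design choice for scarce target data; with more target examples, later layers can be unfrozen and fine-tuned at a lower learning rate.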
Papers
Local transfer learning Gaussian process modeling, with applications to surrogate modeling of expensive computer simulators
Xinming Wang, Simon Mak, John Miller, Jianguo Wu
TransAgent: Transfer Vision-Language Foundation Models with Heterogeneous Agent Collaboration
Yiwei Guo, Shaobin Zhuang, Kunchang Li, Yu Qiao, Yali Wang
Multichannel Attention Networks with Ensembled Transfer Learning to Recognize Bangla Handwritten Character
Farhanul Haque, Md. Al-Hasan, Sumaiya Tabssum Mou, Abu Saleh Musa Miah, Jungpil Shin, Md Abdur Rahim
Knowledge Sharing and Transfer via Centralized Reward Agent for Multi-Task Reinforcement Learning
Haozhe Ma, Zhengding Luo, Thanh Vinh Vo, Kuankuan Sima, Tze-Yun Leong