Formality Transfer
Formality transfer is a text style transfer task in natural language processing: rewriting text to change its level of formality (for example, informal to formal) while preserving its meaning. The broader research theme represented here is transfer itself, which aims to leverage knowledge learned in one context (the "source") to improve performance in a related but different context (the "target"). Current work adapts this idea across diverse domains using techniques such as transfer learning in neural networks (including transformers and convolutional neural networks), multi-armed bandit algorithms, and knowledge distillation. This research matters because it addresses data scarcity in many applications, improving efficiency and performance in areas such as financial prediction, robotic manipulation, and natural language processing.
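To make the connection between the two senses concrete, the sketch below treats formality transfer as a transfer learning problem: a pretrained sequence-to-sequence transformer (T5 via Hugging Face transformers) is fine-tuned on a tiny parallel corpus of informal/formal sentence pairs. The model name, prompt prefix, example pairs, and hyperparameters are illustrative assumptions, not taken from any of the papers listed below.

```python
# Minimal sketch: formality transfer via fine-tuning a pretrained seq2seq model.
# All data and settings here are toy/illustrative, not from the listed papers.
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# Tiny illustrative parallel corpus (informal -> formal).
pairs = [
    ("gonna be late, sry", "I am going to be late; my apologies."),
    ("u free 2nite?", "Are you available this evening?"),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
model.train()
for informal, formal in pairs:
    # Prefix the input with a task prompt, a common T5 convention.
    inputs = tokenizer("formalize: " + informal, return_tensors="pt")
    labels = tokenizer(formal, return_tensors="pt").input_ids
    loss = model(**inputs, labels=labels).loss  # standard cross-entropy loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# After fine-tuning, generate a formal rewrite of unseen informal input.
model.eval()
test = tokenizer("formalize: thx for the help!", return_tensors="pt")
print(tokenizer.decode(model.generate(**test, max_new_tokens=32)[0],
                       skip_special_tokens=True))
```

In practice a real parallel corpus (e.g., thousands of sentence pairs) and multiple training epochs would be needed; the pretrained weights are what let the model transfer general language knowledge to the low-resource formality task.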
Papers
Perfectly Balanced: Improving Transfer and Robustness of Supervised Contrastive Learning
Mayee F. Chen, Daniel Y. Fu, Avanika Narayan, Michael Zhang, Zhao Song, Kayvon Fatahalian, Christopher Ré
Human Judgement as a Compass to Navigate Automatic Metrics for Formality Transfer
Huiyuan Lai, Jiali Mao, Antonio Toral, Malvina Nissim
Performance of Deep Learning models with transfer learning for multiple-step-ahead forecasts in monthly time series
Martín Solís, Luis-Alexander Calvo-Valverde
CrossAligner & Co: Zero-Shot Transfer Methods for Task-Oriented Cross-lingual Natural Language Understanding
Milan Gritta, Ruoyu Hu, Ignacio Iacobacci