Loss Design
Loss design in machine learning focuses on crafting objective functions that effectively guide model training, optimizing target performance metrics while coping with challenges such as label noise and the need for robustness and generalization. Current research explores diverse loss functions tailored to specific model architectures (e.g., generative flow networks, decision trees) and learning paradigms (e.g., multi-agent reinforcement learning, online learning), and investigates how properties of the loss function affect exploration, exploitation, and convergence. These advances are crucial for improving model accuracy, stability, and reliability across applications ranging from generative modeling and autonomous systems to robust classification and data-efficient training.
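As a small illustration of the robustness considerations mentioned above (not drawn from any of the listed papers), the sketch below implements the classic Huber loss in NumPy. It behaves like squared error for small residuals and like absolute error for large ones, which dampens the influence of outliers; the threshold `delta` is a free design parameter.

```python
import numpy as np

def huber_loss(y_true, y_pred, delta=1.0):
    """Huber loss: quadratic for residuals below `delta`, linear beyond it.

    Compared with plain mean squared error, large residuals (outliers)
    contribute only linearly, making training less sensitive to noise.
    """
    residual = np.abs(y_true - y_pred)
    quadratic = 0.5 * residual ** 2                 # MSE-like region
    linear = delta * (residual - 0.5 * delta)       # MAE-like region
    return np.mean(np.where(residual <= delta, quadratic, linear))

# Example: one inlier (residual 0.5) and one outlier (residual 3.0)
loss = huber_loss(np.array([0.0, 0.0]), np.array([0.5, 3.0]), delta=1.0)
```

With `delta=1.0`, the inlier contributes 0.5 * 0.5² = 0.125 and the outlier contributes 1.0 * (3.0 - 0.5) = 2.5, so the mean loss is 1.3125; under squared error the outlier alone would contribute 4.5, illustrating the dampening effect.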
Papers
A view of mini-batch SGD via generating functions: conditions of convergence, phase transitions, benefit from negative momenta
Maksim Velikanov, Denis Kuznedelev, Dmitry Yarotsky
Dynamic Restrained Uncertainty Weighting Loss for Multitask Learning of Vocal Expression
Meishu Song, Zijiang Yang, Andreas Triantafyllopoulos, Xin Jing, Vincent Karas, Xie Jiangjian, Zixing Zhang, Yamamoto Yoshiharu, Bjoern W. Schuller