Task Loss
Task loss is central to multi-task learning (MTL), where a shared neural network optimizes several task objectives at once and conflicting gradients or noisy data can drag down individual tasks. Current research emphasizes sophisticated weighting schemes, such as uncertainty-based or excess-risk-based approaches, and novel optimization algorithms (e.g., gradient-manipulation techniques) that balance tasks and move training toward Pareto-optimal trade-offs. These advances aim to make MTL more efficient and effective across diverse applications, from manufacturing process optimization to natural language understanding, by enabling more robust and accurate model training.
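To make the idea of uncertainty-based task weighting concrete, here is a minimal sketch, assuming the common formulation in which each task loss is scaled by a learned (log-)variance term: tasks the model is more uncertain about contribute less to the combined objective, while a regularizer prevents the weights from collapsing to zero. The function name and arguments are illustrative, not taken from any specific paper or library.

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with homoscedastic-uncertainty weights
    (hypothetical sketch): each loss L_i is scaled by exp(-s_i) and
    regularized by adding s_i, where s_i = log(sigma_i^2) is a
    learnable per-task parameter in a real training loop."""
    total = 0.0
    for loss, s in zip(task_losses, log_vars):
        # Larger s (more uncertainty) down-weights the task's loss,
        # while the +s term penalizes unboundedly large uncertainty.
        total += math.exp(-s) * loss + s
    return total

# With equal log-variances of zero, this reduces to a plain sum:
combined = uncertainty_weighted_loss([1.0, 4.0], [0.0, 0.0])
```

In practice the `log_vars` would be trainable parameters updated jointly with the network, so the balance between tasks is learned rather than hand-tuned.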