Cross-Task Learning
Cross-task learning focuses on training models to perform multiple related tasks simultaneously, leveraging shared information to improve efficiency and performance compared to training separate models for each task. Current research emphasizes developing architectures that effectively manage cross-task interactions, including transformer-based models and those incorporating mechanisms like mixture-of-experts and attention modules to selectively share information between tasks. This approach is proving valuable in diverse fields, such as healthcare prediction, scene understanding, and natural language processing, by enabling more robust and data-efficient models for complex applications.
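To make the shared-architecture idea concrete, below is a minimal PyTorch sketch of one common pattern the summary mentions: a shared encoder (hard parameter sharing) feeding a small mixture-of-experts layer with per-task gates and task-specific output heads. This is an illustrative sketch under assumed names and sizes (`CrossTaskModel`, `task_a`, `task_b`, the layer dimensions), not the method of any particular paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossTaskModel(nn.Module):
    """Multi-task model: shared encoder + mixture-of-experts with
    task-specific gates + one output head per task (illustrative)."""

    def __init__(self, in_dim, hidden_dim, task_out_dims, num_experts=4):
        super().__init__()
        # Shared encoder: its parameters receive gradients from every task.
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        # A bank of experts; tasks mix them with learned gates, so related
        # tasks can reuse experts while unrelated tasks diverge.
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU())
            for _ in range(num_experts)
        ])
        # One learned gate vector per task (softmax over experts).
        self.gates = nn.ParameterDict({
            task: nn.Parameter(torch.zeros(num_experts))
            for task in task_out_dims
        })
        # Task-specific output heads.
        self.heads = nn.ModuleDict({
            task: nn.Linear(hidden_dim, out_dim)
            for task, out_dim in task_out_dims.items()
        })

    def forward(self, x):
        shared = self.encoder(x)
        # Stack expert outputs: (batch, num_experts, hidden_dim).
        expert_out = torch.stack([e(shared) for e in self.experts], dim=1)
        outputs = {}
        for task, head in self.heads.items():
            weights = torch.softmax(self.gates[task], dim=0)          # (E,)
            mixed = (weights[None, :, None] * expert_out).sum(dim=1)  # (B, H)
            outputs[task] = head(mixed)
        return outputs

# Joint training step: summing per-task losses lets gradients from all
# tasks jointly update the shared encoder and the expert bank.
model = CrossTaskModel(in_dim=16, hidden_dim=32,
                       task_out_dims={"task_a": 3, "task_b": 1})
x = torch.randn(8, 16)
out = model(x)
loss = (F.cross_entropy(out["task_a"], torch.randint(0, 3, (8,)))
        + F.mse_loss(out["task_b"], torch.randn(8, 1)))
loss.backward()
```

The per-task gates are the simplest stand-in for the selective-sharing mechanisms noted above; published systems typically replace them with input-conditioned gating networks or cross-task attention modules.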