Attention-Based Aggregation
Attention-based aggregation improves machine learning models by assigning learned weights to different parts of the input, so that the model focuses on the most relevant information. Current research applies the technique within a range of architectures, including graph neural networks and transformers, to tasks such as fraud detection, activity recognition, and multi-task learning in federated settings. By producing more robust and informative feature representations, it yields gains in accuracy and efficiency across diverse domains, from computer vision and natural language processing to healthcare and particle physics.
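To make the idea concrete, here is a minimal sketch of attention-based aggregation in NumPy: each input element receives a relevance score, the scores are normalized with a softmax, and the output is the resulting weighted sum. The scoring vector `w` would normally be learned; here it is fixed for illustration, and all names are illustrative rather than taken from any specific paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_aggregate(features, w):
    """Aggregate a set of feature vectors into a single vector.

    features: (n, d) array of n input elements.
    w: (d,) scoring vector (learned in practice; fixed here).
    Returns the attention-weighted sum (d,) and the weights (n,).
    """
    scores = features @ w          # one relevance score per element
    weights = softmax(scores)      # normalize scores to a distribution
    pooled = weights @ features    # weighted sum of the input elements
    return pooled, weights

rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))   # 5 elements, each an 8-dim feature
w = rng.normal(size=8)
pooled, weights = attention_aggregate(feats, w)
print(pooled.shape)
```

Unlike mean or max pooling, the weights adapt to the content of the input: elements whose features score higher against `w` dominate the aggregate, which is what lets the model "attend" to the most relevant parts.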