Attention-Based Reasoning
Attention-based reasoning investigates how computational models can selectively focus on relevant information to perform complex tasks, mirroring human cognitive processes. Current research emphasizes improving the interpretability of attention mechanisms, particularly within transformer-based architectures, by aligning model attention with human attention patterns (e.g., gaze data) and developing methods to explain model decisions based on attended features. This work is significant for enhancing the robustness, explainability, and ultimately the trustworthiness of AI systems across diverse applications, including visual question answering, sentiment analysis, and group activity recognition.
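The attention mechanisms described above can be illustrated with a minimal sketch of scaled dot-product attention, the core operation in transformer architectures. This is the standard textbook formulation, not the method of any specific paper listed below; the returned weight matrix is exactly the kind of "attended features" map that interpretability work compares against human gaze data:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Compute softmax(Q K^T / sqrt(d_k)) V and return the attention weights too."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
    # Numerically stable softmax over the key axis; each row sums to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Output is a weight-mixed combination of the values; the weights
    # themselves can be inspected or visualized for interpretability.
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 4))  # 2 queries, dimension 4
K = rng.normal(size=(3, 4))  # 3 keys
V = rng.normal(size=(3, 4))  # 3 values
output, weights = scaled_dot_product_attention(Q, K, V)
```

Because each row of `weights` is a probability distribution over the keys, it offers a direct (if contested) window into which inputs the model focused on for a given query.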
Papers
- October 29, 2024
- October 21, 2024
- May 16, 2024
- March 23, 2023
- March 15, 2023
- September 8, 2022
- June 10, 2022
- May 25, 2022
- December 11, 2021