Deep Learning Representations
Deep learning representations, the internal feature maps that neural networks learn, are central to understanding how these models process information and make predictions. Current research focuses on improving the quality and interpretability of these representations, using techniques such as self-supervised learning and graph-based models to boost performance on downstream tasks, including medical image analysis and spatiotemporal modeling. This work matters because it addresses concept drift detection, data scarcity, and the need for efficient, explainable AI systems across diverse applications, ultimately yielding more robust and reliable deep learning models.
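To make the idea of an "internal representation" concrete, here is a minimal NumPy sketch of a toy two-layer MLP. The model (`TinyMLP`), its dimensions, and all weights are illustrative assumptions, not from any paper above; the point is only that the hidden-layer activation `h` computed on the way to a prediction is the representation that downstream analysis and interpretability work inspect.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class TinyMLP:
    """Hypothetical two-layer network exposing its hidden representation."""

    def __init__(self, d_in=4, d_hidden=8, d_out=2):
        self.W1 = rng.normal(size=(d_in, d_hidden))
        self.W2 = rng.normal(size=(d_hidden, d_out))

    def forward(self, x):
        h = relu(x @ self.W1)  # internal representation (feature map)
        y = h @ self.W2        # prediction head built on top of it
        return y, h            # expose both the output and the representation

model = TinyMLP()
x = rng.normal(size=(1, 4))
y, h = model.forward(x)
print(h.shape)  # the learned representation is a (1, 8) feature vector
```

In practical frameworks the same pattern appears as hook- or probe-based extraction of intermediate activations, which are then reused for downstream tasks or inspected for interpretability and drift analysis.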