Hierarchical Representation
Hierarchical representation learning aims to capture the nested structure of data, mirroring the way humans organize information, in order to improve model performance and interpretability. Current research focuses on novel architectures, such as hierarchical transformers and energy-based models, and on training algorithms like contrastive learning and variational Bayes, to learn these representations effectively across diverse data types, including images, text, and time series. This work matters because better hierarchical representations yield more robust, efficient, and explainable models, with applications ranging from medical image analysis and recommendation systems to robotics and music generation.
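To make the idea concrete, the minimal sketch below shows one common pattern mentioned above: a two-level encoder trained with a contrastive (InfoNCE-style) objective applied at both a fine-grained and a coarse level of the hierarchy. This is an illustrative example only, assuming PyTorch; the module and function names (`TwoLevelEncoder`, `info_nce`) are hypothetical and not drawn from any specific paper in this collection.

```python
# Illustrative sketch only: a two-level "hierarchical" encoder trained with a
# simple contrastive (InfoNCE-style) loss. Names here are hypothetical and not
# taken from any particular paper listed on this page.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoLevelEncoder(nn.Module):
    """Maps inputs to a fine-grained (low-level) and a coarse (high-level) embedding."""

    def __init__(self, in_dim: int = 128, low_dim: int = 64, high_dim: int = 16):
        super().__init__()
        self.low = nn.Sequential(nn.Linear(in_dim, low_dim), nn.ReLU())
        # The high-level head compresses the low-level code into a smaller space,
        # playing the role of a coarser node in the representation hierarchy.
        self.high = nn.Linear(low_dim, high_dim)

    def forward(self, x: torch.Tensor):
        z_low = self.low(x)
        z_high = self.high(z_low)
        return z_low, z_high


def info_nce(a: torch.Tensor, b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss: matching rows of `a` and `b` are positives, all others negatives."""
    a, b = F.normalize(a, dim=-1), F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    encoder = TwoLevelEncoder()
    x = torch.randn(32, 128)                 # a batch of inputs
    x_aug = x + 0.05 * torch.randn_like(x)   # a lightweight "augmented" view

    z_low, z_high = encoder(x)
    z_low_aug, z_high_aug = encoder(x_aug)

    # Apply the contrastive objective at both levels so that coarse and fine
    # embeddings are each invariant to the augmentation.
    loss = info_nce(z_low, z_low_aug) + info_nce(z_high, z_high_aug)
    loss.backward()
    print(f"combined hierarchical contrastive loss: {loss.item():.4f}")
```

Real systems differ mainly in how the hierarchy is defined (e.g., attention pooling in hierarchical transformers, or latent layers in variational models) and in how the per-level objectives are weighted.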