Compact Latent Representations
Compact latent representations aim to efficiently encode high-dimensional data into lower-dimensional spaces, preserving crucial information while minimizing redundancy. Current research focuses on developing novel architectures, such as variational autoencoders (VAEs) and self-attention networks, to achieve this compression, often within hyperbolic spaces or through information-theoretic approaches like the information bottleneck principle. This work is significant for improving the efficiency and performance of various machine learning tasks, including image reconstruction, multimodal learning, and autonomous driving, by enabling faster processing and better generalization.
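To make the idea concrete, below is a minimal sketch of a variational autoencoder whose KL term plays the role of an information bottleneck on the latent code. It is an illustrative assumption, not the method of any paper listed here: the class name CompactVAE, the helper vae_loss, the layer sizes, the latent dimension, and the beta weight are all invented for the example, and PyTorch is assumed as the framework.

```python
# Minimal sketch (assumes PyTorch); names, sizes, and beta are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompactVAE(nn.Module):
    """Toy VAE compressing a flat input into a small latent code."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # latent mean
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # latent log-variance
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients flow through the sampling step.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar


def vae_loss(recon, x, mu, logvar, beta=1.0):
    # Reconstruction term plus a KL term acting as an information bottleneck;
    # a larger beta pressures the model toward a more compact latent code.
    recon_loss = F.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl
```

Training minimizes vae_loss over batches; the low-dimensional z is the compact latent representation that downstream tasks (reconstruction, multimodal fusion, planning) would consume.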
Papers
16 papers on this topic, dated from August 21, 2022 to November 12, 2024.