Supervised Autoencoder
Supervised autoencoders are neural networks that combine a reconstruction objective with a supervised prediction loss: an encoder compresses the input (e.g., images, time series, 3D models) into a latent representation, from which a decoder reconstructs the input while an auxiliary head predicts labels. They are commonly used for dimensionality reduction, feature extraction, and anomaly detection. Current research emphasizes novel architectures such as Kolmogorov-Arnold Networks and hierarchical autoencoders, and integrates autoencoders with techniques such as diffusion models and contrastive learning to improve reconstruction quality and downstream task performance. Applications span diverse fields, from improving network throughput in autonomous vehicles to enhancing image generation and analysis in astronomy and medical imaging, demonstrating the broad utility of supervised autoencoders in data processing and analysis.
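The joint objective described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the method of any of the papers below: the dimensions, weight initialization, and the weighting factor `lam` are arbitrary assumptions chosen for readability. The key point is that a single forward pass through the encoder produces a latent code that feeds both a decoder (reconstruction term) and a classifier head (supervised term), and the training loss is their weighted sum.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions and data (illustrative assumptions, not from any paper)
d_in, d_latent, n_classes, n = 8, 3, 2, 16
X = rng.normal(size=(n, d_in))
y = rng.integers(0, n_classes, size=n)

# Encoder, decoder, and supervised classifier-head weights
W_enc = rng.normal(scale=0.1, size=(d_in, d_latent))
W_dec = rng.normal(scale=0.1, size=(d_latent, d_in))
W_cls = rng.normal(scale=0.1, size=(d_latent, n_classes))

def forward(X):
    z = np.tanh(X @ W_enc)   # compressed latent representation
    x_hat = z @ W_dec        # reconstruction of the input
    logits = z @ W_cls       # class scores predicted from the latent code
    return z, x_hat, logits

def supervised_ae_loss(X, y, lam=1.0):
    """Joint objective: reconstruction MSE + lam * cross-entropy."""
    _, x_hat, logits = forward(X)
    recon = np.mean((X - x_hat) ** 2)
    # Numerically stable softmax cross-entropy (the supervised term)
    logits = logits - logits.max(axis=1, keepdims=True)
    log_p = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    ce = -np.mean(log_p[np.arange(len(y)), y])
    return recon + lam * ce

loss = supervised_ae_loss(X, y)
```

Setting `lam=0` recovers a plain (unsupervised) autoencoder; increasing it pushes the latent space toward features that are also discriminative for the labels.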
Papers
Quantised Global Autoencoder: A Holistic Approach to Representing Visual Data
Tim Elsner, Paula Usinger, Victor Czech, Gregor Kobsik, Yanjiang He, Isaak Lim, Leif Kobbelt
Universal Sound Separation with Self-Supervised Audio Masked Autoencoder
Junqi Zhao, Xubo Liu, Jinzheng Zhao, Yi Yuan, Qiuqiang Kong, Mark D. Plumbley, Wenwu Wang
Global atmospheric data assimilation with multi-modal masked autoencoders
Thomas J. Vandal, Kate Duffy, Daniel McDuff, Yoni Nachmany, Chris Hartshorn
Hack Me If You Can: Aggregating AutoEncoders for Countering Persistent Access Threats Within Highly Imbalanced Data
Sidahmed Benabderrahmane, Ngoc Hoang, Petko Valtchev, James Cheney, Talal Rahwan
Autoencoder based approach for the mitigation of spurious correlations
Srinitish Srinivasan, Karthik Seemakurthy