Supervised Autoencoder
Supervised autoencoders extend standard autoencoders, which learn to reconstruct input data (e.g., images, time series, 3D models) through a compressed latent representation, by adding a supervised objective such as label prediction, so that the learned features serve both reconstruction and downstream tasks. They are widely used for dimensionality reduction, feature extraction, and anomaly detection. Current research emphasizes novel architectures such as Kolmogorov-Arnold Networks and hierarchical autoencoders, as well as integrating autoencoders with techniques like diffusion models and contrastive learning to improve reconstruction quality and downstream task performance. Applications span diverse fields, from improving network throughput in autonomous vehicles to enhancing image generation and analysis in astronomy and medical imaging, demonstrating the broad utility of supervised autoencoders in data processing and analysis.
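To make the joint objective concrete, here is a minimal NumPy sketch of a supervised autoencoder: a single linear encoder feeds both a linear decoder (reconstruction loss) and a classifier head (cross-entropy loss), and the two terms are combined into one training loss. All layer shapes, the supervision weight `lam`, and the toy data are illustrative assumptions, not taken from any of the papers listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 32 samples, 8 features, 2 classes (illustrative sizes).
n, d, k, c = 32, 8, 3, 2
X = rng.standard_normal((n, d))
y = rng.integers(0, c, size=n)
Y = np.eye(c)[y]                      # one-hot labels

# Linear encoder, decoder, and classifier head (hypothetical minimal model).
We = 0.1 * rng.standard_normal((d, k))
Wd = 0.1 * rng.standard_normal((k, d))
Wc = 0.1 * rng.standard_normal((k, c))

lam, lr = 0.5, 0.05                   # supervision weight, learning rate
losses = []
for _ in range(200):
    # Forward pass: encode, then reconstruct and classify from the latent z.
    z = X @ We
    X_hat = z @ Wd
    logits = z @ Wc
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)

    recon = np.mean((X_hat - X) ** 2)
    ce = -np.mean(np.sum(Y * np.log(p + 1e-12), axis=1))
    losses.append(recon + lam * ce)   # joint supervised-autoencoder loss

    # Backward pass: manual gradients for the three linear layers.
    dX_hat = 2 * (X_hat - X) / X.size
    dlogits = lam * (p - Y) / n
    dWd = z.T @ dX_hat
    dWc = z.T @ dlogits
    dz = dX_hat @ Wd.T + dlogits @ Wc.T
    dWe = X.T @ dz

    We -= lr * dWe
    Wd -= lr * dWd
    Wc -= lr * dWc

print(f"joint loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Because the latent code `z` is shared, gradients from the label head flow into the encoder alongside the reconstruction gradients, which is what distinguishes this setup from an unsupervised autoencoder.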
Papers
Phase-aware Training Schedule Simplifies Learning in Flow-Based Generative Models
Santiago Aranguri, Francesco Insulla
Paired Wasserstein Autoencoders for Conditional Sampling
Moritz Piening, Matthias Chung
DFREC: DeepFake Identity Recovery Based on Identity-aware Masked Autoencoder
Peipeng Yu, Hui Gao, Zhitao Huang, Zhihua Xia, Chip-Hong Chang
Adversarial Autoencoders in Operator Learning
Dustin Enyeart, Guang Lin
Class-wise Autoencoders Measure Classification Difficulty And Detect Label Mistakes
Jacob Marks, Brent A. Griffin, Jason J. Corso
Reproduction of AdEx dynamics on neuromorphic hardware through data embedding and simulation-based inference
Jakob Huhle, Jakob Kaiser, Eric Müller, Johannes Schemmel
Transformer-based Koopman Autoencoder for Linearizing Fisher's Equation
Kanav Singh Rana, Nitu Kumari
An Automated Data Mining Framework Using Autoencoders for Feature Extraction and Dimensionality Reduction
Yaxin Liang, Xinshi Li, Xin Huang, Ziqi Zhang, Yue Yao
Extending Video Masked Autoencoders to 128 frames
Nitesh Bharadwaj Gundavarapu, Luke Friedman, Raghav Goyal, Chaitra Hegde, Eirikur Agustsson, Sagar M. Waghmare, Mikhail Sirotenko, Ming-Hsuan Yang, Tobias Weyand, Boqing Gong, Leonid Sigal
Combining Autoregressive and Autoencoder Language Models for Text Classification
João Gonçalves
Compute Optimal Inference and Provable Amortisation Gap in Sparse Autoencoders
Charles O'Neill, David Klindt