Supervised Autoencoder
Supervised autoencoders are neural networks that learn a compressed latent representation of input data (e.g., images, time series, 3D models) by reconstructing the input while also optimizing a supervised objective such as label prediction; they are commonly used for dimensionality reduction, feature extraction, and anomaly detection. Current research emphasizes developing novel architectures such as Kolmogorov-Arnold Networks and hierarchical autoencoders, and integrating autoencoders with other techniques such as diffusion models and contrastive learning to improve reconstruction quality and downstream task performance. Applications span diverse fields, from improving network throughput in autonomous vehicles to image generation and analysis in astronomy and medical imaging, demonstrating the broad utility of these models in data processing and analysis. A brief illustrative sketch of the basic idea follows.
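To make the concept concrete, here is a minimal PyTorch sketch of a supervised autoencoder: an encoder compresses the input, a decoder reconstructs it, and a small classification head is trained on the same latent code, with the reconstruction and supervised losses combined. The layer sizes, the 0.5 loss weight, and the random toy data are illustrative assumptions and are not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn

class SupervisedAutoencoder(nn.Module):
    """Autoencoder with an auxiliary classifier attached to the latent code."""

    def __init__(self, input_dim=784, latent_dim=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)          # compressed latent representation
        x_hat = self.decoder(z)      # reconstruction of the input
        logits = self.classifier(z)  # supervised prediction from the latent code
        return x_hat, logits, z

# Toy training loop on random data (illustrative only; hypothetical hyperparameters).
model = SupervisedAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
recon_loss_fn = nn.MSELoss()
cls_loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, 784)            # batch of flattened inputs
y = torch.randint(0, 10, (64,))     # class labels

for step in range(100):
    x_hat, logits, _ = model(x)
    # Joint objective: reconstruction error plus a weighted supervised term.
    loss = recon_loss_fn(x_hat, x) + 0.5 * cls_loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After training, the latent code z can be reused as a low-dimensional feature for downstream tasks, and the reconstruction error can serve as an anomaly score, which are the typical usage patterns behind the applications mentioned above.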
Papers
Anomaly Detection in OKTA Logs using Autoencoders
Jericho Cain, Hayden Beadles, Karthik Venkatesan
SCAR: Sparse Conditioned Autoencoders for Concept Detection and Steering in LLMs
Ruben Härle, Felix Friedrich, Manuel Brack, Björn Deiseroth, Patrick Schramowski, Kristian Kersting
Mixed Effects Deep Learning Autoencoder for interpretable analysis of single cell RNA Sequencing data
Aixa X. Andrade, Son Nguyen, Albert Montillo
Unpacking SDXL Turbo: Interpreting Text-to-Image Models with Sparse Autoencoders
Viacheslav Surkov, Chris Wendler, Mikhail Terekhov, Justin Deschenaux, Robert West, Caglar Gulcehre
SeriesGAN: Time Series Generation via Adversarial and Autoregressive Learning
MohammadReza EskandariNasab, Shah Muhammad Hamdi, Soukaina Filali Boubrahimi
Simultaneous Unlearning of Multiple Protected User Attributes From Variational Autoencoder Recommenders Using Adversarial Training
Gustavo Escobedo, Christian Ganhör, Stefan Brandl, Mirjam Augstein, Markus Schedl