Supervised Autoencoder
Supervised autoencoders are neural networks that learn to reconstruct input data (e.g., images, time series, 3D models) through a compressed latent representation while also optimizing a supervised objective, such as label prediction, on that representation. They are commonly used for dimensionality reduction, feature extraction, and anomaly detection. Current research emphasizes novel architectures, such as Kolmogorov-Arnold Networks and hierarchical autoencoders, and the integration of autoencoders with techniques like diffusion models and contrastive learning to improve reconstruction quality and downstream task performance. Applications span diverse fields, from improving network throughput in autonomous vehicles to enhancing image generation and analysis in astronomy and medical imaging, demonstrating the broad utility of supervised autoencoders in data processing and analysis.
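To make the joint objective concrete, here is a minimal PyTorch sketch of a supervised autoencoder: the encoder's latent code is decoded for reconstruction and simultaneously fed to a small prediction head, and the two losses are optimized together. The layer sizes, loss weighting, and toy data are illustrative assumptions, not taken from any of the papers listed below.

```python
import torch
import torch.nn as nn

class SupervisedAutoencoder(nn.Module):
    """Autoencoder whose latent code also feeds a supervised prediction head."""
    def __init__(self, in_dim=784, latent_dim=32, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, latent_dim)
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, in_dim)
        )
        self.classifier = nn.Linear(latent_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)                     # compressed latent representation
        return self.decoder(z), self.classifier(z)

model = SupervisedAutoencoder()
recon_loss = nn.MSELoss()
clf_loss = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(64, 784)            # toy batch of flattened inputs
y = torch.randint(0, 10, (64,))     # toy class labels

x_hat, logits = model(x)
# Joint objective: reconstruction plus supervised term (0.5 weight is arbitrary).
loss = recon_loss(x_hat, x) + 0.5 * clf_loss(logits, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The supervised term acts as a regularizer on the latent space, encouraging codes that are both reconstructive and discriminative; the relative weighting of the two losses is a tunable hyperparameter.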
Papers
Position Prediction as an Effective Pretraining Strategy
Shuangfei Zhai, Navdeep Jaitly, Jason Ramapuram, Dan Busbridge, Tatiana Likhomanenko, Joseph Yitan Cheng, Walter Talbott, Chen Huang, Hanlin Goh, Joshua Susskind
Learning Flexible Translation between Robot Actions and Language Descriptions
Ozan Özdemir, Matthias Kerzel, Cornelius Weber, Jae Hee Lee, Stefan Wermter
Bootstrapped Masked Autoencoders for Vision BERT Pretraining
Xiaoyi Dong, Jianmin Bao, Ting Zhang, Dongdong Chen, Weiming Zhang, Lu Yuan, Dong Chen, Fang Wen, Nenghai Yu
A Meta-learning Formulation of the Autoencoder Problem for Non-linear Dimensionality Reduction
Andrey A. Popov, Arash Sarshar, Austin Chennault, Adrian Sandu