Supervised Autoencoder
Supervised autoencoders are neural networks trained to reconstruct input data (e.g., images, time series, 3D models) through a compressed latent representation, typically alongside a supervised prediction objective, and are widely used for dimensionality reduction, feature extraction, and anomaly detection. Current research emphasizes novel architectures such as Kolmogorov-Arnold Networks and hierarchical autoencoders, as well as the integration of autoencoders with techniques like diffusion models and contrastive learning to improve reconstruction quality and downstream task performance. Applications span diverse fields, from improving network throughput for autonomous vehicles to image generation and analysis in astronomy and medical imaging, demonstrating the broad utility of supervised autoencoders in data processing and analysis.
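As a rough illustration of the joint objective described above, the sketch below pairs a reconstruction loss with a classification head on the shared latent code. It assumes a PyTorch setup; the class name, layer sizes, and loss weighting are illustrative and are not taken from any of the papers listed here.

```python
# Minimal supervised autoencoder sketch (PyTorch assumed; sizes are illustrative).
# The encoder compresses the input to a latent code, the decoder reconstructs the
# input from that code, and a small classifier head on the same code supplies the
# supervised signal.
import torch
import torch.nn as nn

class SupervisedAutoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32, num_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim),
        )
        self.classifier = nn.Linear(latent_dim, num_classes)

    def forward(self, x):
        z = self.encoder(x)          # compressed latent representation
        x_hat = self.decoder(z)      # reconstruction of the input
        logits = self.classifier(z)  # supervised prediction from the latent
        return x_hat, logits

model = SupervisedAutoencoder()
x = torch.randn(8, 784)              # a batch of flattened inputs
y = torch.randint(0, 10, (8,))       # integer class labels

x_hat, logits = model(x)
# Joint objective: reconstruction error plus label loss; the 0.5 weight is arbitrary.
loss = nn.functional.mse_loss(x_hat, x) + 0.5 * nn.functional.cross_entropy(logits, y)
loss.backward()
```

In this kind of setup the supervised term regularizes the latent space toward features that are useful for the downstream task, while the reconstruction term preserves enough information to rebuild the input.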
Papers
PAME: Self-Supervised Masked Autoencoder for No-Reference Point Cloud Quality Assessment
Ziyu Shan, Yujie Zhang, Qi Yang, Haichen Yang, Yiling Xu, Shan Liu
T4P: Test-Time Training of Trajectory Prediction via Masked Autoencoder and Actor-specific Token Memory
Daehee Park, Jaeseok Jeong, Sung-Hoon Yoon, Jaewoo Jeong, Kuk-Jin Yoon
Rethinking Autoencoders for Medical Anomaly Detection from A Theoretical Perspective
Yu Cai, Hao Chen, Kwang-Ting Cheng
SELECTOR: Heterogeneous graph network with convolutional masked autoencoder for multimodal robust prediction of cancer survival
Liangrui Pan, Yijun Peng, Yan Li, Xiang Wang, Wenjuan Liu, Liwen Xu, Qingchun Liang, Shaoliang Peng