Autoencoder Architecture
Autoencoders are neural networks that learn efficient data representations by compressing inputs into a lower-dimensional latent space and then reconstructing them. Current research focuses on adapting autoencoder architectures to specific tasks, with variants such as convolutional autoencoders, masked autoencoders, and models that incorporate attention mechanisms, Koopman operator theory, or Wasserstein-distance-based objectives. These advances are driving progress across diverse fields, from image compression and anomaly detection (e.g., in ECGs and cybersecurity) to medical image analysis and scientific machine learning, enabling more efficient data processing and improved model performance.
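The compress-then-reconstruct idea can be made concrete with a minimal sketch. The example below assumes PyTorch and uses illustrative layer sizes and a plain fully connected encoder/decoder with a mean-squared-error reconstruction loss; it is not taken from any specific paper listed here.

```python
# Minimal fully connected autoencoder sketch (PyTorch assumed; dimensions are illustrative).
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, input_dim: int = 784, latent_dim: int = 32):
        super().__init__()
        # Encoder: compress the input into a lower-dimensional latent code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstruct the input from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128),
            nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)      # latent representation
        return self.decoder(z)   # reconstruction

# Training minimizes the reconstruction error between input and output.
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

x = torch.randn(64, 784)             # stand-in for a batch of flattened inputs
reconstruction = model(x)
loss = criterion(reconstruction, x)  # reconstruction loss
loss.backward()
optimizer.step()
```

Convolutional or masked autoencoders follow the same encode/decode pattern; they differ mainly in the encoder/decoder layers (convolutions, transformer blocks) and in how the input is corrupted or masked before reconstruction.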