Wasserstein Autoencoders
Wasserstein autoencoders (WAEs) are generative models that use the Wasserstein distance to address limitations of traditional variational autoencoders (VAEs). Current research focuses on improving WAE efficiency and interpretability, particularly by analyzing their statistical properties and by developing novel architectures, such as those incorporating merge trees or Gromov-Wasserstein distances, to handle complex data structures. This work aims to provide stronger theoretical guarantees on WAE performance and to enable applications in diverse fields, including dimensionality reduction, data compression, and scientific modeling, as demonstrated by their use in areas such as inertial confinement fusion. The development of efficient algorithms, such as amortized projection optimization for sliced Wasserstein distances, is another key direction of investigation.
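To make the sliced Wasserstein distance mentioned above concrete, the sketch below gives a minimal Monte-Carlo estimator in NumPy: both point clouds are projected onto random one-dimensional directions, where the Wasserstein distance reduces to comparing sorted projections. In a WAE-style setup this distance can serve as the regularizer that pushes encoded latent codes toward a prior. The function name, shapes, and sample data here are illustrative assumptions, not any specific paper's implementation.

```python
import numpy as np

def sliced_wasserstein_distance(x, y, n_projections=50, rng=None):
    """Monte-Carlo estimate of the sliced 2-Wasserstein distance.

    x, y: arrays of shape (n_samples, dim) holding i.i.d. samples
    from the two distributions being compared (equal sample counts).
    """
    rng = np.random.default_rng(rng)
    dim = x.shape[1]
    # Draw random unit vectors (directions on the sphere).
    theta = rng.normal(size=(n_projections, dim))
    theta /= np.linalg.norm(theta, axis=1, keepdims=True)
    # Project samples onto each direction: shape (n_samples, n_projections).
    xp = x @ theta.T
    yp = y @ theta.T
    # In 1-D, the W2 distance between empirical measures with equal
    # sample counts is the L2 gap between the sorted samples.
    xp.sort(axis=0)
    yp.sort(axis=0)
    return np.sqrt(np.mean((xp - yp) ** 2))

# Hypothetical usage: latent codes matching a standard normal prior
# should score lower than codes shifted away from it.
prior = np.random.default_rng(0).normal(size=(256, 8))
codes = np.random.default_rng(1).normal(size=(256, 8))
shifted = codes + 3.0
d_match = sliced_wasserstein_distance(codes, prior, rng=2)
d_shift = sliced_wasserstein_distance(shifted, prior, rng=2)
```

Averaging one-dimensional projections is what makes this estimator cheap relative to solving a full optimal-transport problem; amortized projection optimization, mentioned above, goes further by learning informative projection directions instead of sampling them uniformly.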