Latent Space
A latent space is a lower-dimensional representation of high-dimensional data that aims to capture its essential features while reducing computational cost and improving interpretability. Current research focuses on efficient algorithms and model architectures, such as variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion models, for learning and manipulating these latent spaces in tasks ranging from anomaly detection and image generation to controlling generative models and improving the efficiency of autonomous systems. This work has significant implications across diverse fields, enabling advances in drug discovery, autonomous driving, and cybersecurity through improved data analysis, greater model efficiency, and finer control over generative processes.
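The core idea above, compressing high-dimensional data into a low-dimensional latent code from which the data can be approximately reconstructed, can be illustrated with a minimal sketch. The example below uses PCA (equivalently, a linear autoencoder) on synthetic data; the dimensions, variable names, and data are illustrative assumptions, not drawn from any of the papers listed here, and real systems such as VAEs learn nonlinear encoders instead.

```python
# Minimal sketch of a latent space: encode 50-D data into a 2-D latent
# code via PCA (a linear autoencoder), then decode it back.
# All names, shapes, and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data: 200 samples in 50 dimensions that
# actually vary along only 3 underlying directions plus small noise.
basis = rng.normal(size=(3, 50))
data = rng.normal(size=(200, 3)) @ basis + 0.01 * rng.normal(size=(200, 50))

# Fit the encoder: top-2 principal components span the latent space.
mean = data.mean(axis=0)
_, _, vt = np.linalg.svd(data - mean, full_matrices=False)
components = vt[:2]                      # shape (2, 50)

def encode(x):
    """Map 50-D observations to 2-D latent codes."""
    return (x - mean) @ components.T

def decode(z):
    """Map 2-D latent codes back to approximate 50-D reconstructions."""
    return z @ components + mean

latent = encode(data)                    # shape (200, 2): the latent representation
recon = decode(latent)                   # approximate reconstruction from the code
print(latent.shape)
```

Downstream tasks such as anomaly detection or interpolation then operate on `latent` rather than on the raw 50-D observations.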
Papers
SpaceEditing: Integrating Human Knowledge into Deep Neural Networks via Interactive Latent Space Editing
Jiafu Wei, Ding Xia, Haoran Xie, Chia-Ming Chang, Chuntao Li, Xi Yang
Executing your Commands via Motion Diffusion in Latent Space
Xin Chen, Biao Jiang, Wen Liu, Zilong Huang, Bin Fu, Tao Chen, Jingyi Yu, Gang Yu
Realization of Causal Representation Learning to Adjust Confounding Bias in Latent Space
Jia Li, Xiang Li, Xiaowei Jia, Michael Steinbach, Vipin Kumar
On interpretability and proper latent decomposition of autoencoders
Luca Magri, Anh Khoa Doan
Clinically Plausible Pathology-Anatomy Disentanglement in Patient Brain MRI with Structured Variational Priors
Anjun Hu, Jean-Pierre R. Falet, Brennan S. Nichyporuk, Changjian Shui, Douglas L. Arnold, Sotirios A. Tsaftaris, Tal Arbel