Latent Space
Latent space refers to a lower-dimensional representation of high-dimensional data that aims to capture its essential features while reducing computational complexity and improving interpretability. Current research focuses on efficient algorithms and model architectures, such as variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion models, for learning and manipulating these latent spaces, with applications ranging from anomaly detection and image generation to controlling generative models and improving the efficiency of autonomous systems. This work has significant implications across diverse fields: through improved data analysis, greater model efficiency, and finer control over generative processes, it enables advances in areas such as drug discovery, autonomous driving, and cybersecurity.
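As a minimal illustration of the core idea, the toy sketch below (not from any of the listed papers; the dataset and the fixed latent axis are invented for the example) encodes 2D points into a 1D latent coordinate by projecting onto a single direction, then decodes back to 2D. Because the data lies almost on a line, one latent dimension suffices for a low-error reconstruction; learned models such as VAEs replace this hand-picked linear projection with trained nonlinear encoders and decoders.

```python
import math

# Toy dataset: 2D points lying (almost) along the line y = 2x,
# so a single latent coordinate captures nearly all the variation.
points = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 7.8)]

# Hand-picked 1D latent axis: the unit vector along y = 2x.
# (A VAE would learn this mapping instead of fixing it.)
axis = (1.0 / math.sqrt(5), 2.0 / math.sqrt(5))

def encode(p):
    """Project a 2D point onto the latent axis -> one latent coordinate."""
    return p[0] * axis[0] + p[1] * axis[1]

def decode(z):
    """Map a latent coordinate back to 2D (the reconstruction)."""
    return (z * axis[0], z * axis[1])

latents = [encode(p) for p in points]          # 4 scalars instead of 4 pairs
recons = [decode(z) for z in latents]

# Reconstruction error stays small because the data is nearly 1D.
max_err = max(math.dist(p, r) for p, r in zip(points, recons))
print(len(latents), round(max_err, 3))
```

The halving of storage here (one number per point instead of two) is the same compression-versus-fidelity trade-off that latent-space methods exploit at much higher dimension, e.g. mapping images to a few hundred latent variables.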
Papers
Exploring 3D-aware Latent Spaces for Efficiently Learning Numerous Scenes
Antoine Schnepf, Karim Kassab, Jean-Yves Franceschi, Laurent Caraffa, Flavian Vasile, Jeremie Mary, Andrew Comport, Valérie Gouet-Brunet
HyperVQ: MLR-based Vector Quantization in Hyperbolic Space
Nabarun Goswami, Yusuke Mukuta, Tatsuya Harada
TruthX: Alleviating Hallucinations by Editing Large Language Models in Truthful Space
Shaolei Zhang, Tian Yu, Yang Feng
Molecule Design by Latent Prompt Transformer
Deqian Kong, Yuhao Huang, Jianwen Xie, Edouardo Honig, Ming Xu, Shuanghong Xue, Pei Lin, Sanping Zhou, Sheng Zhong, Nanning Zheng, Ying Nian Wu