Latent Code
Latent codes are low-dimensional representations of data, typically learned by generative models, that capture essential features while discarding irrelevant detail. Current research focuses on disentangling these codes so that specific attributes (e.g., color, style, pose) of images, videos, and other data modalities can be controlled independently, often using variational autoencoders, diffusion models, and normalizing flows. This work is significant because it enables fine-grained control over the generative process, improving image editing and data augmentation and yielding more robust, interpretable machine learning models across diverse applications.
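As a concrete illustration of the idea, the sketch below shows how a variational autoencoder maps a high-dimensional input to a low-dimensional latent code via the reparameterization trick. This is a minimal NumPy toy, not any specific model from the papers listed here: the random linear "encoder" weights stand in for a learned neural network.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Toy linear 'encoder': maps data x to the mean and log-variance
    of a Gaussian over the latent code (a real VAE learns these maps)."""
    mu = W_mu @ x
    logvar = W_logvar @ x
    return mu, logvar

def reparameterize(mu, logvar, rng):
    """Sample a latent code z = mu + sigma * eps (reparameterization trick),
    which keeps sampling differentiable w.r.t. mu and logvar."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Toy example: a 784-dimensional "image" compressed to an 8-dimensional code.
x = rng.standard_normal(784)
W_mu = rng.standard_normal((8, 784)) * 0.01      # illustrative weights
W_logvar = rng.standard_normal((8, 784)) * 0.01  # illustrative weights
mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
print(z.shape)  # (8,): the low-dimensional latent code
```

Disentanglement research then asks that individual coordinates (or subspaces) of such a `z` correspond to independent, human-interpretable attributes.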
Papers
Self-Supervised Video Representation Learning via Latent Time Navigation
Di Yang, Yaohui Wang, Quan Kong, Antitza Dantcheva, Lorenzo Garattoni, Gianpiero Francesca, Francois Bremond
SHS-Net: Learning Signed Hyper Surfaces for Oriented Normal Estimation of Point Clouds
Qing Li, Huifang Feng, Kanle Shi, Yue Gao, Yi Fang, Yu-Shen Liu, Zhizhong Han