Latent Shift

Latent shift research addresses the challenges that arise when the underlying, unobserved factors driving a data distribution change between datasets or over time. Current work explores methods to disentangle these latent shifts, employing techniques such as self-training, latent vector manipulation within generative models (e.g., using neural networks to shift latent vectors in StyleGAN), and adapting models to high-dimensional data (e.g., using recognition-parametrized models). This research is crucial for improving the robustness and generalizability of machine learning models across diverse and evolving data scenarios, with applications ranging from text-to-video generation and image compression to healthcare and face anti-spoofing.
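The latent vector manipulation mentioned above typically amounts to moving a latent code along a direction in latent space. A minimal sketch of that idea follows, assuming a StyleGAN-style 512-dimensional z-space; the direction here is random for illustration only, whereas in the cited work it would be learned by a small neural network or discovered from latent codes.

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim = 512                      # z-space dimensionality assumed here

z = rng.standard_normal(latent_dim)   # a sampled latent code
d = rng.standard_normal(latent_dim)
d /= np.linalg.norm(d)                # normalize the edit direction to unit length

def shift_latent(z, direction, alpha):
    """Move a latent code along `direction` by strength `alpha`."""
    return z + alpha * direction

z_shifted = shift_latent(z, d, alpha=3.0)

# Because the direction is unit-norm, the code moves exactly |alpha| in latent space.
print(round(float(np.linalg.norm(z_shifted - z)), 3))
```

Feeding `z_shifted` (instead of `z`) through a generator is what produces the edited output; the generator itself is omitted here since the shift operation is independent of any particular model.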

Papers