Generative Factor

Generative factors research aims to identify and represent the underlying independent sources of variation that generate complex data. Current efforts focus on generative models, including variational autoencoders and diffusion models, that learn disentangled representations of these factors, often employing techniques such as causal flows and multiset tagging to improve interpretability and controllability. This work is significant because disentangled representations enhance model interpretability, improve generalization to unseen combinations of factors, and enable targeted manipulation of complex systems, with applications ranging from robotics to neuroscience and fair machine learning.
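To make the idea concrete, below is a minimal sketch of one common approach to learning disentangled generative factors: a beta-VAE-style objective, where an extra weight on the KL term pressures each latent dimension toward an independent prior. The architecture sizes, the beta value, and the toy data here are illustrative assumptions, not a reference implementation from any particular paper.

```python
# Minimal beta-VAE sketch for disentangled factor learning (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BetaVAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=10):
        super().__init__()
        # Encoder maps data to the parameters of a factorized Gaussian posterior;
        # ideally each latent dimension captures one generative factor.
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder reconstructs the data from the latent code.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def beta_vae_loss(x, x_recon, mu, logvar, beta=4.0):
    # Reconstruction term plus a beta-weighted KL term; beta > 1 pushes the
    # posterior toward the factorized prior, encouraging disentangled latents.
    recon = F.mse_loss(x_recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

if __name__ == "__main__":
    model = BetaVAE()
    x = torch.rand(32, 784)  # toy batch standing in for real data
    x_recon, mu, logvar = model(x)
    print(beta_vae_loss(x, x_recon, mu, logvar).item())
```

A trained model of this kind can then be probed by varying one latent dimension at a time and decoding, which is a standard way to check whether individual latents correspond to distinct generative factors.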

Papers