Latent Variable
Latent variable modeling aims to uncover hidden factors underlying observed data, improving our understanding of complex systems and enabling more accurate predictions. Current research focuses on robust and efficient algorithms for inferring these latent variables, particularly within variational autoencoders (VAEs), diffusion models, and generative adversarial networks (GANs), often incorporating techniques such as disentanglement and causal discovery. By providing more interpretable and informative representations of complex datasets, these advances are impacting diverse fields, from medical diagnostics (integrating genomic and imaging data) to recommender systems (mitigating bias) and neuroscience (interpreting neural activity).
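To make the inference idea above concrete, the sketch below shows a minimal variational autoencoder: an encoder amortizes inference of the latent variables (a Gaussian posterior over z), and training maximizes the evidence lower bound (ELBO). This is an illustrative assumption-laden example, not the method of any paper listed here; all layer sizes and names are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):
        super().__init__()
        # Encoder q(z|x): maps data to the mean and log-variance of the latent posterior.
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        # Decoder p(x|z): maps latent codes back to data space (logits).
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim))

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I), keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def neg_elbo(x, x_logits, mu, logvar):
    # Negative ELBO = reconstruction loss + KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

if __name__ == "__main__":
    model = VAE()
    x = torch.rand(32, 784)  # stand-in batch of image-like data in [0, 1]
    x_logits, mu, logvar = model(x)
    loss = neg_elbo(x, x_logits, mu, logvar)
    loss.backward()
    print(f"negative ELBO on a random batch: {loss.item():.1f}")
```

In practice this loop would be wrapped with an optimizer and real data; disentanglement and causal-discovery variants mentioned above typically modify the prior, the KL weighting, or the structure imposed on z.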
Papers
Everything that can be learned about a causal structure with latent variables by observational and interventional probing schemes
Marina Maciel Ansanelli, Elie Wolfe, Robert W. Spekkens
On Statistical Rates and Provably Efficient Criteria of Latent Diffusion Transformers (DiTs)
Jerry Yao-Chieh Hu, Weimin Wu, Zhao Song, Han Liu
Latent Variable Sequence Identification for Cognitive Models with Neural Bayes Estimation
Ti-Fen Pan, Jing-Jing Li, Bill Thompson, Anne Collins
Encoder-Decoder Neural Networks in Interpretation of X-ray Spectra
Jalmari Passilahti, Anton Vladyka, Johannes Niskanen
Causal Inference with Latent Variables: Recent Advances and Future Prospectives
Yaochen Zhu, Yinhan He, Jing Ma, Mengxuan Hu, Sheng Li, Jundong Li