Latent Variable
Latent variable modeling aims to uncover hidden factors underlying observed data, improving our understanding of complex systems and enabling more accurate predictions. Current research focuses on robust and efficient algorithms for inferring these latent variables, particularly within variational autoencoders (VAEs), diffusion models, and generative adversarial networks (GANs), often incorporating techniques such as disentanglement and causal discovery. These advances are influencing diverse fields, from medical diagnostics (integrating genomic and imaging data) to recommender systems (mitigating bias) and neuroscience (interpreting neural activity), by providing more interpretable and informative representations of complex datasets.
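In a VAE, latent variables are inferred by an encoder that outputs the parameters of an approximate posterior, and sampling is made differentiable via the reparameterization trick. A minimal sketch of that trick, using a toy linear encoder and hypothetical dimensions in place of a trained neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    """Toy linear 'encoder': maps an observation x to the parameters
    (mean, log-variance) of a diagonal Gaussian over the latent z.
    In a real VAE these would be neural networks learned from data."""
    return W_mu @ x, W_logvar @ x

def sample_latent(mu, logvar, rng):
    """Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    which keeps the sample differentiable w.r.t. mu and logvar so the
    encoder can be trained by gradient descent."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

# Hypothetical sizes: 4-dimensional observation, 2-dimensional latent space.
x = rng.standard_normal(4)
W_mu = rng.standard_normal((2, 4)) * 0.1
W_logvar = rng.standard_normal((2, 4)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = sample_latent(mu, logvar, rng)
print(z.shape)  # latent code for x, shape (2,)
```

The weight matrices and dimensions here are illustrative only; the point is that the stochastic latent variable `z` is expressed as a deterministic function of the posterior parameters plus independent noise.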
Papers
Fair In-Context Learning via Latent Concept Variables
Karuna Bhaila, Minh-Hao Van, Kennedy Edemacu, Chen Zhao, Feng Chen, Xintao Wu
Combining Induction and Transduction for Abstract Reasoning
Wen-Ding Li, Keya Hu, Carter Larsen, Yuqing Wu, Simon Alford, Caleb Woo, Spencer M. Dunn, Hao Tang, Michelangelo Naim, Dat Nguyen, Wei-Long Zheng, Zenna Tavares, Yewen Pu, Kevin Ellis
Cognitive phantoms in LLMs through the lens of latent variables
Sanne Peereboom, Inga Schwabe, Bennett Kleinberg
Half-VAE: An Encoder-Free VAE to Bypass Explicit Inverse Mapping
Yuan-Hao Wei, Yan-Jie Sun, Chen Zhang
On Evaluation of Vision Datasets and Models using Human Competency Frameworks
Rahul Ramachandran, Tejal Kulkarni, Charchit Sharma, Deepak Vijaykeerthy, Vineeth N Balasubramanian