Intermediate Latent Representations
Intermediate latent representations are compressed features that machine learning models learn to extract from data, capturing essential information while reducing dimensionality. Current research focuses on improving the efficiency and interpretability of these representations, employing techniques such as variational autoencoders, diffusion models, and tensor factorization within architectures including LLMs, LVMs, and GANs. This work is significant for enhancing model efficiency, improving generalization, mitigating bias, and enabling explainability in applications ranging from image generation and processing to personalized federated learning and healthcare. Developing robust, interpretable latent spaces is therefore central to progress across these fields.
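The core idea — a model compressing input into a lower-dimensional latent code from which the original can be approximately reconstructed — can be sketched with a toy linear autoencoder. This is a minimal illustration, not any paper's method: the 4→2 encoder and 2→4 decoder weights below are hand-picked constants standing in for learned parameters.

```python
# Toy linear autoencoder: a 4-d input is compressed into a 2-d
# intermediate latent code, then reconstructed from that code.
# Weights are illustrative constants, not trained values.

def encode(x, W_enc):
    # z[j] = sum_i x[i] * W_enc[i][j]  (linear projection to the latent space)
    return [sum(x[i] * W_enc[i][j] for i in range(len(x)))
            for j in range(len(W_enc[0]))]

def decode(z, W_dec):
    # x_hat[i] = sum_j z[j] * W_dec[j][i]  (linear map back to input space)
    return [sum(z[j] * W_dec[j][i] for j in range(len(z)))
            for i in range(len(W_dec[0]))]

# Hypothetical weights: the latent averages two pairs of coordinates.
W_enc = [[0.5, 0.0], [0.5, 0.0], [0.0, 0.5], [0.0, 0.5]]
W_dec = [[1.0, 1.0, 0.0, 0.0], [0.0, 0.0, 1.0, 1.0]]

x = [1.0, 1.0, 2.0, 2.0]
z = encode(x, W_enc)       # 2-d latent code: [1.0, 2.0]
x_hat = decode(z, W_dec)   # reconstruction: [1.0, 1.0, 2.0, 2.0]
print(z, x_hat)
```

Because this input lies in the 2-d subspace the latent captures, reconstruction is exact; in general the latent discards information, and techniques like VAEs additionally regularize the latent space so nearby codes decode to similar outputs.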
Papers
Latent Safety-Constrained Policy Approach for Safe Offline Reinforcement Learning
Prajwal Koirala, Zhanhong Jiang, Soumik Sarkar, Cody Fleming
LatentQA: Teaching LLMs to Decode Activations Into Natural Language
Alexander Pan, Lijie Chen, Jacob Steinhardt
RealOSR: Latent Unfolding Boosting Diffusion-based Real-world Omnidirectional Image Super-Resolution
Xuhan Sheng, Runyi Li, Bin Chen, Weiqi Li, Xu Jiang, Jian Zhang