Latent Point Diffusion Model
Latent point diffusion models are generative models that run a diffusion process in a learned latent space to synthesize high-quality 3D point clouds and meshes. Current research focuses on improving generation quality, diversity, and controllability, typically by pairing a variational autoencoder (VAE) with a denoising diffusion probabilistic model (DDPM) trained in the VAE's latent space, and by incorporating techniques such as frequency rectification and hierarchical latent spaces. These advances are impacting fields including medical image segmentation, adversarial patch generation, and continual learning by enabling more efficient and effective data representation and generation.
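The sketch below illustrates the generic VAE-plus-DDPM recipe described above: a point-cloud VAE compresses an (N, 3) point cloud into a compact latent, and a DDPM is trained to denoise samples of that latent. This is a minimal, assumption-laden illustration of the general technique, not the implementation from any of the papers listed here; all names (PointCloudVAE, LatentDenoiser, ddpm_loss) are hypothetical, and the noise schedule and epsilon-prediction objective follow the standard DDPM formulation.

```python
# Minimal sketch of latent point diffusion: a VAE over point clouds plus a
# DDPM trained in its latent space. All class/function names are hypothetical.
import torch
import torch.nn as nn


class PointCloudVAE(nn.Module):
    """Toy VAE: encodes an (N, 3) point cloud to a global latent and decodes back."""

    def __init__(self, num_points=2048, latent_dim=128):
        super().__init__()
        self.num_points = num_points
        # PointNet-style encoder: shared per-point MLP followed by max pooling.
        self.point_mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 256))
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, num_points * 3)
        )

    def encode(self, points):                      # points: (B, N, 3)
        feats = self.point_mlp(points).max(dim=1).values
        return self.to_mu(feats), self.to_logvar(feats)

    def decode(self, z):                           # z: (B, latent_dim)
        return self.decoder(z).view(-1, self.num_points, 3)


class LatentDenoiser(nn.Module):
    """Predicts the noise added to a latent at diffusion step t (epsilon-prediction)."""

    def __init__(self, latent_dim=128, num_steps=1000):
        super().__init__()
        self.t_embed = nn.Embedding(num_steps, latent_dim)
        self.net = nn.Sequential(
            nn.Linear(latent_dim * 2, 256), nn.SiLU(), nn.Linear(256, latent_dim)
        )

    def forward(self, z_t, t):
        return self.net(torch.cat([z_t, self.t_embed(t)], dim=-1))


def ddpm_loss(denoiser, z0, alphas_cumprod):
    """Standard DDPM objective, applied in latent space rather than on raw points."""
    t = torch.randint(0, len(alphas_cumprod), (z0.shape[0],), device=z0.device)
    noise = torch.randn_like(z0)
    a = alphas_cumprod[t].unsqueeze(-1)
    z_t = a.sqrt() * z0 + (1 - a).sqrt() * noise   # forward noising q(z_t | z_0)
    return nn.functional.mse_loss(denoiser(z_t, t), noise)


# One training step: encode a point-cloud batch, then fit the denoiser in latent space.
vae, denoiser = PointCloudVAE(), LatentDenoiser()
betas = torch.linspace(1e-4, 0.02, 1000)
alphas_cumprod = torch.cumprod(1 - betas, dim=0)
points = torch.randn(4, 2048, 3)                   # stand-in for a real batch
mu, logvar = vae.encode(points)
z0 = mu + (0.5 * logvar).exp() * torch.randn_like(mu)  # reparameterization trick
loss = ddpm_loss(denoiser, z0, alphas_cumprod)
loss.backward()
```

At sampling time the process runs in reverse: draw Gaussian noise in the latent space, iterate the denoiser over the diffusion steps, and pass the resulting clean latent through the VAE decoder to obtain a point cloud. Hierarchical variants extend this by diffusing both a global shape latent and per-point latents.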
Papers
FrePolad: Frequency-Rectified Point Latent Diffusion for Point Cloud Generation
Chenliang Zhou, Fangcheng Zhong, Param Hanji, Zhilin Guo, Kyle Fogarty, Alejandro Sztrajman, Hongyun Gao, Cengiz Oztireli
LION: Latent Point Diffusion Models for 3D Shape Generation
Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, Karsten Kreis