Avatar Generation
Avatar generation focuses on creating realistic, animatable 3D human (and sometimes animal) models from various input modalities, including text descriptions, images, and videos. Current research relies heavily on diffusion models, often combined with neural radiance fields (NeRFs) or parametric body models such as SMPL-X, to achieve high-fidelity results and fine-grained control over attributes like pose, expression, and clothing. The field is significant for applications in virtual and augmented reality, gaming, film, and robotics, and it drives advances in both computer graphics and artificial intelligence.
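To make the role of the parametric body model concrete, the short sketch below shows how SMPL-X exposes low-dimensional shape, pose, and expression parameters that avatar-generation methods typically predict or optimize. This is a minimal illustration, assuming the open-source smplx Python package and locally downloaded SMPL-X model files; the "models/" path and the zeroed parameter values are placeholders, not details taken from the papers listed here.

# Minimal sketch (not from the papers below): posing a parametric SMPL-X body.
# Assumes the `smplx` package is installed and SMPL-X model files are in "models/".
import torch
import smplx

# Load a neutral SMPL-X model; "models/" is a hypothetical path to the model files.
model = smplx.create("models/", model_type="smplx", gender="neutral", use_pca=False)

# Low-dimensional parameters a generation method might predict or optimize:
betas = torch.zeros(1, 10)        # body shape coefficients
body_pose = torch.zeros(1, 63)    # axis-angle rotations for the 21 body joints
expression = torch.zeros(1, 10)   # facial expression coefficients

# Forward pass returns a posed mesh and 3D joints that can be animated or
# used to condition a NeRF / diffusion-based appearance model.
output = model(betas=betas, body_pose=body_pose, expression=expression,
               return_verts=True)
vertices = output.vertices        # (1, 10475, 3) posed mesh vertices
joints = output.joints            # 3D joint locations
print(vertices.shape, joints.shape)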
Papers
Disentangled Clothed Avatar Generation from Text Descriptions
Jionghao Wang, Yuan Liu, Zhiyang Dou, Zhengming Yu, Yongqing Liang, Cheng Lin, Xin Li, Wenping Wang, Rong Xie, Li Song
Reality's Canvas, Language's Brush: Crafting 3D Avatars from Monocular Video
Yuchen Rao, Eduardo Perez Pellitero, Benjamin Busam, Yiren Zhou, Jifei Song