3D Human Reconstruction
3D human reconstruction aims to create realistic three-dimensional models of humans from input sources such as images, videos, and sensor data. Current research relies heavily on deep learning, focusing on implicit functions, diffusion models, and transformer networks to address challenges such as occlusion, sparse input data, and the need for robust multi-modal fusion. These advances are improving the accuracy, detail, and efficiency of 3D human models, with significant implications for applications in animation, gaming, virtual reality, healthcare, and human-computer interaction. The field is also exploring novel approaches, such as thermal imaging and sketch-based reconstruction.
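The overview mentions implicit functions as one of the dominant representations. The sketch below illustrates, in broad strokes, what a pixel-aligned implicit occupancy field looks like: an MLP that maps a 3D query point plus an image feature to an occupancy probability. The architecture, dimensions, and feature pipeline are illustrative assumptions only and do not correspond to the method of any paper listed here.

```python
# Minimal sketch of a pixel-aligned implicit occupancy field for 3D human
# reconstruction. Module names, layer sizes, and the feature inputs are
# illustrative assumptions, not the approach of the papers below.
import torch
import torch.nn as nn


class ImplicitOccupancyField(nn.Module):
    """Predicts occupancy in [0, 1] for 3D query points given image features."""

    def __init__(self, feat_dim: int = 256, hidden_dim: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 3, hidden_dim),  # image feature + (x, y, z)
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(inplace=True),
            nn.Linear(hidden_dim, 1),
            nn.Sigmoid(),                          # occupancy probability
        )

    def forward(self, point_feats: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # point_feats: (B, N, feat_dim) image features sampled at the 2D
        # projections of the query points; points: (B, N, 3) in camera space.
        return self.mlp(torch.cat([point_feats, points], dim=-1)).squeeze(-1)


if __name__ == "__main__":
    model = ImplicitOccupancyField()
    feats = torch.randn(2, 1024, 256)   # placeholder pixel-aligned features
    pts = torch.rand(2, 1024, 3) - 0.5  # query points around the body
    occ = model(feats, pts)             # (2, 1024) occupancy probabilities
    print(occ.shape)
    # At inference time, occupancy is typically evaluated on a dense 3D grid
    # and a surface mesh is extracted with marching cubes.
```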
Papers
RemoCap: Disentangled Representation Learning for Motion Capture
Hongsheng Wang, Lizao Zhang, Zhangnan Zhong, Shuolin Xu, Xinrui Zhou, Shengyu Zhang, Huahao Xu, Fei Wu, Feng Lin
Gaussian Control with Hierarchical Semantic Graphs in 3D Human Recovery
Hongsheng Wang, Weiyue Zhang, Sihao Liu, Xinrui Zhou, Jing Li, Zhanyun Tang, Shengyu Zhang, Fei Wu, Feng Lin