Paper ID: 2302.03397

AniPixel: Towards Animatable Pixel-Aligned Human Avatar

Jinlong Fan, Jing Zhang, Zhi Hou, Dacheng Tao

Although human reconstruction typically yields subject-specific avatars, recent 3D scene reconstruction techniques that use pixel-aligned features show promise in generalizing to new scenes. Applying these techniques to human avatar reconstruction can produce a volumetric avatar that generalizes, but one with limited animatability, since such representations can only be rendered in a static pose. In this paper, we propose AniPixel, a novel animatable and generalizable human avatar reconstruction method that leverages pixel-aligned features for body geometry prediction and RGB color blending. Technically, to align the canonical space with the target space and the observation space, we propose a bidirectional neural skinning field based on skeleton-driven deformation that establishes target-to-canonical and canonical-to-observation correspondences. We then disentangle the canonical body geometry into a normalized neutral-sized body and a subject-specific residual for better generalizability. Since geometry and appearance are closely related, we introduce pixel-aligned features to facilitate body geometry prediction and detailed surface normals to reinforce RGB color blending. We also devise a pose-dependent, view-direction-dependent shading module to represent local illumination variance. Experiments show that AniPixel renders novel views on par with state-of-the-art methods while delivering better novel-pose animation results.
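To make the bidirectional skinning idea concrete, the following is a minimal sketch of skeleton-driven linear blend skinning applied in both directions: target-to-canonical via inverse blending, and canonical-to-observation via forward blending. The NumPy formulation, the function names, and the fixed placeholder weights standing in for the learned neural skinning field are our assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def blend_transforms(weights, joint_transforms):
    # weights: (J,) skinning weights that sum to 1
    # joint_transforms: (J, 4, 4) per-bone canonical-to-posed rigid transforms
    return np.einsum('j,jkl->kl', weights, joint_transforms)

def target_to_canonical(x_target, weights, joint_transforms):
    # Inverse linear blend skinning: warp a posed (target-space) point back
    # to the canonical pose by inverting the blended bone transform.
    T = blend_transforms(weights, joint_transforms)
    x_h = np.append(x_target, 1.0)  # homogeneous coordinates
    return (np.linalg.inv(T) @ x_h)[:3]

def canonical_to_observation(x_canonical, weights, joint_transforms_obs):
    # Forward linear blend skinning into the observation pose.
    T = blend_transforms(weights, joint_transforms_obs)
    x_h = np.append(x_canonical, 1.0)
    return (T @ x_h)[:3]

# Toy usage: two bones, a 90-degree rotation about z and the identity.
Rz = np.array([[0., -1., 0., 0.],
               [1.,  0., 0., 0.],
               [0.,  0., 1., 0.],
               [0.,  0., 0., 1.]])
bones = np.stack([Rz, np.eye(4)])  # (2, 4, 4)
w = np.array([0.7, 0.3])           # placeholder for the neural skinning field
x_t = np.array([0.5, 0.1, 0.0])    # a point sampled along a camera ray
x_c = target_to_canonical(x_t, w, bones)
x_o = canonical_to_observation(x_c, w, bones)
assert np.allclose(x_o, x_t)       # round trip with shared weights and transforms
```

In the paper's setting, the placeholder weights would instead be predicted at each query point by the neural skinning field, and the two directions may rely on separately learned weight fields, which is what makes the field bidirectional rather than a single shared set of weights as in this toy round trip.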

Submitted: Feb 7, 2023