Paper ID: 2204.03842

From 2D Images to 3D Model: Weakly Supervised Multi-View Face Reconstruction with Deep Fusion

Weiguang Zhao, Chaolong Yang, Jianan Ye, Rui Zhang, Yuyao Yan, Xi Yang, Bin Dong, Amir Hussain, Kaizhu Huang

While weakly supervised multi-view face reconstruction (MVR) is garnering increased attention, one critical issue remains open: how to effectively fuse information from multiple images to reconstruct high-precision 3D models. To this end, we propose a novel model called Deep Fusion MVR (DF-MVR) to reconstruct high-precision 3D facial shapes from multi-view images. Specifically, we introduce MulEn-Unet, a multi-view-encoding, single-decoding framework with skip connections and attention, which extracts, integrates, and compensates deep features across the multi-view images. Furthermore, we adopt the involution kernel to enrich the fused deep features with channel information. In addition, we develop a face-parsing network to learn, identify, and emphasize the critical common face regions shared across the multi-view images. Experiments on the Pixel-Face and Bosphorus datasets demonstrate the superiority of our model: without any 3D annotation, DF-MVR achieves RMSE improvements of 5.2% and 3.0% over existing weakly supervised MVR methods on Pixel-Face and Bosphorus, respectively. Code will be made publicly available at https://github.com/weiguangzhao/DF_MVR.
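The core idea of the abstract, encoding each view with shared weights and fusing the resulting features with attention before a single decoding pass, can be sketched as follows. This is a minimal NumPy illustration, not the paper's MulEn-Unet: the linear encoder, the norm-based attention scores, and all variable names here are simplifying assumptions for exposition only.

```python
import numpy as np

def encode(view, W):
    # Shared-weight per-view encoder: a single linear layer + ReLU
    # (stand-in for the paper's convolutional multi-view encoders).
    return np.maximum(W @ view, 0.0)

def fuse_with_attention(feats):
    # Score each view's feature (here: by L2 norm, an assumption),
    # softmax the scores into attention weights, and take the
    # weighted sum as the fused feature passed to the single decoder.
    scores = np.array([np.linalg.norm(f) for f in feats])
    w = np.exp(scores - scores.max())
    w /= w.sum()
    fused = sum(wi * f for wi, f in zip(w, feats))
    return fused, w

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))                     # shared encoder weights
views = [rng.standard_normal(8) for _ in range(3)]  # e.g. left/front/right views
feats = [encode(v, W) for v in views]
fused, weights = fuse_with_attention(feats)
```

The key property this sketch preserves is that all views share one encoder, while the attention weights let the fusion step emphasize the more informative views before decoding.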

Submitted: Apr 8, 2022