3D Shape
3D shape research focuses on accurately representing and manipulating three-dimensional objects from diverse data sources, with the goal of robust and efficient reconstruction, generation, and editing. Current work emphasizes deep learning models such as diffusion models, transformers, and implicit neural representations like signed distance functions, alongside explicit representations such as Gaussian splatting, often combined with point cloud processing and multi-view geometry. These advances enable more accurate 3D modeling and analysis from limited or noisy data, with significant implications for fields such as robotics, computer-aided design, medical imaging, and cultural heritage preservation.
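To make the signed distance function idea mentioned above concrete, here is a minimal sketch for a sphere: the function returns a negative value inside the shape, zero on the surface, and a positive value outside, so the surface is recovered as the zero level set. The function name `sphere_sdf` is illustrative, not from any of the papers listed below.

```python
import math

def sphere_sdf(p, radius=1.0):
    """Signed distance from point p = (x, y, z) to a sphere centered
    at the origin: negative inside, zero on the surface, positive outside."""
    x, y, z = p
    return math.sqrt(x * x + y * y + z * z) - radius

# The zero level set {p : sphere_sdf(p) == 0} is the sphere's surface.
print(sphere_sdf((0.0, 0.0, 0.0)))  # center of the unit sphere: -1.0
print(sphere_sdf((2.0, 0.0, 0.0)))  # outside: 1.0
print(sphere_sdf((1.0, 0.0, 0.0)))  # on the surface: 0.0
```

Neural variants of this idea train a network to approximate such a distance field from data, then extract the surface (e.g., via marching cubes) from its zero level set.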
Papers
Coloring the Past: Neural Historical Buildings Reconstruction from Archival Photography
David Komorowicz, Lu Sang, Ferdinand Maiwald, Daniel Cremers
ShapeGPT: 3D Shape Generation with A Unified Multi-modal Language Model
Fukun Yin, Xin Chen, Chi Zhang, Biao Jiang, Zibo Zhao, Jiayuan Fan, Gang Yu, Taihao Li, Tao Chen
StructRe: Rewriting for Structured Shape Modeling
Jiepeng Wang, Hao Pan, Yang Liu, Xin Tong, Taku Komura, Wenping Wang