Sketch to 3D
Sketch-to-3D research aims to automatically generate three-dimensional models from user-provided sketches, bridging the gap between intuitive design and digital fabrication. Current approaches leverage various deep learning architectures, including diffusion models, differentiable rendering frameworks, and generative adversarial networks, often incorporating techniques like depth prediction and style-consistent guidance to improve realism and fidelity. This field is significant for accelerating 3D content creation across diverse applications, from architectural design and urban planning to virtual and augmented reality experiences, by offering a more accessible and efficient alternative to traditional CAD methods.
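To make the differentiable-rendering idea concrete, here is a toy, framework-free sketch (all names and numbers are illustrative assumptions, not from any cited paper): a disk is soft-rasterized so its image is smooth in the shape parameter, and the radius is optimized by gradient descent on an image-space loss against a target silhouette, standing in for a user's sketch. A finite-difference gradient substitutes for the autodiff a real differentiable renderer would provide.

```python
import numpy as np

def soft_disk(radius, size=32, sharpness=4.0):
    # Soft rasterization: pixel intensity falls off smoothly at the
    # disk boundary, so the rendered image is differentiable in radius.
    ys, xs = np.mgrid[0:size, 0:size]
    c = (size - 1) / 2.0
    dist = np.sqrt((xs - c) ** 2 + (ys - c) ** 2)
    return 1.0 / (1.0 + np.exp(sharpness * (dist - radius)))

def fit_radius(target, steps=300, lr=1.0, eps=1e-3):
    # Gradient descent on the radius using a finite-difference
    # gradient of the image-space MSE loss (a stand-in for autodiff).
    r = 4.0
    loss = lambda rr: np.mean((soft_disk(rr) - target) ** 2)
    for _ in range(steps):
        grad = (loss(r + eps) - loss(r - eps)) / (2 * eps)
        r -= lr * grad
    return r

target = soft_disk(10.0)        # stand-in for a user's sketch silhouette
recovered = fit_radius(target)  # optimization drives the radius toward 10
```

Real systems replace the single radius with mesh vertices, SDF weights, or neural-network parameters, and replace the silhouette loss with depth, normal, or style-consistency terms, but the optimize-through-the-renderer loop is the same.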