Sketch-Based 3D Shape Retrieval
Sketch-based 3D shape retrieval aims to find 3D models that match a user's hand-drawn sketch, bridging the significant visual gap between 2D sketches and 3D representations. Current research focuses on improving robustness to noisy or varied sketch styles, often employing deep learning architectures such as Vision Transformers (ViTs) and ResNets, which may be fine-tuned on specific shape classes or combined with generative adversarial networks (GANs) for zero-shot retrieval of unseen object categories. These advances address limitations in handling sketch variability and unseen categories, improving both the accuracy and the efficiency of 3D model search from intuitive 2D input. The field has significant implications for applications including CAD design, virtual reality, and computer-aided manufacturing.
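A common way to realize such a system is to embed sketches and 3D shapes (via multi-view renderings) into a shared space and rank shapes by similarity to the query sketch. The following is a minimal illustrative sketch of that pipeline, assuming ResNet-50 image encoders, mean-pooling over rendered views, and cosine-similarity ranking; the encoder choices, function names, and view-pooling strategy are assumptions for illustration, not a specific published method.

```python
# Illustrative cross-modal retrieval sketch (assumed design, not a specific paper's method):
# a sketch encoder and a multi-view shape encoder map inputs into a shared
# embedding space; retrieval ranks gallery shapes by cosine similarity.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50


class ImageEncoder(nn.Module):
    """ResNet-50 backbone followed by a projection into the shared embedding space."""

    def __init__(self, embed_dim: int = 256):
        super().__init__()
        backbone = resnet50(weights=None)   # pretrained weights could be loaded here
        backbone.fc = nn.Identity()         # keep the 2048-d pooled features
        self.backbone = backbone
        self.proj = nn.Linear(2048, embed_dim)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(images)                       # (B, 2048)
        return F.normalize(self.proj(feats), dim=-1)        # unit-length embeddings


def embed_shape_views(encoder: ImageEncoder, views: torch.Tensor) -> torch.Tensor:
    """Encode multi-view renderings of shape (B, V, 3, H, W) and mean-pool over views."""
    b, v, c, h, w = views.shape
    view_emb = encoder(views.reshape(b * v, c, h, w)).reshape(b, v, -1)
    return F.normalize(view_emb.mean(dim=1), dim=-1)        # one embedding per shape


@torch.no_grad()
def retrieve(sketch_enc, shape_enc, sketch, shape_views, top_k=5):
    """Rank gallery shapes by cosine similarity to a single query sketch."""
    query = sketch_enc(sketch.unsqueeze(0))                  # (1, D)
    gallery = embed_shape_views(shape_enc, shape_views)      # (N, D)
    sims = query @ gallery.t()                               # cosine similarities (1, N)
    return sims.topk(k=min(top_k, gallery.size(0)), dim=-1)


if __name__ == "__main__":
    sketch_enc, shape_enc = ImageEncoder().eval(), ImageEncoder().eval()
    sketch = torch.randn(3, 224, 224)             # dummy query sketch
    shapes = torch.randn(10, 12, 3, 224, 224)     # 10 shapes x 12 rendered views each
    scores, indices = retrieve(sketch_enc, shape_enc, sketch, shapes)
    print("top matches:", indices.tolist(), scores.tolist())
```

In practice the two encoders would be trained jointly (e.g., with a contrastive or triplet loss) so that a sketch and renderings of the matching shape land close together in the shared space; the random weights above only demonstrate the retrieval mechanics.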