Texture Synthesis
Texture synthesis aims to generate realistic textures, from either examples or textual descriptions, for applications such as 3D modeling and image editing. Current research relies heavily on diffusion models, often combined with techniques such as ControlNets and attention mechanisms, to improve texture consistency, detail, and alignment with the underlying geometry, particularly in 3D settings. Recent work focuses on improving the speed, realism, and user controllability of the synthesis process, which matters for computer graphics, virtual reality, and other domains that require high-fidelity textures. The development of robust evaluation metrics is also an active area of research.
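To make the depth-conditioned, ControlNet-guided approach mentioned above concrete, the sketch below uses the Hugging Face diffusers library to generate a texture image from a text prompt and a depth map. It is a minimal illustration, not the method of either listed paper: the checkpoint names and prompt are assumptions, and the synthetic depth map stands in for one rendered from an actual 3D mesh.

```python
# Hypothetical sketch: depth-conditioned texture generation with a ControlNet-guided
# diffusion model via the diffusers library. Checkpoints, prompt, and the synthetic
# depth map are illustrative assumptions.
import numpy as np
import torch
from PIL import Image
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)

# A depth ControlNet conditions the diffusion model on per-pixel depth,
# which is one common way to keep textures aligned with the underlying geometry.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)

# Placeholder depth map (a simple horizontal gradient); in practice this would be
# rendered from the target mesh for each camera view.
depth = np.tile(np.linspace(0, 255, 512, dtype=np.uint8), (512, 1))
depth_image = Image.fromarray(depth).convert("RGB")

# Text prompt plus depth conditioning yields a texture image that follows the geometry.
texture = pipe(
    "weathered red brick wall, photorealistic texture",
    image=depth_image,
    num_inference_steps=30,
).images[0]
texture.save("texture_view.png")
```

Text-to-texture systems typically apply this kind of per-view generation repeatedly, back-projecting each result into the mesh's UV map and inpainting the regions that remain unseen from earlier views.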
Papers
Generic 3D Diffusion Adapter Using Controlled Multi-View Editing
Hansheng Chen, Ruoxi Shi, Yulin Liu, Bokui Shen, Jiayuan Gu, Gordon Wetzstein, Hao Su, Leonidas Guibas
InTeX: Interactive Text-to-texture Synthesis via Unified Depth-aware Inpainting
Jiaxiang Tang, Ruijie Lu, Xiaokang Chen, Xiang Wen, Gang Zeng, Ziwei Liu