Texture Synthesis
Texture synthesis aims to generate realistic textures, from exemplar images or textual descriptions, for applications such as 3D modeling and image editing. Current research relies heavily on diffusion models, often combined with ControlNets and attention mechanisms, to improve texture consistency, detail, and alignment with the underlying geometry, particularly in 3D settings. The field underpins computer graphics, virtual reality, and other domains that demand high-fidelity textures, with recent work targeting faster synthesis, greater realism, and finer user control over the generation process. Developing robust evaluation metrics remains an active area of research.
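To make the diffusion-plus-ControlNet recipe concrete, below is a minimal sketch of generating a texture view conditioned on scene geometry via a depth ControlNet, using the Hugging Face diffusers library. This is an illustrative pattern, not the pipeline of any specific paper listed here; the model IDs, file paths, and prompt are assumptions chosen for the example.

import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from diffusers.utils import load_image

# A ControlNet trained on depth maps keeps the generated texture aligned
# with the underlying geometry (here, a depth rendering of a 3D asset).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# Depth map rendered from the target mesh or scene (hypothetical path).
depth_map = load_image("renders/living_room_depth.png")

# The text prompt gives the user control over texture appearance, while
# the depth condition enforces alignment with the geometry.
texture_view = pipe(
    prompt="weathered oak parquet floor, photorealistic, 4k texture",
    image=depth_map,
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
texture_view.save("texture_view.png")

In practice, 3D texturing systems run such geometry-conditioned generation from multiple viewpoints and fuse the results into a UV texture map, with attention or consistency losses reconciling the views; the single-view call above is only the core conditioning step.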
Papers
SceneTex: High-Quality Texture Synthesis for Indoor Scenes via Diffusion Priors
Dave Zhenyu Chen, Haoxuan Li, Hsin-Ying Lee, Sergey Tulyakov, Matthias Nießner
ConTex-Human: Free-View Rendering of Human from a Single Image with Texture-Consistent Synthesis
Xiangjun Gao, Xiaoyu Li, Chaopeng Zhang, Qi Zhang, Yanpei Cao, Ying Shan, Long Quan
Learning in a Single Domain for Non-Stationary Multi-Texture Synthesis
Xudong Xie, Zhen Zhu, Zijie Wu, Zhiliang Xu, Yingying Zhu
Text-guided High-definition Consistency Texture Model
Zhibin Tang, Tiantong He
Reference-based OCT Angiogram Super-resolution with Learnable Texture Generation
Yuyan Ruan, Dawei Yang, Ziqi Tang, An Ran Ran, Carol Y. Cheung, Hao Chen