Appearance Synthesis
Appearance synthesis aims to generate realistic images or videos of humans and objects, often driven by signals such as pose, speech, or text descriptions. Current research relies heavily on generative adversarial networks (GANs) and diffusion models, frequently disentangling shape from appearance and employing 3D representations such as point clouds, Gaussian splatting, and neural radiance fields for improved realism and control. The field underpins advances in virtual and augmented reality, digital content creation, and computer vision, enabling applications from realistic avatar generation to high-fidelity video editing.
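As a concrete illustration of the diffusion-model formulation mentioned above, the sketch below implements the standard DDPM forward (noising) process, q(x_t | x_0) = sqrt(ᾱ_t)·x_0 + sqrt(1 − ᾱ_t)·ε, with a linear beta schedule. This is a generic textbook formulation, not the method of any particular paper listed here; the function names and schedule parameters are illustrative choices.

```python
import numpy as np

def make_alpha_bar(T=1000, beta_start=1e-4, beta_end=0.02):
    # Linear beta schedule; alpha_bar[t] is the cumulative product
    # of (1 - beta_s) for s <= t, so it decreases from ~1 toward 0.
    betas = np.linspace(beta_start, beta_end, T)
    return np.cumprod(1.0 - betas)

def forward_noise(x0, t, alpha_bar, rng):
    # q(x_t | x_0): scale the clean sample and add Gaussian noise.
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
alpha_bar = make_alpha_bar()
x0 = rng.standard_normal((3, 64, 64))  # stand-in for an RGB appearance image
xt, eps = forward_noise(x0, t=500, alpha_bar=alpha_bar, rng=rng)
print(xt.shape)  # (3, 64, 64)
```

A generator in this setting is trained to predict eps from xt (plus a conditioning signal such as pose or text), which is what lets sampling run the process in reverse to synthesize a clean appearance image.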
Papers
- June 24, 2024
- June 12, 2024
- June 4, 2024
- January 17, 2024
- December 21, 2023
- December 13, 2023
- December 8, 2023
- December 4, 2023
- November 29, 2023
- November 12, 2023
- December 10, 2022
- November 10, 2021