Target-Preserving Blending
Target-preserving blending focuses on merging different data sources (images, videos, or even model parameters) into a unified output while minimizing distortion and preserving the essential features of the original inputs. Current research spans several domains, applying techniques such as diffusion models, implicit neural representations (INRs), and transformer architectures to achieve seamless blending in applications such as image editing, video harmonization, and 3D face modeling. These advances improve both the quality and the efficiency of computer vision and graphics pipelines, yielding more realistic, natural-looking results across a range of applications.
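At its simplest, the "target-preserving" idea can be illustrated with a mask-based composite: the target image is kept exactly as-is outside the edited region, and source content is blended in under a feathered mask so the seam is not visible. The sketch below is a generic NumPy/SciPy illustration of that idea, not the method of any particular paper covered here; the function name and parameters are hypothetical.

```python
import numpy as np
from scipy.ndimage import gaussian_filter


def blend_preserving_target(target, source, mask, feather_sigma=5.0):
    """Composite `source` into `target` under a soft mask.

    Pixels where the feathered mask is zero keep the original target
    values exactly (the target-preserving part); the blurred boundary
    avoids a hard seam. `target` and `source` are float H x W x C
    arrays in [0, 1]; `mask` is H x W with 1 where source should appear.
    """
    # Feather the binary mask so the transition is gradual.
    alpha = gaussian_filter(mask.astype(np.float64), sigma=feather_sigma)
    alpha = np.clip(alpha, 0.0, 1.0)[..., None]  # shape H x W x 1

    # Linear alpha blend: target is untouched wherever alpha == 0.
    return alpha * source + (1.0 - alpha) * target


# Example: paste a patch into an image while leaving the rest untouched.
target = np.random.rand(256, 256, 3)
source = np.random.rand(256, 256, 3)
mask = np.zeros((256, 256))
mask[64:192, 64:192] = 1.0
blended = blend_preserving_target(target, source, mask)
```

The methods surveyed here replace this simple linear composite with learned components (e.g., diffusion-based inpainting of the seam region or INR-based representations of the blended signal), but the underlying goal is the same: alter only the intended region while keeping the rest of the target faithful to the original.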