Paper ID: 2312.01255 • Published Dec 3, 2023
Meta ControlNet: Enhancing Task Adaptation via Meta Learning
Junjie Yang, Jinze Zhao, Peihao Wang, Zhangyang Wang, Yingbin Liang
Diffusion-based image synthesis has attracted extensive attention recently.
In particular, ControlNet, which uses image-based prompts, exhibits a powerful
capability in image tasks such as Canny edge detection and generates images
well aligned with these prompts. However, vanilla ControlNet generally requires
extensive training of around 5000 steps to achieve a desirable control for a
single task. Recent context-learning approaches have improved its adaptability,
but mainly for edge-based tasks, and rely on paired examples. Thus, two
important open issues are yet to be addressed to reach the full potential of
ControlNet: (i) zero-shot control for certain tasks and (ii) faster adaptation
for non-edge-based tasks. In this paper, we introduce a novel Meta ControlNet
method, which adopts the task-agnostic meta learning technique and features a
new layer freezing design. Meta ControlNet significantly reduces learning steps
to attain control ability from 5000 to 1000. Further, Meta ControlNet exhibits
direct zero-shot adaptability in edge-based tasks without any finetuning, and
achieves control within only 100 finetuning steps in more complex non-edge
tasks such as Human Pose, outperforming all existing methods. The code is
available at this https URL
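The abstract describes a task-agnostic meta learning procedure combined with a layer-freezing design. The paper's exact update rule and choice of frozen layers are not given here, so the following is only a minimal sketch of how a Reptile-style task-agnostic meta update with frozen layers could look; the function names, parameter layout, and hyperparameters are all illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: Reptile-style task-agnostic meta learning with
# layer freezing. Parameters are modeled as a flat dict of scalars for
# clarity; a real implementation would operate on network weight tensors.

def inner_update(params, grads, lr, frozen):
    """One inner gradient step on a task; frozen layers are left unchanged."""
    return {
        name: (value if name in frozen else value - lr * grads[name])
        for name, value in params.items()
    }

def reptile_meta_step(params, task_grads, inner_lr, meta_lr, frozen):
    """Adapt to each task, then move the meta-parameters toward the
    task-adapted parameters (the Reptile outer update)."""
    new_params = dict(params)
    for grads in task_grads:
        adapted = inner_update(params, grads, inner_lr, frozen)
        for name in new_params:
            new_params[name] += meta_lr * (adapted[name] - params[name])
    return new_params

# Toy example with two "layers"; the encoder is frozen, mimicking the
# layer-freezing design mentioned in the abstract (assumed, not verified).
params = {"encoder": 1.0, "decoder": 1.0}
task_grads = [{"encoder": 0.5, "decoder": 0.5},
              {"encoder": -0.5, "decoder": 1.0}]
updated = reptile_meta_step(params, task_grads, inner_lr=0.1,
                            meta_lr=0.5, frozen={"encoder"})
# The frozen encoder weight is unchanged; only the decoder moves.
```

Under this sketch, zero-shot adaptation corresponds to using the meta-learned parameters directly on a new edge-based task, while the 100-step finetuning for tasks like Human Pose corresponds to a short run of inner updates from the meta-learned initialization.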