Generative Training
Generative training develops models that produce new data instances resembling a training dataset, with the goals of improving data efficiency and mitigating data scarcity. Current research emphasizes the robustness and stability of generative models, integrating techniques such as diffusion models, adversarial training, and Bayesian methods, often within frameworks like Generative Adversarial Imitation Learning (GAIL) and flow networks. These advances benefit robotics, audio-visual processing, chemistry, and healthcare by enabling more efficient training and better performance on tasks such as image generation, physical law discovery, and synthetic data creation for downstream applications.
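As a concrete illustration of the diffusion-based generative training mentioned above, the sketch below shows one standard DDPM-style noise-prediction training step in PyTorch. The network, noise schedule, and data dimensions are illustrative placeholders, not taken from any of the papers listed here.

import torch
import torch.nn as nn

T = 1000  # number of diffusion timesteps
betas = torch.linspace(1e-4, 0.02, T)           # linear noise schedule
alphas_cumprod = torch.cumprod(1.0 - betas, 0)  # cumulative product, alpha-bar_t

# Stand-in denoiser; in practice this would be a U-Net or transformer.
model = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(), nn.Linear(128, 64))

def training_step(x0: torch.Tensor) -> torch.Tensor:
    """One generative-training step: predict the noise added to x0."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))            # random timestep per sample
    a_bar = alphas_cumprod[t].unsqueeze(-1)  # shape (b, 1)
    noise = torch.randn_like(x0)
    x_t = a_bar.sqrt() * x0 + (1 - a_bar).sqrt() * noise  # forward diffusion
    # Condition the toy denoiser on t by appending a scaled timestep feature.
    inp = torch.cat([x_t, t.float().unsqueeze(-1) / T], dim=-1)
    pred = model(inp)
    return nn.functional.mse_loss(pred, noise)  # noise-prediction loss

opt = torch.optim.Adam(model.parameters(), lr=1e-4)
x0 = torch.randn(32, 64)  # placeholder batch of "real" data
loss = training_step(x0)
opt.zero_grad()
loss.backward()
opt.step()

Repeating this step over many batches trains the model to reverse the forward noising process, which is the core of the diffusion objective referenced in the summary.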
Papers
IG Captioner: Information Gain Captioners are Strong Zero-shot Classifiers
Chenglin Yang, Siyuan Qiao, Yuan Cao, Yu Zhang, Tao Zhu, Alan Yuille, Jiahui Yu
Efficient Dataset Distillation via Minimax Diffusion
Jianyang Gu, Saeed Vahidian, Vyacheslav Kungurtsev, Haonan Wang, Wei Jiang, Yang You, Yiran Chen