Thin Air
Research on "thin air" – generating data or models from little or no real-world information – centers on leveraging pre-trained generative models such as Stable Diffusion to create synthetic training data for image classification and related tasks, or to strengthen model robustness against adversarial attacks. Current efforts explore techniques such as diffusion inversion, disentangled cross-attention editing, and style transfer to address challenges like bias mitigation in image generation and efficient knowledge transfer in adversarial robustness distillation. This work has significant implications for the efficiency and fairness of machine learning models, particularly when real-world data is scarce.
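As a minimal sketch of how synthetic training data is typically set up in such pipelines, the snippet below builds class-conditional (prompt, label) pairs that would then be passed to a text-to-image generator such as Stable Diffusion. The function name, class names, and prompt templates are illustrative assumptions, not from any specific paper; the actual image-sampling call (e.g. via a diffusion pipeline) is omitted.

```python
def build_prompts(class_names, templates, per_class=2):
    """Build (prompt, label) pairs for class-conditional synthetic data.

    Each class gets `per_class` prompts, cycling through the templates
    so the generated images vary in style and framing.
    """
    pairs = []
    for label, name in enumerate(class_names):
        for i in range(per_class):
            template = templates[i % len(templates)]
            pairs.append((template.format(name), label))
    return pairs


# Illustrative classes and templates; in practice these would match
# the downstream classifier's label set.
prompts = build_prompts(
    ["golden retriever", "tabby cat"],
    ["a photo of a {}", "a close-up photo of a {}"],
)
```

Each prompt would be rendered into one or more synthetic images, which are then labeled with the paired class index and mixed into (or substituted for) the real training set.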