Pre-Trained Generative Models
Pre-trained generative models are large-scale AI models trained on massive datasets to generate many kinds of data, including images, text, and audio, with the goal of improving downstream task performance and enabling new applications. Current research focuses on enhancing controllability, mitigating bias, improving generalization to unseen data, and developing efficient fine-tuning methods, often building on transformer and diffusion architectures. These advances are shaping fields from robotics and healthcare to music generation and scientific discovery by providing efficient ways to generate synthetic data, adapt models to new tasks, and extend the capabilities of existing systems.
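As a concrete illustration of the "efficient fine-tuning" methods mentioned above, here is a minimal, self-contained PyTorch sketch of a low-rank adapter (LoRA-style): the pretrained weights are frozen and only a small low-rank update is trained. The LoRALinear class, the toy layer sizes, and the rank/alpha values are illustrative assumptions, not taken from any of the papers listed below; real pipelines typically wrap the attention layers of a pretrained transformer or diffusion model.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen pretrained linear layer with a trainable low-rank update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        # Low-rank factors: A is small random, B starts at zero so the
        # adapted model initially matches the pretrained one exactly.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = W x + scale * B A x  -- only A and B receive gradients
        return self.base(x) + self.scale * (x @ self.lora_a.T @ self.lora_b.T)

# Toy usage: adapt a "pretrained" layer with far fewer trainable parameters.
pretrained = nn.Linear(512, 512)
adapted = LoRALinear(pretrained, rank=4)
trainable = sum(p.numel() for p in adapted.parameters() if p.requires_grad)
print(f"trainable params: {trainable} vs frozen: {pretrained.weight.numel()}")
```

The design point this sketch makes is that the number of trainable parameters scales with the rank rather than with the full weight matrix, which is what makes such adapters cheap enough to fine-tune large pretrained generative models on modest hardware.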
Papers
Cross-GAN Auditing: Unsupervised Identification of Attribute Level Similarities and Differences between Pretrained Generative Models
Matthew L. Olson, Shusen Liu, Rushil Anirudh, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Weng-Keen Wong
Training Deep Boltzmann Networks with Sparse Ising Machines
Shaila Niazi, Navid Anjum Aadit, Masoud Mohseni, Shuvro Chowdhury, Yao Qin, Kerem Y. Camsari