Generative Model
Generative models are artificial intelligence systems that learn the underlying distribution of a training dataset in order to create new data instances resembling it. Current research emphasizes improving efficiency and controllability, focusing on architectures such as diffusion models, autoregressive models, and generative flow networks, as well as on refining training algorithms and loss functions. These advances have significant implications across diverse fields, enabling applications such as realistic image and music generation, protein design, and improved data augmentation for a range of machine learning tasks.
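To make the core objective concrete, the toy sketch below illustrates the "learn the distribution, then sample from it" loop in its simplest form: a maximum-likelihood Gaussian fit stands in for the far heavier architectures (diffusion models, autoregressive models, flows) covered by the papers listed here. All names and data in the snippet are illustrative assumptions, not drawn from any specific paper.

```python
# Minimal, hypothetical sketch of the generative-modeling objective:
# fit a model to training data, then sample new instances from the
# learned distribution. A Gaussian maximum-likelihood fit is used as a
# stand-in for real generative architectures.
import numpy as np

rng = np.random.default_rng(0)

# "Training data" drawn from an unknown (to the model) distribution.
train = rng.normal(loc=3.0, scale=1.5, size=10_000)

# Learn the distribution: maximum-likelihood estimates of mean and std.
mu_hat, sigma_hat = train.mean(), train.std()

# Generate new data instances that resemble the training set.
samples = rng.normal(loc=mu_hat, scale=sigma_hat, size=5)
print(f"learned mu={mu_hat:.2f}, sigma={sigma_hat:.2f}")
print("new samples:", np.round(samples, 2))
```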
Papers
Efficient Fine-Tuning and Concept Suppression for Pruned Diffusion Models
Reza Shirkavand, Peiran Yu, Shangqian Gao, Gowthami Somepalli, Tom Goldstein, Heng Huang
Jet: A Modern Transformer-Based Normalizing Flow
Alexander Kolesnikov, André Susano Pinto, Michael Tschannen
Active Inference and Human-Computer Interaction
Roderick Murray-Smith, John H. Williamson, Sebastian Stein
DiffSim: Taming Diffusion Models for Evaluating Visual Similarity
Yiren Song, Xiaokang Liu, Mike Zheng Shou
Is Your World Simulator a Good Story Presenter? A Consecutive Events-Based Benchmark for Future Long Video Generation
Yiping Wang, Xuehai He, Kuan Wang, Luyao Ma, Jianwei Yang, Shuohang Wang, Simon Shaolei Du, Yelong Shen
Posterior Mean Matching: Generative Modeling through Online Bayesian Inference
Sebastian Salazar, Michal Kucer, Yixin Wang, Emily Casleton, David Blei
Addressing Small and Imbalanced Medical Image Datasets Using Generative Models: A Comparative Study of DDPM and PGGANs with Random and Greedy K Sampling
Iman Khazrak, Shakhnoza Takhirova, Mostafa M. Rezaee, Mehrdad Yadollahi, Robert C. Green II, Shuteng Niu
InterDyn: Controllable Interactive Dynamics with Video Diffusion Models
Rick Akkerman, Haiwen Feng, Michael J. Black, Dimitrios Tzionas, Victoria Fernández Abrevaya
IDEA-Bench: How Far are Generative Models from Professional Designing?
Chen Liang, Lianghua Huang, Jingwu Fang, Huanzhang Dou, Wei Wang, Zhi-Fan Wu, Yupeng Shi, Junge Zhang, Xin Zhao, Yu Liu
FedCAR: Cross-client Adaptive Re-weighting for Generative Models in Federated Learning
Minjun Kim, Minjee Kim, Jinhoon Jeong
Biased or Flawed? Mitigating Stereotypes in Generative Language Models by Addressing Task-Specific Flaws
Akshita Jha, Sanchit Kabra, Chandan K. Reddy
Composers' Evaluations of an AI Music Tool: Insights for Human-Centred Design
Eleanor Row, György Fazekas
SoftVQ-VAE: Efficient 1-Dimensional Continuous Tokenizer
Hao Chen, Ze Wang, Xiang Li, Ximeng Sun, Fangyi Chen, Jiang Liu, Jindong Wang, Bhiksha Raj, Zicheng Liu, Emad Barsoum
Generative Modeling with Diffusion
Justin Le
Diffusion Model from Scratch
Wang Zhen, Dong Yunyun
EvalGIM: A Library for Evaluating Generative Image Models
Melissa Hall, Oscar Mañas, Reyhane Askari, Mark Ibrahim, Candace Ross, Pietro Astolfi, Tariq Berrada Ifriqi, Marton Havasi, Yohann Benchetrit, Karen Ullrich, Carolina Braga, Abhishek Charnalia, Maeve Ryan, Mike Rabbat, Michal Drozdzal, Jakob Verbeek, Adriana Romero Soriano
Efficient Generative Modeling with Residual Vector Quantization-Based Tokens
Jaehyeon Kim, Taehong Moon, Keon Lee, Jaewoong Cho