Generative Adversarial Network
Generative Adversarial Networks (GANs) are a class of deep learning models that learn to generate new data instances resembling a training dataset. Current research focuses on stabilizing GAN training, improving the quality and diversity of generated samples, and applying GANs to fields such as medical imaging, drug discovery, and time series analysis, often combined with techniques like contrastive learning and disentangled representation learning to improve performance and interpretability. Because GANs can synthesize realistic data, they address critical limitations in data availability and annotation cost across many scientific disciplines and practical applications, with advances ranging from medical diagnosis to robotic control.
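To make the adversarial setup concrete, the sketch below shows a minimal generator/discriminator training step in PyTorch. It is an illustration only: the network sizes, latent dimension, optimizer settings, and the random stand-in for a "real" batch are assumptions for the example, not taken from any of the papers listed here.

```python
# Minimal GAN training sketch (illustrative; sizes and hyperparameters are assumptions).
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images (assumed)

# Generator: maps latent noise z to a synthetic sample in [-1, 1].
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

# Discriminator: outputs a real/fake logit for each sample.
D = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4, betas=(0.5, 0.999))
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    """One adversarial update: D learns to separate real from fake,
    then G learns to fool D."""
    batch_size = real_batch.size(0)
    real_labels = torch.ones(batch_size, 1)
    fake_labels = torch.zeros(batch_size, 1)

    # Discriminator step: detach fake samples so gradients do not flow into G.
    z = torch.randn(batch_size, latent_dim)
    fake = G(z).detach()
    loss_D = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_D.zero_grad(); loss_D.backward(); opt_D.step()

    # Generator step: G is rewarded when D labels its samples as real.
    z = torch.randn(batch_size, latent_dim)
    loss_G = bce(D(G(z)), real_labels)
    opt_G.zero_grad(); loss_G.backward(); opt_G.step()
    return loss_D.item(), loss_G.item()

# Usage sketch: one step on a random stand-in for a real data batch.
loss_d, loss_g = train_step(torch.rand(32, data_dim) * 2 - 1)
```

Detaching the generator output during the discriminator step keeps the two updates separate; this alternating optimization is the basic mechanism that much of the training-stability research surveyed above refines.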
Papers
Enhancing Fingerprint Image Synthesis with GANs, Diffusion Models, and Style Transfer Techniques
W. Tang, D. Figueroa, D. Liu, K. Johnsson, A. Sopasakis
The Bid Picture: Auction-Inspired Multi-player Generative Adversarial Networks Training
Joo Yong Shim, Jean Seong Bjorn Choe, Jong-Kook Kim
IIDM: Image-to-Image Diffusion Model for Semantic Image Synthesis
Feng Liu, Xiaobin Chang
A General Method to Incorporate Spatial Information into Loss Functions for GAN-based Super-resolution Models
Xijun Wang, Santiago López-Tapia, Alice Lucas, Xinyi Wu, Rafael Molina, Aggelos K. Katsaggelos
A survey of synthetic data augmentation methods in computer vision
Alhassan Mumuni, Fuseini Mumuni, Nana Kobina Gerrar
Cyclical Log Annealing as a Learning Rate Scheduler
Philip Naveen
Attack Deterministic Conditional Image Generative Models for Diverse and Controllable Generation
Tianyi Chu, Wei Xing, Jiafu Chen, Zhizhong Wang, Jiakai Sun, Lei Zhao, Haibo Chen, Huaizhong Lin
CoroNetGAN: Controlled Pruning of GANs via Hypernetworks
Aman Kumar, Khushboo Anand, Shubham Mandloi, Ashutosh Mishra, Avinash Thakur, Neeraj Kasera, Prathosh A P
Point Cloud Compression via Constrained Optimal Transport
Zezeng Li, Weimin Wang, Ziliang Wang, Na Lei
Quantifying and Mitigating Privacy Risks for Tabular Generative Models
Chaoyi Zhu, Jiayi Tang, Hans Brouwer, Juan F. Pérez, Marten van Dijk, Lydia Y. Chen
Auxiliary CycleGAN-guidance for Task-Aware Domain Translation from Duplex to Monoplex IHC Images
Nicolas Brieu, Nicolas Triltsch, Philipp Wortmann, Dominik Winter, Shashank Saran, Marlon Rebelatto, Günter Schmidt
Data-Independent Operator: A Training-Free Artifact Representation Extractor for Generalizable Deepfake Detection
Chuangchuang Tan, Ping Liu, RenShuai Tao, Huan Liu, Yao Zhao, Baoyuan Wu, Yunchao Wei
3D-aware Image Generation and Editing with Multi-modal Conditions
Bo Li, Yi-ke Li, Zhi-fen He, Bin Liu, Yu-Kun Lai