Faithful Generation
Faithful generation focuses on creating outputs (text, images, audio, code, or other data) that accurately reflect a given input or prompt, prioritizing correctness and adherence to specifications. Current research emphasizes improving the fidelity and controllability of generation across model architectures such as diffusion models, transformers, and variational autoencoders, often incorporating techniques like retrieval-augmented generation and multi-agent frameworks. The field matters for advancing AI capabilities across many domains, from large language model evaluation and human-computer interaction to the synthesis of realistic data for training and analysis in the sciences.
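Of the techniques named above, retrieval-augmented generation is the easiest to make concrete: retrieve the passages most relevant to a query, then condition generation on them so the output stays grounded in source material. The sketch below is a minimal, self-contained illustration of that pattern, not the method of any paper listed here; the toy corpus, bag-of-words retriever, and `generate` stub are hypothetical stand-ins for a real document store, embedding model, and LLM call.

```python
from collections import Counter
import math

# Toy corpus standing in for a real document store; in practice these
# would be chunks of a knowledge base indexed with dense embeddings.
CORPUS = [
    "Diffusion models generate images by iteratively denoising Gaussian noise.",
    "Retrieval-augmented generation grounds model outputs in retrieved documents.",
    "Radiology report generation maps chest X-ray findings to clinical text.",
]

def bow_vector(text: str) -> Counter:
    """Bag-of-words term counts; a stand-in for a learned embedding."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = bow_vector(query)
    ranked = sorted(CORPUS, key=lambda d: cosine(q, bow_vector(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for an LLM call (local model or hosted API)."""
    return f"[model output conditioned on a {len(prompt)}-character prompt]"

def rag_answer(question: str) -> str:
    # Prepending retrieved passages to the prompt is what pushes the
    # model toward faithful, source-supported output.
    context = "\n".join(retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    print(rag_answer("How does retrieval help generation stay faithful?"))
```

A production system would swap the bag-of-words scorer for dense embeddings and the stub for an actual model call; the grounding structure stays the same.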
Papers
Towards Specification-Driven LLM-Based Generation of Embedded Automotive Software
Minal Suresh Patil, Gustav Ung, Mattias Nyberg
RAW-Diffusion: RGB-Guided Diffusion Models for High-Fidelity RAW Image Generation
Christoph Reinders, Radu Berdan, Beril Besbinar, Junji Otsuka, Daisuke Iso
ORID: Organ-Regional Information Driven Framework for Radiology Report Generation
Tiancheng Gu, Kaicheng Yang, Xiang An, Ziyong Feng, Dongnan Liu, Weidong Cai
Generating Compositional Scenes via Text-to-image RGBA Instance Generation
Alessandro Fontanella, Petru-Daniel Tudosiu, Yongxin Yang, Shifeng Zhang, Sarah Parisot
LLM4DS: Evaluating Large Language Models for Data Science Code Generation
Nathalia Nascimento, Everton Guimaraes, Sai Sanjna Chintakunta, Santhosh Anitha Boominathan
AnimateAnything: Consistent and Controllable Animation for Video Generation
Guojun Lei, Chi Wang, Hong Li, Rong Zhang, Yikai Wang, Weiwei Xu
Generation of synthetic gait data: application to multiple sclerosis patients' gait patterns
Klervi Le Gall, Lise Bellanger, David Laplaud
MCL: Multi-view Enhanced Contrastive Learning for Chest X-ray Report Generation
Kang Liu, Zhuoqi Ma, Kun Xie, Zhicheng Jiao, Qiguang Miao
Visual question answering based evaluation metrics for text-to-image generation
Mizuki Miyamoto, Ryugo Morita, Jinjia Zhou
Boundary Attention Constrained Zero-Shot Layout-To-Image Generation
Huancheng Chen, Jingtao Li, Weiming Zhuang, Haris Vikalo, Lingjuan Lyu
GaussianAnything: Interactive Point Cloud Latent Diffusion for 3D Generation
Yushi Lan, Shangchen Zhou, Zhaoyang Lyu, Fangzhou Hong, Shuai Yang, Bo Dai, Xingang Pan, Chen Change Loy
JanusFlow: Harmonizing Autoregression and Rectified Flow for Unified Multimodal Understanding and Generation
Yiyang Ma, Xingchao Liu, Xiaokang Chen, Wen Liu, Chengyue Wu, Zhiyu Wu, Zizheng Pan, Zhenda Xie, Haowei Zhang, Xingkai Yu, Liang Zhao, Yisong Wang, Jiaying Liu, Chong Ruan
Evaluating the Generation of Spatial Relations in Text and Image Generative Models
Shang Hong Sim, Clarence Lee, Alvin Tan, Cheston Tan
Multimodal Clinical Reasoning through Knowledge-augmented Rationale Generation
Shuai Niu, Jing Ma, Liang Bai, Zhihua Wang, Yida Xu, Yunya Song, Xian Yang
IR image databases generation under target intrinsic thermal variability constraints
Jerome Gilles, Stephane Landeau, Tristan Dagobert, Philippe Chevalier, Christian Bolut