Faithful Generation
Faithful generation focuses on producing outputs—text, images, audio, code, or other data—that accurately reflect a given input or prompt, prioritizing correctness and adherence to specifications. Current research emphasizes improving the fidelity and controllability of generation across model architectures such as diffusion models, transformers, and variational autoencoders, often incorporating techniques like retrieval-augmented generation and multi-agent frameworks. This work matters across numerous domains: improving large language model evaluation, enhancing human-computer interaction, and producing more realistic synthetic data for training and analysis in the sciences.
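To make the retrieval-augmented generation idea mentioned above concrete, here is a minimal sketch: an answer is grounded in the document most similar to the query, so the output stays tied to source material rather than free-form generation. All names and the bag-of-words retriever are illustrative assumptions; a real system would pair a dense retriever with a large language model.

```python
# Minimal retrieval-augmented generation sketch (illustrative only).
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    common = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    return sorted(
        corpus,
        key=lambda d: cosine(q, Counter(d.lower().split())),
        reverse=True,
    )[:k]

def generate(query, corpus):
    """Ground the answer in retrieved evidence; a real system would
    condition a language model on this evidence instead."""
    evidence = retrieve(query, corpus, k=1)[0]
    return f"Q: {query}\nEvidence: {evidence}"

corpus = [
    "Diffusion models generate images by iteratively denoising.",
    "Encoder-decoder transformers map input text to output text.",
]
print(generate("how do diffusion models generate images", corpus))
```

The faithfulness benefit comes from the grounding step: because the output quotes retrieved evidence, it can be checked against the corpus directly.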
Papers
TrICy: Trigger-guided Data-to-text Generation with Intent aware Attention-Copy
Vibhav Agarwal, Sourav Ghosh, Harichandana BSS, Himanshu Arora, Barath Raj Kandur Raja
TURNA: A Turkish Encoder-Decoder Language Model for Enhanced Understanding and Generation
Gökçe Uludoğan, Zeynep Yirmibeşoğlu Balal, Furkan Akkurt, Melikşah Türker, Onur Güngör, Susan Üsküdarlı
BootPIG: Bootstrapping Zero-shot Personalized Image Generation Capabilities in Pretrained Diffusion Models
Senthil Purushwalkam, Akash Gokul, Shafiq Joty, Nikhil Naik
MedXChat: A Unified Multimodal Large Language Model Framework towards CXRs Understanding and Generation
Ling Yang, Zhanyu Wang, Zhenghao Chen, Xinyu Liang, Luping Zhou
Characterizing Large Language Model Geometry Solves Toxicity Detection and Generation
Randall Balestriero, Romain Cosentino, Sarath Shekkizhar