Faithful Generation
Faithful generation focuses on creating outputs (text, images, audio, code, or other data) that accurately reflect a given input or prompt, prioritizing correctness and adherence to specifications. Current research emphasizes improving the fidelity and controllability of generation across model architectures such as diffusion models, transformers, and variational autoencoders, often incorporating techniques like retrieval-augmented generation and multi-agent frameworks. The field is significant for advancing AI capabilities across numerous domains, from improving large language model evaluation and human-computer interaction to creating more realistic synthetic data for training and analysis in the sciences.
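As a concrete illustration of the retrieval-augmented generation pattern mentioned above, the sketch below retrieves the most relevant passages from a small corpus and folds them into the prompt before generation. It is a minimal, self-contained example under simplifying assumptions: the toy corpus, the token-overlap scorer, and the `generate` stub are hypothetical placeholders, not code from any of the listed papers.

```python
# Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
# relevant passages for a query, then ground the prompt in them before
# calling a generator. The corpus, scorer, and generator stub below are
# illustrative assumptions, not an implementation from any listed paper.

def score(query: str, passage: str) -> int:
    """Crude relevance score: number of shared lowercase tokens."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k passages by token-overlap score."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def generate(prompt: str) -> str:
    """Stub standing in for any text generator (hypothetical)."""
    return f"[model output conditioned on {len(prompt)} prompt characters]"

corpus = [
    "Diffusion models iteratively denoise samples toward the data distribution.",
    "Retrieval-augmented generation grounds outputs in retrieved evidence.",
    "Variational autoencoders learn a latent space for controllable synthesis.",
]

query = "How does retrieval-augmented generation improve faithfulness?"
context = "\n".join(retrieve(query, corpus))
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {query}"
)
print(generate(prompt))
```

In practice the overlap scorer would be replaced by a learned dense or sparse retriever and the stub by an actual generator; the point here is only the retrieve-then-condition structure that grounds the output in evidence.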
Papers
TAGE: Trustworthy Attribute Group Editing for Stable Few-shot Image Generation
Ruicheng Zhang, Guoheng Huang, Yejing Huo, Xiaochen Yuan, Zhizhen Zhou, Xuhang Chen, Guo Zhong
Understanding When Tree of Thoughts Succeeds: Larger Models Excel in Generation, Not Discrimination
Qiqi Chen, Xinpeng Wang, Philipp Mondorf, Michael A. Hedderich, Barbara Plank
R2Gen-Mamba: A Selective State Space Model for Radiology Report Generation
Yongheng Sun, Yueh Z. Lee, Genevieve A. Woodard, Hongtu Zhu, Chunfeng Lian, Mingxia Liu
ARCADE: Scalable Demonstration Collection and Generation via Augmented Reality for Imitation Learning
Yue Yang, Bryce Ikeda, Gedas Bertasius, Daniel Szafir
Self-Explained Keywords Empower Large Language Models for Code Generation
Lishui Fan, Mouxiang Chen, Zhongxin Liu
Improving Parallel Program Performance Through DSL-Driven Code Generation with LLM Optimizers
Anjiang Wei, Allen Nie, Thiago S. F. X. Teixeira, Rohan Yadav, Wonchan Lee, Ke Wang, Alex Aiken
Automating Video Thumbnails Selection and Generation with Multimodal and Multistage Analysis
Elia Fantini
HiCo: Hierarchical Controllable Diffusion Model for Layout-to-image Generation
Bo Cheng, Yuhang Ma, Liebucha Wu, Shanyuan Liu, Ao Ma, Xiaoyu Wu, Dawei Leng, Yuhui Yin
Step Guided Reasoning: Improving Mathematical Reasoning using Guidance Generation and Step Reasoning
Lang Cao, Chao Peng, Yitong Li
Towards Cross-Cultural Machine Translation with Retrieval-Augmented Generation from Multilingual Knowledge Graphs
Simone Conia, Daniel Lee, Min Li, Umar Farooq Minhas, Saloni Potdar, Yunyao Li
Janus: Decoupling Visual Encoding for Unified Multimodal Understanding and Generation
Chengyue Wu, Xiaokang Chen, Zhiyu Wu, Yiyang Ma, Xingchao Liu, Zizheng Pan, Wen Liu, Zhenda Xie, Xingkai Yu, Chong Ruan, Ping Luo
Generation through the lens of learning theory
Jiaxun Li, Vinod Raman, Ambuj Tewari
Evaluating Self-Generated Documents for Enhancing Retrieval-Augmented Generation with Large Language Models
Jiatao Li, Xinyu Hu, Xunjian Yin, Xiaojun Wan
GeSubNet: Gene Interaction Inference for Disease Subtype Network Generation
Ziwei Yang, Zheng Chen, Xin Liu, Rikuto Kotoge, Peng Chen, Yasuko Matsubara, Yasushi Sakurai, Jimeng Sun
Boosting Imperceptibility of Stable Diffusion-based Adversarial Examples Generation with Momentum
Nashrah Haque, Xiang Li, Zhehui Chen, Yanzhao Wu, Lei Yu, Arun Iyengar, Wenqi Wei