Faithful Generation
Faithful generation focuses on producing outputs (text, images, audio, code, or other data) that accurately reflect a given input or prompt, prioritizing correctness and adherence to specifications. Current research emphasizes improving the fidelity and controllability of generation across model architectures such as diffusion models, transformers, and variational autoencoders, often combined with techniques like retrieval-augmented generation and multi-agent frameworks. The field matters for advancing AI capabilities across many domains, from evaluating large language models and improving human-computer interaction to producing more realistic synthetic data for training and scientific analysis.
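For readers unfamiliar with retrieval-augmented generation, the sketch below illustrates the basic retrieve-then-prompt loop with a toy bag-of-words retriever. The corpus, scoring scheme, and `generate` stub are illustrative assumptions only, not the method of any paper listed here; a real system would use dense embeddings and an actual LLM call.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Corpus, scoring, and generate() are placeholders for illustration.
import math
from collections import Counter

CORPUS = [
    "Diffusion models iteratively denoise samples to generate images.",
    "Retrieval-augmented generation grounds LLM outputs in retrieved passages.",
    "Variational autoencoders learn a latent space for controllable generation.",
]

def tf_vector(text: str) -> Counter:
    """Lowercased term-frequency vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k corpus passages most similar to the query."""
    q = tf_vector(query)
    ranked = sorted(CORPUS, key=lambda doc: cosine(q, tf_vector(doc)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    """Placeholder for a real LLM call; here it just echoes the prompt."""
    return f"[model output conditioned on]\n{prompt}"

if __name__ == "__main__":
    question = "How does retrieval-augmented generation improve faithfulness?"
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {question}"
    print(generate(prompt))
```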
Papers
MAIN-RAG: Multi-Agent Filtering Retrieval-Augmented Generation
Chia-Yuan Chang, Zhimeng Jiang, Vineeth Rakesh, Menghai Pan, Chin-Chia Michael Yeh, Guanchu Wang, Mingzhi Hu, Zhichao Xu, Yan Zheng, Mahashweta Das, Na Zou
The Potential of LLMs in Automating Software Testing: From Generation to Reporting
Betim Sherifi, Khaled Slhoub, Fitzroy Nembhard
ChatGarment: Garment Estimation, Generation and Editing via Large Language Models
Siyuan Bian, Chenghao Xu, Yuliang Xiu, Artur Grigorev, Zhen Liu, Cewu Lu, Michael J. Black, Yao Feng
Benchmarking Generative AI Models for Deep Learning Test Input Generation
Maryam, Matteo Biagiola, Andrea Stocco, Vincenzo Riccio
LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context
Kai Ruan, Xuan Wang, Jixiang Hong, Peng Wang, Yang Liu, Hao Sun
A Reality Check on Context Utilisation for Retrieval-Augmented Generation
Lovisa Hagström, Sara Vera Marjanović, Haeun Yu, Arnav Arora, Christina Lioma, Maria Maistro, Pepa Atanasova, Isabelle Augenstein
Map Imagination Like Blind Humans: Group Diffusion Model for Robotic Map Generation
Qijin Song, Weibang Bai
Visual Prompting with Iterative Refinement for Design Critique Generation
Peitong Duan, Chin-Yi Chen, Bjoern Hartmann, Yang Li