Faithful Generation
Faithful generation focuses on creating outputs (text, images, audio, code, or other data) that accurately reflect a given input or prompt, prioritizing correctness and adherence to specifications. Current research emphasizes improving the fidelity and controllability of generation across model architectures such as diffusion models, transformers, and variational autoencoders, often combined with techniques like retrieval-augmented generation and multi-agent frameworks. The field matters because it advances AI capabilities across numerous domains, from evaluating large language models and enhancing human-computer interaction to producing more realistic synthetic data for training and analysis in the sciences.
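The overview names retrieval-augmented generation as one route to faithfulness. As a purely illustrative sketch, and not the method of any paper listed below, the Python snippet grounds a generator by retrieving the documents most relevant to a query and instructing the model to answer only from that evidence. The lexical overlap scorer, the sample documents, and all function names are hypothetical stand-ins for a real dense retriever and an actual LLM.

# Minimal retrieval-augmented generation (RAG) sketch. Everything here is
# a placeholder: real systems use a learned retriever and a language model.

from collections import Counter

def score(query: str, doc: str) -> float:
    """Crude lexical-overlap score standing in for a learned retriever."""
    q, d = Counter(query.lower().split()), Counter(doc.lower().split())
    return sum((q & d).values()) / (len(d) or 1)

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Ground the generator: restrict it to the retrieved evidence."""
    context = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return (
        "Answer using ONLY the evidence below; say 'unknown' otherwise.\n"
        f"Evidence:\n{context}\nQuestion: {query}\nAnswer:"
    )

documents = [
    "Rectified flow models learn straight-line probability paths.",
    "Full-duplex speech models listen and speak simultaneously.",
    "Sparse retrieval expands queries with weighted terms.",
]
print(build_prompt("What do rectified flow models learn?", documents))

In practice the constructed prompt is sent to a language model; conditioning generation on retrieved evidence, rather than on parametric memory alone, is the mechanism by which this family of techniques pushes outputs to stay faithful to their sources.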
Papers
Steering Rectified Flow Models in the Vector Field for Controlled Image Generation
Maitreya Patel, Song Wen, Dimitris N. Metaxas, Yezhou Yang
Enhancing MMDiT-Based Text-to-Image Models for Similar Subject Generation
Tianyi Wei, Dongdong Chen, Yifan Zhou, Xingang Pan
MotionCharacter: Identity-Preserving and Motion Controllable Human Video Generation
Haopeng Fang, Di Qiu, Binjie Mao, Pengfei Yan, He Tang
SALMONN-omni: A Codec-free LLM for Full-duplex Speech Understanding and Generation
Wenyi Yu, Siyin Wang, Xiaoyu Yang, Xianzhao Chen, Xiaohai Tian, Jun Zhang, Guangzhi Sun, Lu Lu, Yuxuan Wang, Chao Zhang
Training and Evaluating Language Models with Template-based Data Generation
Yifan Zhang
Harnessing LLMs for Educational Content-Driven Italian Crossword Generation
Kamyar Zeinalipour, Achille Fusco, Asya Zanollo, Marco Maggini, Marco Gori
Factorized Visual Tokenization and Generation
Zechen Bai, Jianxiong Gao, Ziteng Gao, Pichao Wang, Zheng Zhang, Tong He, Mike Zheng Shou
From Generation to Judgment: Opportunities and Challenges of LLM-as-a-judge
Dawei Li, Bohan Jiang, Liangjie Huang, Alimohammad Beigi, Chengshuai Zhao, Zhen Tan, Amrita Bhattacharjee, Yuxuan Jiang, Canyu Chen, Tianhao Wu, Kai Shu, Lu Cheng, Huan Liu
Ca2-VDM: Efficient Autoregressive Video Diffusion Model with Causal Generation and Cache Sharing
Kaifeng Gao, Jiaxin Shi, Hanwang Zhang, Chunping Wang, Jun Xiao, Long Chen
UniPose: A Unified Multimodal Framework for Human Pose Comprehension, Generation and Editing
Yiheng Li, Ruibing Hou, Hong Chang, Shiguang Shan, Xilin Chen
mR²AG: Multimodal Retrieval-Reflection-Augmented Generation for Knowledge-Based VQA
Tao Zhang, Ziqi Zhang, Zongyang Ma, Yuxin Chen, Zhongang Qi, Chunfeng Yuan, Bing Li, Junfu Pu, Yuxuan Zhao, Zehua Xie, Jin Ma, Ying Shan, Weiming Hu
Morph: A Motion-free Physics Optimization Framework for Human Motion Generation
Zhuo Li, Mingshuang Luo, Ruibing Hou, Xin Zhao, Hao Liu, Hong Chang, Zimo Liu, Chen Li
IRLab@iKAT24: Learned Sparse Retrieval with Multi-aspect LLM Query Generation for Conversational Search
Simon Lupart, Zahra Abbasiantaeb, Mohammad Aliannejadi