Generation Task
Generation tasks in artificial intelligence focus on producing novel outputs (text, images, audio, or even molecular structures) conditioned on various inputs. Current research emphasizes improving the quality, controllability, and fairness of generated outputs, typically building on transformer-based models and diffusion models together with techniques such as instruction tuning, retrieval augmentation, and Bayesian optimization. These advances underpin applications ranging from conversational AI and creative content generation to scientific discovery and drug design, and they drive progress in both the theoretical understanding and practical deployment of generative models. Research is also actively addressing challenges such as mitigating bias, evaluating generated content reliably, and ensuring robustness against adversarial attacks.
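As a minimal illustration of conditioned generation with a transformer-based model, the sketch below uses the Hugging Face transformers text-generation pipeline; the specific model (gpt2), prompt, and sampling parameters are illustrative assumptions, not drawn from the papers listed here.

```python
# Minimal sketch of conditional text generation, assuming the
# Hugging Face `transformers` library is installed.
# Model name, prompt, and sampling settings are illustrative only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The output is conditioned on the input prompt; sampling
# parameters control the diversity of the continuation.
prompt = "Retrieval-augmented generation improves factuality by"
outputs = generator(prompt, max_new_tokens=40, do_sample=True, top_p=0.9)

print(outputs[0]["generated_text"])
```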
Papers
A linguistic analysis of undesirable outcomes in the era of generative AI
Daniele Gambetta, Gizem Gezici, Fosca Giannotti, Dino Pedreschi, Alistair Knott, Luca Pappalardo
TV-3DG: Mastering Text-to-3D Customized Generation with Visual Prompt
Jiahui Yang, Donglin Di, Baorui Ma, Xun Yang, Yongjia Ma, Wenzhang Sun, Wei Chen, Jianxun Cui, Zhou Xue, Meng Wang, Yebin Liu