Instruction Generation
Instruction generation focuses on automatically creating high-quality instructions for various tasks, primarily to improve the performance of large language models (LLMs) and other AI agents. Current research emphasizes developing robust methods for generating diverse and complex instructions, often employing techniques like adversarial training, evolutionary algorithms, and chain-of-thought prompting within transformer-based architectures. This field is crucial for advancing AI capabilities across numerous domains, from robotics and virtual navigation to question answering and multimodal learning, by providing more effective training data and enabling more natural human-computer interaction.
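As a concrete illustration of one technique mentioned above, the evolutionary approach to instruction generation can be sketched as a loop that repeatedly mutates a seed pool of instructions into harder or more constrained variants. The sketch below is a minimal toy version: the mutation operators are plain string templates of my own invention, standing in for the LLM-driven rewriting prompts that real pipelines (e.g. Evol-Instruct-style systems) would use; the seed instructions are likewise hypothetical.

```python
import random

# Toy seed set; real pipelines start from a curated collection of
# human-written instructions.
SEED_INSTRUCTIONS = [
    "Summarize the following article.",
    "Translate the sentence into French.",
]

# Evolution operators: each rewrites an instruction into a more
# complex or more constrained variant. These string templates are
# placeholders for LLM rewriting prompts.
OPERATORS = [
    lambda ins: ins.rstrip(".") + ", and explain your reasoning step by step.",
    lambda ins: ins.rstrip(".") + ", using at most 50 words.",
    lambda ins: "Given a noisy input, " + ins[0].lower() + ins[1:],
]

def evolve(seeds, generations=2, rng=None):
    """Mutate each instruction once per generation, accumulating
    every generation (seeds included) into one instruction pool."""
    rng = rng or random.Random(0)
    pool = list(seeds)
    current = list(seeds)
    for _ in range(generations):
        current = [rng.choice(OPERATORS)(ins) for ins in current]
        pool.extend(current)
    return pool

if __name__ == "__main__":
    for instruction in evolve(SEED_INSTRUCTIONS):
        print(instruction)
```

In a full system, each evolved instruction would also be filtered for quality (e.g. by discarding degenerate or unanswerable variants) before being paired with model-generated responses to form training data.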
Papers
VIGC: Visual Instruction Generation and Correction
Bin Wang, Fan Wu, Xiao Han, Jiahui Peng, Huaping Zhong, Pan Zhang, Xiaoyi Dong, Weijia Li, Wei Li, Jiaqi Wang, Conghui He
Harnessing the Power of David against Goliath: Exploring Instruction Data Generation without Using Closed-Source Models
Yue Wang, Xinrui Wang, Juntao Li, Jinxiong Chang, Qishen Zhang, Zhongyi Liu, Guannan Zhang, Min Zhang
Improving Translation Faithfulness of Large Language Models via Augmenting Instructions
Yijie Chen, Yijin Liu, Fandong Meng, Yufeng Chen, Jinan Xu, Jie Zhou