MPT-7B-Instruct
MPT-7B-Instruct and related instruction-tuned large language models (LLMs) aim to improve the ability of LLMs to perform diverse tasks by following natural language instructions. Current research focuses on efficient instruction-tuning methods, such as leveraging existing LLMs to generate high-quality training data and employing parameter-efficient fine-tuning techniques. This approach improves performance across domains including text summarization, image processing, and scientific applications such as drug discovery and protein analysis, pointing toward more versatile and human-aligned AI systems. The resulting models outperform models trained without instruction tuning on benchmark tasks, highlighting the effectiveness of this paradigm.
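Instruction-tuned models expect inputs wrapped in the same prompt template used during fine-tuning. A minimal sketch of such a template, assuming the Alpaca/Dolly-style format described in MosaicML's MPT-7B-Instruct model card (the exact wording, and the names `PROMPT_TEMPLATE` and `format_instruction`, are illustrative assumptions):

```python
# Illustrative Alpaca/Dolly-style instruction template; the exact wording
# used by MPT-7B-Instruct is an assumption based on its model card.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n### Response:\n"
)

def format_instruction(instruction: str) -> str:
    """Wrap a raw user instruction in the instruction-tuning template."""
    return PROMPT_TEMPLATE.format(instruction=instruction.strip())

prompt = format_instruction("Summarize the abstract in one sentence.")
print(prompt)
```

At inference time, the formatted prompt is passed to the model and generation stops after the `### Response:` section, mirroring the structure the model saw during tuning.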
Papers
InstructMol: Multi-Modal Integration for Building a Versatile and Reliable Molecular Assistant in Drug Discovery
He Cao, Zijing Liu, Xingyu Lu, Yuan Yao, Yu Li
Instruct2Attack: Language-Guided Semantic Adversarial Attacks
Jiang Liu, Chen Wei, Yuxiang Guo, Heng Yu, Alan Yuille, Soheil Feizi, Chun Pong Lau, Rama Chellappa