Safety Fine-Tuning
Safety fine-tuning aims to reduce the risk that large language models (LLMs) generate harmful content, a risk that can resurface even after fine-tuning on seemingly benign data. Current research explores adjusting internal model representations, modifying learning rates during fine-tuning, and building specialized datasets that improve safety without sacrificing helpfulness, targeting problems such as jailbreaks and refusal biases. This work is central to responsible LLM deployment, shaping both the trustworthiness of deployed systems and the design of robust safety evaluation methodologies.
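As an illustration of one of these ideas, the sketch below applies layer-wise learning rates during fine-tuning so that hypothetically "safety-critical" layers are updated far more slowly than the rest of the model. The toy model, the choice of which layers count as safety-critical, and the learning rates are assumptions made for this example, not taken from any specific paper.

```python
# Minimal sketch of layer-wise learning rates during safety fine-tuning.
# The toy model and the "safety-critical" layer split are illustrative
# assumptions, not a reproduction of any published method.
import torch
import torch.nn as nn

class ToyLM(nn.Module):
    def __init__(self, vocab=1000, dim=64, n_layers=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, dim)
        self.blocks = nn.ModuleList([nn.Linear(dim, dim) for _ in range(n_layers)])
        self.head = nn.Linear(dim, vocab)

    def forward(self, ids):
        h = self.embed(ids)
        for block in self.blocks:
            h = torch.relu(block(h))
        return self.head(h)  # next-token logits

model = ToyLM()

# Hypothetical split: treat the first two blocks as safety-critical and
# update them with a much smaller learning rate, so further fine-tuning on
# benign data is less likely to overwrite safety behaviour.
safety_params, other_params = [], []
for name, p in model.named_parameters():
    if name.startswith("blocks.0.") or name.startswith("blocks.1."):
        safety_params.append(p)
    else:
        other_params.append(p)

optimizer = torch.optim.AdamW([
    {"params": safety_params, "lr": 1e-6},  # near-frozen safety layers
    {"params": other_params, "lr": 5e-5},   # normal fine-tuning rate
])

# One illustrative training step on random token data.
ids = torch.randint(0, 1000, (8, 16))
logits = model(ids)
loss = nn.functional.cross_entropy(
    logits[:, :-1].reshape(-1, 1000), ids[:, 1:].reshape(-1)
)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

The same parameter-group mechanism can also freeze selected layers entirely (learning rate zero) or be combined with safety-specific datasets; the sketch only shows the optimizer-level mechanism.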