Human Editing
Advances in "human editing", the generation and modification of data across modalities (images, audio, text, 3D models) from natural-language instructions or other forms of user input, are reshaping human-computer interaction. Current research relies heavily on diffusion models and large language models (LLMs), often integrated within multimodal frameworks, to achieve precise and flexible control over the editing process while addressing challenges such as hallucination and ambiguity. The field is significant for its potential to improve accessibility in creative work, make content creation more efficient, and advance our understanding of how humans interact with and interpret AI-generated content.
Papers
Mechanistic Unlearning: Robust Knowledge Unlearning and Editing via Mechanistic Localization
Phillip Guo, Aaquib Syed, Abhay Sheshadri, Aidan Ewart, Gintare Karolina Dziugaite
Shaping a Stabilized Video by Mitigating Unintended Changes for Concept-Augmented Video Editing
Mingce Guo, Jingxuan He, Shengeng Tang, Zhangye Wang, Lechao Cheng
Tell Me What You Don't Know: Enhancing Refusal Capabilities of Role-Playing Agents via Representation Space Analysis and Editing
Wenhao Liu, Siyu An, Junru Lu, Muling Wu, Tianlong Li, Xiaohua Wang, Xiaoqing Zheng, Di Yin, Xing Sun, Xuanjing Huang
Prompt Sliders for Fine-Grained Control, Editing and Erasing of Concepts in Diffusion Models
Deepak Sridhar, Nuno Vasconcelos
LLM Surgery: Efficient Knowledge Unlearning and Editing in Large Language Models
Akshaj Kumar Veldanda, Shi-Xiong Zhang, Anirban Das, Supriyo Chakraborty, Stephen Rawls, Sambit Sahu, Milind Naphade
Generation and Editing of Mandrill Faces: Application to Sex Editing and Assessment
Nicolas M. Dibot, Julien P. Renoult, William Puech