Conversational Large Language Model
Research on conversational large language models (LLMs) aims to build AI systems that can engage in natural, coherent, and safe dialogue with humans. Current work focuses on improving efficiency (e.g., optimizing token usage), aligning models with human conversational norms through improved decoding methods, reinforcement learning, and preference alignment, and mitigating security vulnerabilities such as backdoor and jailbreak attacks. These advances are crucial for deploying LLMs in applications ranging from customer service to healthcare, while addressing ethical concerns around bias, safety, and misuse.
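To make the preference-alignment idea referenced above concrete (it also appears in the quantized-LLM paper listed below), here is a minimal, hypothetical sketch of the standard Direct Preference Optimization (DPO) objective in PyTorch. The function name, beta value, and toy log-probabilities are illustrative assumptions, not details taken from any listed paper.

import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    # Implicit rewards: how much the policy's sequence log-probability has
    # moved relative to a frozen reference model, scaled by beta.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Encourage the policy to rank the preferred response above the rejected one.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy example: made-up summed log-probabilities for two preference pairs.
policy_chosen = torch.tensor([-12.3, -9.8])
policy_rejected = torch.tensor([-14.1, -11.2])
ref_chosen = torch.tensor([-12.5, -10.0])
ref_rejected = torch.tensor([-13.9, -11.0])
print(dpo_loss(policy_chosen, policy_rejected, ref_chosen, ref_rejected))

In practice the per-sequence log-probabilities would be computed by summing token log-probs of the chosen and rejected responses under the policy and reference models; the loss is then minimized with an ordinary optimizer, with no separate reward model or RL loop.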
Papers
Improving Conversational Abilities of Quantized Large Language Models via Direct Preference Alignment
Janghwan Lee, Seongmin Park, Sukjin Hong, Minsoo Kim, Du-Seong Chang, Jungwook Choi
JailbreakHunter: A Visual Analytics Approach for Jailbreak Prompts Discovery from Large-Scale Human-LLM Conversational Datasets
Zhihua Jin, Shiyi Liu, Haotian Li, Xun Zhao, Huamin Qu