Greedy Coordinate Methods
Greedy coordinate methods are iterative optimization techniques that update one coordinate (variable) at a time, at each step choosing the coordinate that promises the greatest improvement, which makes them efficient in high-dimensional search spaces. Current research applies these methods to large language models (LLMs), most notably for post-training quantization, to reduce computational costs, and for adversarial attacks ("jailbreaking"), to assess model safety. These advances matter because they make deploying and securing LLMs more efficient and practical, contributing both to more resource-efficient AI and to a clearer understanding of LLM vulnerabilities.
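As a concrete illustration of the basic idea, the following minimal Python sketch runs greedy coordinate descent on a toy quadratic objective, using the Gauss-Southwell rule of updating the coordinate with the largest-magnitude partial derivative. The objective, the function name `greedy_coordinate_descent`, and all parameter choices are illustrative assumptions, not taken from any particular paper.

```python
# Minimal sketch of greedy coordinate descent (assumed toy example).
# Objective: f(x) = 0.5 * x^T A x - b^T x, with A symmetric positive definite.
# At each step we pick the coordinate with the largest-magnitude partial
# derivative (Gauss-Southwell rule) and minimize f exactly along it.

import numpy as np

def greedy_coordinate_descent(A, b, n_iters=500):
    """Minimize 0.5 * x^T A x - b^T x one coordinate at a time."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iters):
        grad = A @ x - b                     # gradient of the quadratic at x
        i = int(np.argmax(np.abs(grad)))     # greedy choice: steepest coordinate
        x[i] -= grad[i] / A[i, i]            # exact 1-D minimization along coordinate i
    return x

# Usage on a small positive-definite system; the minimizer solves A x = b.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)                  # make A symmetric positive definite
b = rng.standard_normal(5)
x = greedy_coordinate_descent(A, b)
print(np.linalg.norm(A @ x - b))             # residual is small once converged
```

The same greedy, one-coordinate-at-a-time structure underlies the LLM applications mentioned above: quantization methods round or adjust one weight (coordinate) at a time, and gradient-guided jailbreak attacks swap one prompt token at a time based on a greedy score.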