Greedy Coordinate

Greedy coordinate methods are iterative optimization techniques that update one variable (coordinate) at a time, greedily selecting at each step the coordinate whose update most improves the objective, which makes them efficient in high-dimensional spaces. Current research focuses on applying these methods to large language models (LLMs), particularly for post-training quantization to reduce computational costs and for adversarial attacks ("jailbreaking") to assess model safety. These advancements are significant because they improve the efficiency and practicality of deploying and securing LLMs, impacting both the development of more resource-efficient AI and the understanding of LLM vulnerabilities.

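To make the general idea concrete, the sketch below shows a minimal greedy coordinate descent loop on a quadratic objective, using the Gauss-Southwell rule (pick the coordinate with the largest-magnitude gradient entry). This is an illustrative example of the generic technique only, not the specific quantization or jailbreaking algorithms from the papers listed below; the problem setup and function names are assumptions for the sketch.

```python
import numpy as np

def greedy_coordinate_descent(A, b, n_iters=500):
    """Minimize f(x) = 0.5 * x^T A x - b^T x for symmetric positive-definite A
    by updating one coordinate per iteration, chosen greedily via the
    Gauss-Southwell rule (largest-magnitude gradient entry)."""
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iters):
        grad = A @ x - b                  # gradient of the quadratic objective
        i = int(np.argmax(np.abs(grad)))  # greedy choice of coordinate to update
        x[i] -= grad[i] / A[i, i]         # exact minimization along coordinate i
    return x

# Toy usage: the iterates should approach the solution of A x = b.
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 5))
A = M @ M.T + 5 * np.eye(5)               # symmetric positive definite
b = rng.standard_normal(5)
x = greedy_coordinate_descent(A, b)
print("residual norm:", np.linalg.norm(A @ x - b))
```

The LLM applications follow the same pattern at a larger scale: one coordinate (a weight group in quantization, or a token position in adversarial prompt search) is selected and updated per step using a greedy criterion such as the gradient or the change in loss.
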
Papers