Discrete Prompt

Discrete prompt optimization focuses on efficiently finding the sequence of words or tokens that best elicits desired outputs from large language models (LLMs), particularly in scenarios with limited data or only black-box access to the model. Current research emphasizes search methods based on Bayesian optimization and reinforcement learning to navigate the vast space of possible prompts, often employing techniques such as zeroth-order optimization to approximate gradients when model internals are unavailable. This research is significant because effective prompt engineering can improve LLM performance across a range of NLP tasks, enhance model security by mitigating adversarial attacks, and enable more efficient and interpretable model adaptation for diverse applications.
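To make the black-box setting concrete, the sketch below shows a minimal greedy local search over discrete prompts. The `score_fn` here is a hypothetical stand-in for the real objective (e.g., validation accuracy of an LLM queried with the candidate prompt); the vocabulary, prompt length, and scorer are all illustrative assumptions, not part of any specific published method.

```python
import random

def hill_climb_prompt(vocab, score_fn, length=4, iters=200, seed=0):
    """Greedy local search over discrete prompts.

    vocab: candidate tokens to choose from.
    score_fn: black-box scorer mapping a token list to a number
        (higher is better); in practice this would query an LLM
        and evaluate its outputs, which requires no gradients.
    Returns the best prompt found and its score.
    """
    rng = random.Random(seed)
    # Start from a random prompt of the requested length.
    prompt = [rng.choice(vocab) for _ in range(length)]
    best = score_fn(prompt)
    for _ in range(iters):
        # Propose a single-token edit at a random position.
        cand = prompt.copy()
        cand[rng.randrange(length)] = rng.choice(vocab)
        s = score_fn(cand)
        # Accept improving (or equal, to allow sideways moves) edits.
        if s >= best:
            prompt, best = cand, s
    return prompt, best

# Toy usage with a synthetic scorer that rewards covering target tokens.
vocab = ["answer", "step", "think", "solve", "carefully", "noise"]
target = {"think", "step", "carefully"}
score = lambda p: len(set(p) & target)
prompt, s = hill_climb_prompt(vocab, score, length=3, iters=500)
```

Because only `score_fn` evaluations are needed, the same loop applies to any API-only model; the research surveyed above replaces this naive proposal step with smarter samplers (Bayesian acquisition functions, RL policies, or zeroth-order gradient estimates) to cut the number of expensive LLM queries.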

Papers