Prompting Based
Prompting-based techniques are changing how we interact with and improve large language models (LLMs) by focusing on crafting effective input prompts to elicit desired outputs and behaviors. Current research explores diverse prompting strategies, including chain-of-thought prompting, few-shot learning, and various prompt-engineering methods, often applied to models such as GPT-3, LLaMA, and Gemini. These approaches are significantly impacting fields like question answering, text generation, and adversarial robustness, offering efficient alternatives to costly model retraining and enabling more fine-grained control over LLM behavior.
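To make the idea concrete, here is a minimal sketch of few-shot chain-of-thought prompting: the prompt embeds one or more worked examples with explicit reasoning steps, nudging the model to reason step by step on a new question before answering. The helper function and exemplar below are illustrative assumptions, not taken from any specific paper listed here.

```python
def build_cot_prompt(question: str) -> str:
    """Assemble a few-shot chain-of-thought prompt for a new question."""
    # Hypothetical exemplar pairing a question with step-by-step reasoning.
    exemplars = [
        (
            "Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
            "How many tennis balls does he have now?",
            "Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
            "5 + 6 = 11. The answer is 11.",
        ),
    ]
    parts = []
    for q, a in exemplars:
        parts.append(f"Q: {q}\nA: Let's think step by step. {a}")
    # The trailing cue invites the model to produce its own reasoning chain.
    parts.append(f"Q: {question}\nA: Let's think step by step.")
    return "\n\n".join(parts)

prompt = build_cot_prompt(
    "If a train travels 60 miles in 1.5 hours, what is its average speed?"
)
```

The resulting string would be sent as-is to any completion-style LLM API; the only change from a plain zero-shot query is the extra text in the prompt, which is what makes prompting an inexpensive alternative to retraining.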