Zero-Shot Prompting
Zero-shot prompting aims to leverage large language models (LLMs) for tasks without any task-specific training data or in-context examples, relying solely on carefully crafted prompts. Current research focuses on improving prompt design, including techniques like zero-shot chain-of-thought prompting and instance-adaptive prompting, to enhance reasoning across diverse tasks such as commonsense reasoning, question answering, and even visual-spatial reasoning. Because it requires no fine-tuning or labeled exemplars, this approach offers a cost-effective way to adapt LLMs to new tasks, with applications in areas such as knowledge graph engineering and systematic review screening, where it can automate parts of the workflow.
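The core idea can be shown with a small prompt-construction sketch. The helper below is illustrative (the function name, prompt format, and task wording are assumptions, not drawn from any specific paper): a zero-shot prompt contains only an instruction and the input, with no worked examples, and the zero-shot chain-of-thought variant simply appends a reasoning trigger such as "Let's think step by step."

```python
def build_zero_shot_prompt(instruction: str, input_text: str, cot: bool = False) -> str:
    """Build a zero-shot prompt: a task instruction plus the input, no exemplars.

    If `cot` is True, append the zero-shot chain-of-thought trigger phrase,
    which has been shown to elicit step-by-step reasoning from LLMs.
    """
    prompt = f"{instruction}\n\nInput: {input_text}\nAnswer:"
    if cot:
        # Zero-shot CoT: add a generic reasoning cue instead of worked examples.
        prompt += " Let's think step by step."
    return prompt


# Example: a classification task posed with no labeled demonstrations.
prompt = build_zero_shot_prompt(
    instruction="Classify the sentiment of the input as positive or negative.",
    input_text="The battery life is excellent, but the screen is dim.",
    cot=True,
)
```

The resulting string would then be sent to an LLM as-is; the contrast with few-shot prompting is simply the absence of any `Input:`/`Answer:` demonstration pairs before the target input.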