LLM Annotation
LLM annotation leverages large language models (LLMs) to automate the labeling of data for various natural language processing (NLP) tasks, aiming to reduce the cost and time associated with manual annotation. Current research focuses on optimizing LLM prompting strategies, developing methods to assess and improve LLM annotation accuracy (e.g., confidence-driven inference, multi-fidelity learning), and exploring the integration of LLM annotations with human expertise in a collaborative workflow. This approach holds significant promise for improving the efficiency and scalability of NLP research and development, particularly in resource-constrained settings and for tasks involving large or complex datasets.
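To make the collaborative workflow concrete, below is a minimal sketch of a prompt-based annotation loop in which confident LLM labels are accepted automatically and low-confidence or unparseable outputs are routed to human reviewers. It is illustrative only: the sentiment task, the JSON response format, the 0.8 threshold, and the `call_llm` stub (standing in for any chat-completion API) are assumptions for the sketch, not details drawn from any particular paper.

```python
"""Minimal sketch of an LLM annotation loop with confidence-based routing.

All names, the prompt format, and the threshold are illustrative assumptions;
`call_llm` is a placeholder for a real chat-completion API call.
"""
import json
import random

LABELS = ("positive", "negative")

PROMPT = (
    "Label the sentiment of the text as 'positive' or 'negative' and give a "
    "confidence between 0 and 1. Respond as JSON: "
    '{{"label": ..., "confidence": ...}}\n\nText: {text}'
)


def call_llm(prompt: str) -> str:
    """Stub standing in for an actual LLM call; returns a plausible JSON answer."""
    return json.dumps(
        {"label": random.choice(LABELS), "confidence": round(random.uniform(0.5, 1.0), 2)}
    )


def annotate(texts, threshold=0.8):
    """Auto-accept confident LLM labels; queue everything else for human review."""
    auto, needs_review = [], []
    for text in texts:
        raw = call_llm(PROMPT.format(text=text))
        try:
            parsed = json.loads(raw)
            label, conf = parsed["label"], float(parsed["confidence"])
        except (json.JSONDecodeError, KeyError, ValueError):
            needs_review.append((text, None))  # unparseable output goes to a human
            continue
        if label in LABELS and conf >= threshold:
            auto.append((text, label, conf))
        else:
            needs_review.append((text, label))
    return auto, needs_review


if __name__ == "__main__":
    auto, review = annotate(["Great paper!", "The results are unconvincing."])
    print(f"auto-labeled: {auto}")
    print(f"sent to human review: {review}")
```

Keeping the model call separate from the parsing and routing logic makes it straightforward to swap in a real API client, a different labeling task, or a stronger confidence-calibration method without changing the overall human-in-the-loop structure.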