LLM-Generated Content
Research on Large Language Model (LLM) generated content focuses on understanding the capabilities, and mitigating the limitations, of LLMs across a range of applications. Current work concentrates on evaluating the quality and biases of LLM outputs, developing methods for detecting LLM-generated text, and improving the alignment between LLM-generated and human-written text, often through retrieval-augmented generation (RAG) and parameter-efficient fine-tuning (PEFT). This research is crucial for the responsible development and deployment of LLMs, addressing ethical concerns and ensuring the reliability of LLM-driven applications across diverse fields, from academic research to healthcare and legal contexts.
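As a rough illustration of the RAG pattern mentioned above, the sketch below retrieves the documents most similar to a question and prepends them to the prompt before generation. The `embed` and `generate` callables are hypothetical placeholders for a real embedding model and LLM API; only the retrieve-then-prompt structure is drawn from the techniques named in this summary.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# `embed` and `generate` are assumed, caller-supplied helpers standing in
# for an embedding model and an LLM; they are not part of any specific library.
import math
from typing import Callable, List, Sequence


def cosine(a: Sequence[float], b: Sequence[float]) -> float:
    # Cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0


def rag_answer(
    question: str,
    documents: List[str],
    embed: Callable[[str], Sequence[float]],
    generate: Callable[[str], str],
    top_k: int = 3,
) -> str:
    # Rank candidate documents by similarity to the question embedding.
    q_vec = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q_vec, embed(d)), reverse=True)

    # Keep the top-k documents as grounding context for the prompt.
    context = "\n\n".join(ranked[:top_k])
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return generate(prompt)
```

In practice the grounding step is what ties the model's output to verifiable sources, which is one of the ways RAG is used to improve the reliability of LLM-generated text discussed above.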
Papers
Paper entries dated October 18, 2023 through February 18, 2024.