LLM Generated
Research on Large Language Model (LLM) generated content focuses on understanding the capabilities of LLMs and mitigating their limitations across various applications. Current efforts concentrate on evaluating the quality and biases of LLM outputs, developing methods for detecting LLM-generated text, and improving the alignment between LLM-generated and human-written text, often employing approaches such as retrieval-augmented generation (RAG) and parameter-efficient fine-tuning (PEFT). This research is crucial for the responsible development and deployment of LLMs, addressing ethical concerns and ensuring the reliability of LLM-driven applications across diverse fields, from academic research to healthcare and legal contexts.
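One detection approach mentioned above is statistical watermarking. A minimal sketch of the idea, under illustrative assumptions (the hash-based green-list partition, the candidate-sampling generator, and all parameters here are hypothetical choices for demonstration, not the method of any specific paper): a watermarking scheme biases generation toward a pseudo-random "green" subset of the vocabulary keyed on the previous token, and a detector tests whether the observed green-token count exceeds what chance would predict.

```python
import hashlib
import math
import random


def is_green(prev_tok: int, tok: int, gamma: float = 0.25) -> bool:
    """Deterministic pseudo-random partition of the vocabulary,
    keyed on the previous token; gamma is the green-list fraction."""
    h = hashlib.sha256(f"{prev_tok}:{tok}".encode()).digest()
    return int.from_bytes(h[:4], "big") / 2**32 < gamma


def watermark_z_score(tokens: list[int], gamma: float = 0.25) -> float:
    """One-sided z-statistic: under the null (no watermark), each bigram
    lands in the green list with probability gamma, so the green count
    is Binomial(n, gamma)."""
    n = len(tokens) - 1
    greens = sum(is_green(p, t, gamma) for p, t in zip(tokens, tokens[1:]))
    return (greens - gamma * n) / math.sqrt(n * gamma * (1 - gamma))


def generate_watermarked(length: int = 200, vocab: int = 1000,
                         gamma: float = 0.25, seed: int = 0) -> list[int]:
    """Toy watermarked generator: sample a few candidate tokens and
    prefer one from the green list when available."""
    rng = random.Random(seed)
    toks = [rng.randrange(vocab)]
    for _ in range(length - 1):
        candidates = [rng.randrange(vocab) for _ in range(8)]
        green = [c for c in candidates if is_green(toks[-1], c, gamma)]
        toks.append(green[0] if green else candidates[0])
    return toks
```

A watermarked sequence yields a large z-score (strong evidence of generation), while ordinary token sequences stay near zero; the detection threshold trades off false positives against detection power.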
Papers
Developing Safe and Responsible Large Language Model: Can We Balance Bias Reduction and Language Understanding in Large Language Models?
Shaina Raza, Oluwanifemi Bamgbose, Shardul Ghuge, Fatemeh Tavakol, Deepak John Reji, Syed Raza Bashir
A Statistical Framework of Watermarks for Large Language Models: Pivot, Detection Efficiency and Optimal Rules
Xiang Li, Feng Ruan, Huiyuan Wang, Qi Long, Weijie J. Su