LLM Generated
Research on Large Language Model (LLM) generated content focuses on understanding the capabilities of LLMs and mitigating their limitations across various applications. Current efforts concentrate on evaluating the quality and biases of LLM outputs, detecting LLM-generated text, and improving the alignment between LLM-generated and human-written text, often using retrieval augmented generation (RAG) and parameter-efficient fine-tuning (PEFT). This research is crucial for the responsible development and deployment of LLMs, addressing ethical concerns and ensuring the reliability of LLM-driven applications across diverse fields, from academic research to healthcare and legal contexts.
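To make the RAG idea mentioned above concrete, here is a minimal sketch of the retrieve-then-prompt pattern. It is illustrative only: the bag-of-words similarity, the toy corpus, and all function names are assumptions for the example; a real system (including the Speculative RAG and Multi-Meta-RAG papers listed below) would use learned dense embeddings and an actual LLM call.

```python
from collections import Counter
import math

def embed(text):
    # Toy bag-of-words "embedding": raw token counts. A real RAG
    # system would use a learned dense embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=1):
    # Rank corpus passages by similarity to the query, keep top-k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k=1):
    # Prepend the retrieved passages so the generator can ground its
    # answer; the assembled prompt would then be sent to an LLM.
    context = "\n".join(retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

# Hypothetical two-passage corpus for demonstration.
corpus = [
    "RAG combines retrieval with generation to ground LLM outputs.",
    "PEFT updates a small subset of model parameters during fine-tuning.",
]
prompt = build_prompt("How does RAG ground LLM outputs?", corpus)
```

The key design point RAG exploits is that grounding happens at the prompt level: the base model is unchanged, and only the retrieved context varies per query, which is why it pairs naturally with PEFT when light adaptation of the model itself is also needed.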
Papers
UICrit: Enhancing Automated Design Evaluation with a UICritique Dataset
Peitong Duan, Chin-yi Chen, Gang Li, Bjoern Hartmann, Yang Li
Speculative RAG: Enhancing Retrieval Augmented Generation through Drafting
Zilong Wang, Zifeng Wang, Long Le, Huaixiu Steven Zheng, Swaroop Mishra, Vincent Perot, Yuwei Zhang, Anush Mattapalli, Ankur Taly, Jingbo Shang, Chen-Yu Lee, Tomas Pfister
Multi-Meta-RAG: Improving RAG for Multi-Hop Queries using Database Filtering with LLM-Extracted Metadata
Mykhailo Poliakov, Nadiya Shvai
Analyzing Diversity in Healthcare LLM Research: A Scientometric Perspective
David Restrepo, Chenwei Wu, Constanza Vásquez-Venegas, João Matos, Jack Gallifant, Leo Anthony Celi, Danielle S. Bitterman, Luis Filipe Nakayama