GPT-Generated Text

Research on GPT-generated text focuses on evaluating and improving the quality and human-likeness of text produced by large language models (LLMs), particularly in specialized domains such as poetry generation and radiology report writing. Current efforts include developing evaluation metrics that align more closely with human judgment and exploring techniques that improve the coherence and stylistic nuance of generated text, such as fine-tuning LLMs with task-specific parameters and applying divergent N-gram analysis to detect machine-generated text. These advances improve AI-assisted writing tools while raising important questions about the detection and attribution of AI-generated content.
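The core intuition behind divergence-based detection methods is that when a model regenerates a truncated text, its continuations overlap much more heavily (in shared N-grams) with the true continuation if that text was itself model-generated. A minimal sketch of such an overlap score is below; the function names (`ngrams`, `overlap_score`) and the simple averaged-overlap formula are illustrative assumptions, not the exact metric from any particular paper, and an actual detector would also need an LLM to produce the regenerated continuations.

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def overlap_score(reference, continuations, n=3):
    """Average fraction of the reference continuation's n-grams
    that also appear in each regenerated continuation.

    A high score suggests the model reproduces the original
    continuation closely, hinting the text was model-generated.
    """
    ref = Counter(ngrams(reference.split(), n))
    total = sum(ref.values())
    if total == 0:
        return 0.0
    scores = []
    for cont in continuations:
        cand = Counter(ngrams(cont.split(), n))
        # Clipped overlap: each reference n-gram counts at most
        # as often as it occurs in the candidate.
        hit = sum(min(count, cand[g]) for g, count in ref.items())
        scores.append(hit / total)
    return sum(scores) / len(scores)
```

In use, one would truncate a suspect document, ask the candidate model for several continuations, and compare `overlap_score` against a threshold calibrated on known human-written and model-written text.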

Papers