Generative Pre-Trained Transformer
Generative Pre-trained Transformers (GPTs) are large language models designed to generate human-like text and, increasingly, other data modalities. Current research focuses on improving GPT performance across diverse applications, including code generation, scientific modeling (e.g., seismic velocity modeling, cellular automata simulation), and healthcare (e.g., EHR generation, medical image analysis), often employing techniques such as retrieval-augmented generation and in-context learning to enhance capabilities. The ability of GPTs to process and generate varied data types makes them a powerful tool with significant implications for numerous fields, driving advances in both scientific understanding and practical applications.
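To make the retrieval-augmented generation (RAG) idea mentioned above concrete, the sketch below shows the core pattern: retrieve relevant passages for a query, then prepend them as context to the prompt sent to a language model. The corpus, overlap-based scoring, and prompt template are illustrative assumptions for this sketch, not the implementation used by any of the systems surveyed here.

```python
# Minimal RAG prompt-assembly sketch. All names (tokenize, retrieve,
# build_prompt) and the toy corpus are hypothetical, for illustration only.

def tokenize(text):
    """Lowercase whitespace tokenization (a stand-in for a real retriever's embeddings)."""
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank passages by token overlap with the query; return the top k."""
    scored = sorted(
        corpus,
        key=lambda doc: len(tokenize(query) & tokenize(doc)),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query, corpus, k=2):
    """Prepend the retrieved passages as context before the user's question."""
    context = "\n".join(f"- {doc}" for doc in retrieve(query, corpus, k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "GPT models generate text one token at a time.",
    "Seismic velocity modeling estimates subsurface wave speeds.",
    "Electronic health records store patient histories.",
]
prompt = build_prompt("How do GPT models generate text?", corpus, k=1)
```

In a production system the overlap score would be replaced by dense-vector similarity and the assembled prompt passed to the GPT, but the assemble-context-then-generate structure is the same.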
Papers
BiomedGPT: A Unified and Generalist Biomedical Generative Pre-trained Transformer for Vision, Language, and Multimodal Tasks
Kai Zhang, Jun Yu, Eashan Adhikarla, Rong Zhou, Zhiling Yan, Yixin Liu, Zhengliang Liu, Lifang He, Brian Davison, Xiang Li, Hui Ren, Sunyang Fu, James Zou, Wei Liu, Jing Huang, Chen Chen, Yuyin Zhou, Tianming Liu, Xun Chen, Yong Chen, Quanzheng Li, Hongfang Liu, Lichao Sun
Distinguishing Human Generated Text From ChatGPT Generated Text Using Machine Learning
Niful Islam, Debopom Sutradhar, Humaira Noor, Jarin Tasnim Raya, Monowara Tabassum Maisha, Dewan Md Farid