NLP Research
Natural Language Processing (NLP) research focuses on enabling computers to understand, interpret, and generate human language. Current efforts concentrate on improving the performance and efficiency of large language models (LLMs), typically built on transformer architectures, across diverse tasks such as text classification, translation, and question answering, while also addressing issues of bias, fairness, and reproducibility. The field is crucial both for advancing scientific understanding of language and for developing practical applications in domains including education, healthcare, and information retrieval. A significant challenge lies in bridging the gap between benchmark performance and real-world user needs, which calls for a shift toward more ecologically valid evaluation methods.
Papers
Advancing Prompt Recovery in NLP: A Deep Dive into the Integration of Gemma-2b-it and Phi2 Models
Jianlong Chen, Wei Xu, Zhicheng Ding, Jinxin Xu, Hao Yan, Xinyu Zhang
Large Language Model as an Assignment Evaluator: Insights, Feedback, and Challenges in a 1000+ Student Course
Cheng-Han Chiang, Wei-Chih Chen, Chun-Yi Kuan, Chienchou Yang, Hung-yi Lee