NLP Field
Natural Language Processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. Current research emphasizes improving model performance across diverse tasks, including question answering, text classification, and information extraction, often by leveraging large language models (LLMs) and transformer architectures. These advances are significantly impacting fields ranging from healthcare (e.g., dementia detection, clinical data analysis) and law (e.g., document processing, legal reasoning) to education and cybersecurity, by automating tasks and providing new analytical capabilities. A key challenge remains ensuring fairness, mitigating bias, and addressing privacy concerns within these powerful models.
Papers
ModelGPT: Unleashing LLM's Capabilities for Tailored Model Generation
Zihao Tang, Zheqi Lv, Shengyu Zhang, Fei Wu, Kun Kuang
Rethinking the Roles of Large Language Models in Chinese Grammatical Error Correction
Yinghui Li, Shang Qin, Haojing Huang, Yangning Li, Libo Qin, Xuming Hu, Wenhao Jiang, Hai-Tao Zheng, Philip S. Yu
Solving the Right Problem is Key for Translational NLP: A Case Study in UMLS Vocabulary Insertion
Bernal Jimenez Gutierrez, Yuqing Mao, Vinh Nguyen, Kin Wah Fung, Yu Su, Olivier Bodenreider
nlpBDpatriots at BLP-2023 Task 2: A Transfer Learning Approach to Bangla Sentiment Analysis
Dhiman Goswami, Md Nishat Raihan, Sadiya Sayara Chowdhury Puspo, Marcos Zampieri
nlpBDpatriots at BLP-2023 Task 1: A Two-Step Classification for Violence Inciting Text Detection in Bangla
Md Nishat Raihan, Dhiman Goswami, Sadiya Sayara Chowdhury Puspo, Marcos Zampieri