NLP Model
Natural Language Processing (NLP) models aim to enable computers to understand, interpret, and generate human language. Current research focuses on improving model robustness to noisy or user-generated text, enhancing explainability and interpretability through techniques such as counterfactual explanations and latent concept attribution, and addressing concerns around fairness, bias, and privacy. These advances are crucial for building reliable and trustworthy NLP systems with broad applications across domains including legal tech, healthcare, and social media analysis.
Papers
WYWEB: A NLP Evaluation Benchmark For Classical Chinese
Bo Zhou, Qianglong Chen, Tianyu Wang, Xiaomi Zhong, Yin Zhang
Preserving Knowledge Invariance: Rethinking Robustness Evaluation of Open Information Extraction
Ji Qi, Chuchun Zhang, Xiaozhi Wang, Kaisheng Zeng, Jifan Yu, Jinxin Liu, Jiuding Sun, Yuxiang Chen, Lei Hou, Juanzi Li, Bin Xu