Large Relevance Improvement
Research on large relevance improvement aims to boost the performance and efficiency of existing systems by optimizing their models and algorithms rather than replacing them. Current work centers on refining architectures such as transformers and convolutional neural networks, using techniques like parameter-efficient fine-tuning, dynamic loss weighting, and ensemble learning to improve accuracy, training stability, and generalization. These advances matter across computer vision, natural language processing, reinforcement learning, and scientific computing, yielding more robust solutions in applications ranging from autonomous vehicles to biomedical image analysis.
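To make one of the named techniques concrete, here is a minimal sketch of dynamic loss weighting for multi-task training. The function name and the ratio-based update rule are illustrative assumptions, not taken from any paper in this collection; real schemes (e.g. uncertainty weighting or GradNorm-style balancing) use more elaborate signals.

```python
def dynamic_loss_weights(loss_history, eps=1e-8):
    """Compute per-task weights from (previous_loss, current_loss) pairs.

    Tasks whose loss is shrinking slowly get a larger weight, so the
    optimizer re-focuses on them. Weights are normalized to sum to the
    number of tasks. This is an illustrative rule, not a specific paper's.
    """
    # Ratio close to 1.0 means little progress; small ratio means fast progress.
    ratios = [curr / (prev + eps) for prev, curr in loss_history]
    total = sum(ratios) + eps
    k = len(ratios)
    return [k * r / total for r in ratios]

# Two tasks: task 0's loss halved (1.0 -> 0.5); task 1's stayed flat (1.0 -> 1.0).
weights = dynamic_loss_weights([(1.0, 0.5), (1.0, 1.0)])
# The slower-improving task 1 receives the larger weight.
```

In a training loop, these weights would multiply each task's loss term before backpropagation and be recomputed every step or epoch from the latest loss values.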
Papers
Improving ICD-based semantic similarity by accounting for varying degrees of comorbidity
Jan Janosch Schneider, Marius Adler, Christoph Ammer-Herrmenau, Alexander Otto König, Ulrich Sax, Jonas Hügel
Automated Testing and Improvement of Named Entity Recognition Systems
Boxi Yu, Yiyan Hu, Qiuyang Mang, Wenhan Hu, Pinjia He