Related Task
Research on related tasks aims to improve the efficiency and effectiveness of machine learning models across diverse applications. Current efforts center on novel algorithms and architectures, such as structured sparsity in multi-task learning and knowledge distillation in end-to-end models, to address data scarcity, computational cost, and limited generalization. These advances improve performance on tasks in natural language processing, computer vision, and robotics, yielding more robust and efficient AI systems, with implications for fields ranging from healthcare and finance to manufacturing and environmental monitoring.
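As a concrete illustration of one technique named above, the following is a minimal sketch of a knowledge-distillation loss: the student is trained to match the teacher's temperature-softened output distribution via KL divergence. The function names and the temperature value are illustrative, not taken from any of the papers listed below.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T yields softer distributions."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()  # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) over temperature-softened distributions.

    The T*T factor keeps gradient magnitudes comparable across temperatures,
    as is common in distillation setups (illustrative sketch, not a specific
    paper's formulation).
    """
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)  # student's softened predictions
    return float(T * T * np.sum(p * (np.log(p) - np.log(q))))
```

When the student's logits match the teacher's exactly, the loss is zero; any mismatch in the softened distributions yields a positive penalty.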
Papers
Generating Accurate and Faithful Discharge Instructions: Task, Dataset, and Model
Fenglin Liu, Bang Yang, Chenyu You, Xian Wu, Shen Ge, Zhangdaihong Liu, Xu Sun, Yang Yang, David A. Clifton
Mapping Process for the Task: Wikidata Statements to Text as Wikipedia Sentences
Hoang Thang Ta, Alexander Gelbukh, Grigori Sidorov
Cross-document Event Coreference Search: Task, Dataset and Modeling
Alon Eirew, Avi Caciularu, Ido Dagan
EventGraph at CASE 2021 Task 1: A General Graph-based Approach to Protest Event Extraction
Huiling You, David Samuel, Samia Touileb, Lilja Øvrelid
A Hybrid System of Sound Event Detection Transformer and Frame-wise Model for DCASE 2022 Task 4
Yiming Li, Zhifang Guo, Zhirong Ye, Xiangdong Wang, Hong Liu, Yueliang Qian, Rui Tao, Long Yan, Kazushige Ouchi