KDD Cup
KDD Cup competitions serve as a long-running benchmark for evaluating advances across data science and machine learning. Recent editions have focused on improving Large Language Model (LLM) performance on tasks such as e-commerce question answering and multi-document question answering, with winning entries often relying on instruction tuning, quantization, and ensembling, alongside architectures such as BERT and various Graph Neural Networks (GNNs). These challenges drive innovation in model design and training strategy, yielding gains in performance and efficiency for real-world applications such as personalized search, conversational AI, and wind power forecasting. Because many winning solutions are released as open source, their techniques spread quickly through the broader research community.
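Quantization in particular is often what lets a competitive LLM fit within a challenge's memory and latency limits. As a rough illustration only (the model name, prompt, and generation settings below are placeholders, not taken from any specific winning solution), a 4-bit quantized causal LM might be loaded with Hugging Face transformers and bitsandbytes like this:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder model name; any causal LM from the Hugging Face Hub works here.
MODEL_NAME = "meta-llama/Llama-2-7b-hf"

# 4-bit quantization: weights stored in NF4, matmuls computed in fp16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME,
    quantization_config=bnb_config,
    device_map="auto",  # spread layers across available devices
)

# Toy e-commerce-QA style query to show the quantized model in use.
prompt = "Question: Is this laptop compatible with a USB-C docking station?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Competition entries typically combine a quantized backbone like this with instruction tuning and ensembling over multiple checkpoints or prompts.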
Papers
Yunshan Cup 2020: Overview of the Part-of-Speech Tagging Task for Low-resourced Languages
Yingwen Fu, Jinyi Chen, Nankai Lin, Xixuan Huang, Xinying Qiu, Shengyi Jiang
Bridging the Gap of AutoGraph between Academia and Industry: Analysing AutoGraph Challenge at KDD Cup 2020
Zhen Xu, Lanning Wei, Huan Zhao, Rex Ying, Quanming Yao, Wei-Wei Tu, Isabelle Guyon