SemEval 2024 Tasks
SemEval 2024 comprised a series of shared tasks aimed at advancing natural language processing (NLP), particularly in challenging areas such as commonsense reasoning, biomedical text understanding, and machine-generated text detection. Participating systems relied heavily on pretrained language models such as BERT and RoBERTa as well as large language models (LLMs), often incorporating techniques like chain-of-thought prompting, data augmentation, and in-context learning to improve performance across tasks. These advances contribute to a broader understanding of LLM capabilities and limitations, with implications for applications ranging from clinical decision support to combating misinformation.
Papers
OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination Detection with Weakly Supervised Data
Chengcheng Wei, Ze Chen, Songtan Fang, Jiarong He, Max Gao
UMBCLU at SemEval-2024 Task 1A and 1C: Semantic Textual Relatedness with and without machine translation
Shubhashis Roy Dipta, Sai Vallurupalli
Team QUST at SemEval-2024 Task 8: A Comprehensive Study of Monolingual and Multilingual Approaches for Detecting AI-generated Text
Xiaoman Xu, Xiangrun Li, Taihang Wang, Jianxiang Tian, Ye Jiang
HU at SemEval-2024 Task 8A: Can Contrastive Learning Learn Embeddings to Detect Machine-Generated Text?
Shubhashis Roy Dipta, Sadat Shahriar
RFBES at SemEval-2024 Task 8: Investigating Syntactic and Semantic Features for Distinguishing AI-Generated and Human-Written Texts
Mohammad Heydari Rad, Farhan Farsi, Shayan Bali, Romina Etezadi, Mehrnoush Shamsfard