Paper ID: 2203.07648

Contrastive Learning of Sociopragmatic Meaning in Social Media

Chiyu Zhang, Muhammad Abdul-Mageed, Ganesh Jawahar

Recent progress in representation and contrastive learning in NLP has not widely considered the class of \textit{sociopragmatic meaning} (i.e., meaning in interaction within different language communities). To bridge this gap, we propose a novel framework for learning task-agnostic representations transferable to a wide range of sociopragmatic tasks (e.g., emotion, hate speech, humor, sarcasm). Our framework outperforms other contrastive learning frameworks on both in-domain and out-of-domain data, in both the general and few-shot settings. For example, compared to two popular pre-trained language models, our method obtains an improvement of $11.66$ average $F_1$ across $16$ datasets when fine-tuned on only $20$ training samples per dataset. Our code is available at: https://github.com/UBC-NLP/infodcl

Submitted: Mar 15, 2022