Code Mixed

Code-mixing, the blending of multiple languages within a single text or conversation, is a prevalent linguistic phenomenon that is increasingly studied in natural language processing (NLP). Current research focuses on developing robust models, often by fine-tuning transformer architectures such as BERT and its multilingual variants, for tasks like sentiment analysis, hate speech detection, and machine translation on code-mixed data, while addressing data scarcity through techniques such as synthetic data generation and transfer learning. This research is significant for improving cross-lingual communication and for building more inclusive NLP systems capable of understanding and generating text in diverse multilingual contexts, with applications ranging from social media monitoring to improved human-computer interaction.
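The sketch below illustrates the common recipe mentioned above: fine-tuning a multilingual transformer encoder for sentiment analysis on code-mixed text, using the Hugging Face transformers library. The model name, label scheme, and the Hinglish example sentences are illustrative assumptions, not drawn from any specific paper listed here.

```python
# Minimal sketch: fine-tune a multilingual encoder for code-mixed sentiment analysis.
# Assumptions: xlm-roberta-base as the backbone (mBERT, "bert-base-multilingual-cased",
# is a common alternative) and a 3-way label set (0=negative, 1=neutral, 2=positive).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=3)

# Toy Hinglish (Hindi-English) code-mixed examples in Latin script.
texts = [
    "Movie bahut achhi thi, totally worth it!",          # positive
    "Yaar, service itni slow thi, very disappointing.",  # negative
]
labels = torch.tensor([2, 0])

batch = tokenizer(texts, padding=True, truncation=True, max_length=128, return_tensors="pt")

# One gradient step for illustration; a real setup iterates over a labeled code-mixed corpus.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
loss = model(**batch, labels=labels).loss
loss.backward()
optimizer.step()

# Inference on a new code-mixed sentence.
model.eval()
with torch.no_grad():
    logits = model(**tokenizer("Kya mast weekend tha!", return_tensors="pt")).logits
print(logits.softmax(dim=-1))
```

Starting from a multilingual pretrained encoder is one way the transfer-learning strategy above is typically realized, since the backbone has already seen both of the mixed languages during pretraining.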

Papers