Illicit Content

Illicit content detection focuses on automatically identifying and removing harmful materials, such as child sexual abuse material (CSAM) and content promoting unsafe online games, from a range of platforms including social media and the dark web. Current research emphasizes robust machine learning models, such as end-to-end classifiers, Siamese neural networks for few-shot learning, and large vision-language models, to improve both accuracy and interpretability when identifying diverse forms of illicit content. These advances are crucial for enhancing online safety, supporting law enforcement, and mitigating the harms caused by the spread of illegal material. The field also explores early detection of illicit activity in areas like cryptocurrency transactions, leveraging techniques such as decision-tree-based feature selection and attention mechanisms to improve both speed and interpretability.
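To make the few-shot idea concrete, the sketch below shows the core mechanic of a Siamese setup: one shared embedding function applied to both inputs, with classification by similarity to a handful of labelled exemplars. This is purely illustrative and not from any cited paper; the trained network is stood in for by a fixed random linear map, and all names (`embed`, `classify`, the toy exemplars) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a trained embedding network: in a real Siamese model this
# weight matrix would be learned with a contrastive or triplet loss.
W = rng.normal(size=(8, 16))

def embed(x):
    """Shared embedding tower: linear projection + L2 normalisation."""
    z = W @ x
    return z / np.linalg.norm(z)

def similarity(a, b):
    """Cosine similarity between the two embedded inputs (the 'twin' towers)."""
    return float(embed(a) @ embed(b))

def classify(query, exemplars):
    """Few-shot classification: pick the label whose exemplar is most similar."""
    return max(exemplars, key=lambda label: similarity(query, exemplars[label]))

# Toy data: a query that is a small perturbation of the "benign" exemplar
# should be matched to it rather than to the unrelated "illicit" exemplar.
benign = rng.normal(size=16)
illicit = rng.normal(size=16)
query = benign + 0.01 * rng.normal(size=16)
print(classify(query, {"benign": benign, "illicit": illicit}))
```

The appeal of this structure for illicit content detection is that adding a new category only requires a few labelled exemplars, not retraining the whole classifier.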

Papers