Paper ID: 2112.11478
LSH methods for data deduplication in a Wikipedia artificial dataset
Juan Ciro, Daniel Galvez, Tim Schlippe, David Kanter
This paper illustrates locality sensitive hashing (LSH) models for the identification and removal of near-duplicate data in a text dataset. To evaluate the different models, we create an artificial dataset for data deduplication using English Wikipedia articles. Area-Under-Curve (AUC) values above 0.9 were observed for most models, with the best model reaching 0.96. Deduplication enables more effective model training by preventing the model from learning a distribution that, because of repeated data, differs from the real one.
Submitted: Dec 10, 2021
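
As a minimal sketch of the kind of LSH-based near-duplicate detection the abstract describes, the snippet below uses MinHash signatures with an LSH index via the `datasketch` library; the library choice, the Jaccard threshold, and the toy documents are illustrative assumptions, not the paper's actual implementation.

```python
# Sketch of MinHash-based LSH near-duplicate detection.
# `datasketch`, the 0.8 threshold, and the toy documents are assumptions
# for illustration; the paper does not specify this tooling.
from datasketch import MinHash, MinHashLSH

def minhash(text, num_perm=128):
    """Build a MinHash signature from whitespace-tokenized text."""
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf8"))
    return m

documents = {
    "doc1": "locality sensitive hashing finds near duplicate articles",
    "doc2": "locality sensitive hashing finds nearly duplicate articles",
    "doc3": "an entirely unrelated wikipedia passage about astronomy",
}

# Index all documents; the threshold approximates minimum Jaccard similarity.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
signatures = {key: minhash(text) for key, text in documents.items()}
for key, sig in signatures.items():
    lsh.insert(key, sig)

# Query each document; any hit other than itself is flagged as a near duplicate.
for key, sig in signatures.items():
    duplicates = [other for other in lsh.query(sig) if other != key]
    if duplicates:
        print(f"{key} is a near duplicate of {duplicates}")
```

In this sketch, doc1 and doc2 would typically report each other as near duplicates, while doc3 would not; sweeping the similarity threshold is what would produce the ROC curves behind AUC figures like those reported in the abstract.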