Paper ID: 2411.06068
Zyda-2: a 5 Trillion Token High-Quality Dataset
Yury Tokpanov, Paolo Glorioso, Quentin Anthony, Beren Millidge
In this technical report, we present Zyda-2: a five-trillion-token dataset for language model pretraining. Zyda-2 was used to train our Zamba2 series of models, which are state-of-the-art for their weight class. We build Zyda-2 by collating tokens from high-quality open-source datasets such as FineWeb and DCLM, then distilling them to the highest-quality subset via cross-deduplication and model-based quality filtering. Zyda-2 is released under a permissive open license and is available at this https URL.
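The abstract describes the pipeline only at a high level. As an illustrative sketch of the cross-deduplication step (not the authors' implementation, which the full report details), one common approach is MinHash locality-sensitive hashing: index the documents of one source dataset, then discard documents from another source that collide above a Jaccard-similarity threshold. All names below (shingles, signature, source_a, source_b) are hypothetical placeholders.

    from datasketch import MinHash, MinHashLSH

    def shingles(text: str, k: int = 5) -> set:
        """Word k-grams used as the unit of document similarity."""
        words = text.split()
        if len(words) <= k:
            return {" ".join(words)}
        return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

    def signature(text: str, num_perm: int = 128) -> MinHash:
        """MinHash signature over a document's shingle set."""
        m = MinHash(num_perm=num_perm)
        for s in shingles(text):
            m.update(s.encode("utf-8"))
        return m

    # Hypothetical stand-ins for documents drawn from two source datasets.
    source_a = ["the quick brown fox jumps over the lazy dog by the river"]
    source_b = [
        "the quick brown fox jumps over the lazy dog by the river",  # near-duplicate
        "an entirely different document about language model pretraining data",
    ]

    # Index one source, then drop documents from the other source that
    # collide with it above the similarity threshold.
    lsh = MinHashLSH(threshold=0.8, num_perm=128)
    for i, doc in enumerate(source_a):
        lsh.insert(f"a-{i}", signature(doc))

    kept = [doc for doc in source_b if not lsh.query(signature(doc))]
    print(f"kept {len(kept)} of {len(source_b)} documents")  # kept 1 of 2

A threshold around 0.8 trades recall of near-duplicates against false positives; production pipelines at this scale typically shard the LSH index across machines rather than holding it in memory as this toy example does.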
Submitted: Nov 9, 2024