Paper ID: 2407.10115
A Bag of Tricks for Scaling CPU-based Deep FFMs to more than 300m Predictions per Second
Blaž Škrlj, Benjamin Ben-Shalom, Grega Gašperšič, Adi Schwartz, Ramzi Hoseisi, Naama Ziporin, Davorin Kopič, Andraž Tori
Field-aware Factorization Machines (FFMs) have emerged as a powerful model for click-through rate prediction, particularly excelling at capturing complex feature interactions. In this work, we present an in-depth analysis of our in-house, Rust-based Deep FFM implementation and detail its deployment at a CPU-only, multi-data-center scale. We overview key optimizations devised for both training and inference, demonstrated by previously unpublished benchmark results in efficient model search and online training. Further, we detail an in-house weight quantization scheme that reduced the bandwidth footprint of weight transfers across data centers by more than an order of magnitude. We release the engine and associated techniques under an open-source license to contribute to the broader machine learning community. This paper showcases one of the first successful CPU-only deployments of Deep FFMs at such scale, marking a significant stride in practical, low-footprint click-through rate prediction methodologies.
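As an illustration of how weight quantization can shrink the payload shipped between data centers, the following is a minimal Rust sketch assuming a simple per-chunk affine 8-bit quantization (one scale/offset pair plus one byte per weight). This is a hypothetical example, not the paper's actual in-house scheme, which may differ in precision, layout, and error handling.

```rust
// Illustrative sketch: per-chunk affine 8-bit quantization of f32 weights
// for cross-data-center transfer. Hypothetical; not the paper's exact scheme.

/// Quantized chunk: one (min, scale) pair plus one byte per weight,
/// i.e. roughly a 4x raw size reduction before any further compression.
struct QuantizedChunk {
    min: f32,
    scale: f32,
    codes: Vec<u8>,
}

fn quantize_chunk(weights: &[f32]) -> QuantizedChunk {
    let min = weights.iter().cloned().fold(f32::INFINITY, f32::min);
    let max = weights.iter().cloned().fold(f32::NEG_INFINITY, f32::max);
    // Guard against division by zero for constant chunks.
    let scale = if max > min { (max - min) / 255.0 } else { 1.0 };
    let codes = weights
        .iter()
        .map(|w| ((w - min) / scale).round().clamp(0.0, 255.0) as u8)
        .collect();
    QuantizedChunk { min, scale, codes }
}

fn dequantize_chunk(chunk: &QuantizedChunk) -> Vec<f32> {
    chunk
        .codes
        .iter()
        .map(|&c| chunk.min + (c as f32) * chunk.scale)
        .collect()
}

fn main() {
    let weights = vec![0.013, -0.042, 0.101, 0.0, -0.077];
    let q = quantize_chunk(&weights);
    let restored = dequantize_chunk(&q);
    println!("original: {:?}", weights);
    println!("restored: {:?}", restored);
}
```

In practice, the quantized byte stream can be compressed further before transfer; combined with lower-precision codes, this is how reductions beyond the raw 4x become attainable.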
Submitted: Jul 14, 2024