Paper ID: 2312.14406
Generative Pretraining at Scale: Transformer-Based Encoding of Transactional Behavior for Fraud Detection
Ze Yu Zhao, Zheng Zhu, Guilin Li, Wenhan Wang, Bo Wang
In this work, we introduce an autoregressive model built on Generative Pretrained Transformer (GPT) architectures and tailored for fraud detection in payment systems. Our approach confronts token explosion and reconstructs behavioral sequences, yielding a nuanced understanding of transactional behavior through temporal and contextual analysis. Through unsupervised pretraining, the model learns strong feature representations without the need for labeled data. We further integrate a differential convolutional approach to enhance anomaly detection, strengthening the security and efficacy of one of the largest online payment merchants in China. The scalability and adaptability of our model promise broad applicability across transactional contexts.
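As a rough illustration of the core idea (not the authors' implementation), the sketch below shows GPT-style unsupervised pretraining over tokenized transaction sequences: each transaction is assumed to be flattened into discrete field tokens (e.g., bucketed amount, merchant category, hour of day), and a causal transformer is trained with next-token prediction, requiring no fraud labels. The class name `TransactionGPT`, the vocabulary size, and the tokenization scheme are all illustrative assumptions.

```python
# Minimal sketch of autoregressive pretraining on transaction-field tokens.
# Assumption: numeric fields (e.g., amounts) are bucketed into discrete ids
# to keep the vocabulary bounded, addressing the token-explosion concern.
import torch
import torch.nn as nn

class TransactionGPT(nn.Module):
    def __init__(self, vocab_size=10_000, d_model=256, n_heads=8,
                 n_layers=4, max_len=512):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        B, T = tokens.shape
        pos = torch.arange(T, device=tokens.device)
        x = self.tok_emb(tokens) + self.pos_emb(pos)
        # Causal mask enforces the autoregressive (GPT-style) factorization.
        causal = nn.Transformer.generate_square_subsequent_mask(T).to(tokens.device)
        h = self.encoder(x, mask=causal)
        return self.head(h)

# Unsupervised objective: predict the next field token in the behavioral
# sequence; no fraud labels are involved at pretraining time.
model = TransactionGPT()
tokens = torch.randint(0, 10_000, (2, 128))  # toy batch of token ids
logits = model(tokens[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1))
loss.backward()
```

Under this reading, the pretrained sequence representations would then feed a downstream anomaly-detection stage (the paper's differential convolutional component); the sketch covers only the pretraining step.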
Submitted: Dec 22, 2023