Paper ID: 2112.15210

Persformer: A Transformer Architecture for Topological Machine Learning

Raphael Reinauer, Matteo Caorsi, Nicolas Berkouk

One of the main challenges of Topological Data Analysis (TDA) is to extract features from persistence diagrams that are directly usable by machine learning algorithms. Indeed, persistence diagrams are intrinsically (multi-)sets of points in $\mathbb{R}^2$ and cannot straightforwardly be viewed as vectors. In this article, we introduce $\texttt{Persformer}$, the first Transformer neural network architecture that accepts persistence diagrams as input. The $\texttt{Persformer}$ architecture significantly outperforms previous topological neural network architectures on classical synthetic and graph benchmark datasets. Moreover, it satisfies a universal approximation theorem. This allows us to introduce the first interpretability method for topological machine learning, which we explore in two examples.
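To make the set-input idea concrete, here is a minimal PyTorch sketch of a Transformer classifier over persistence diagrams. It is not the paper's exact architecture: the class name `PersformerSketch`, all hyperparameters, and the masked mean pooling are illustrative assumptions. The key point it demonstrates is that a diagram is a (multi-)set of (birth, death) points, so no positional encoding is used; self-attention is then permutation-equivariant and the pooled output is permutation-invariant.

```python
import torch
import torch.nn as nn

class PersformerSketch(nn.Module):
    """Toy Transformer encoder over persistence diagrams (illustrative only).

    Each diagram is a (multi-)set of (birth, death) points in R^2, so we
    deliberately omit positional encodings: attention then treats the input
    as a set, and pooling makes the output order-independent.
    """

    def __init__(self, d_model=64, n_heads=4, n_layers=2, n_classes=5):
        super().__init__()
        self.embed = nn.Linear(2, d_model)  # lift (birth, death) pairs to d_model
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, diagrams, padding_mask=None):
        # diagrams: (batch, n_points, 2); padding_mask: (batch, n_points), True = pad
        x = self.encoder(self.embed(diagrams), src_key_padding_mask=padding_mask)
        if padding_mask is not None:
            keep = (~padding_mask).unsqueeze(-1).float()
            x = (x * keep).sum(dim=1) / keep.sum(dim=1).clamp(min=1)  # masked mean pool
        else:
            x = x.mean(dim=1)  # plain mean pool over the point dimension
        return self.classifier(x)

# Usage: a batch of two diagrams, zero-padded to 10 points each.
model = PersformerSketch()
pts = torch.rand(2, 10, 2)
mask = torch.zeros(2, 10, dtype=torch.bool)
mask[0, 7:] = True  # the first diagram has only 7 real points
logits = model(pts, mask)
print(logits.shape)  # torch.Size([2, 5])
```

Since diagrams in a batch have varying numbers of points, padding plus a key-padding mask (as above) is one standard way to batch them; the mask keeps padded points out of both attention and pooling.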

Submitted: Dec 30, 2021