Learning Probabilistic Symmetrization

Learning probabilistic symmetrization develops methods for making neural networks equivariant to group symmetries, meaning their outputs transform predictably under group actions. Current research explores diverse approaches, including optimizing over group orbits to achieve equivariance for various groups (e.g., rotation, Lorentz) and using small equivariant networks to parameterize a probabilistic symmetrization of arbitrary base models such as MLPs or transformers. This line of work is significant because it leverages the power of standard architectures while enforcing the desired symmetries, improving performance and generalization on structured data such as point clouds and graphs, and potentially advancing fields like quantum physics through efficient antisymmetrization techniques.
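To make the core idea concrete, here is a minimal, hypothetical sketch of symmetrization by group averaging: the symmetrized model computes f_sym(x) = E_{g ~ p(g|x)}[g⁻¹ f(g·x)], which is equivariant whenever the distribution p(g|x) is itself equivariant, the uniform distribution over a finite group being the simplest case. The sketch below uses plain NumPy, a fixed random linear map standing in for an arbitrary base model, and the four-element planar rotation group C4; the names `base_model` and `symmetrize` are illustrative, not taken from any paper's codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical non-equivariant base model: a fixed random linear map
# followed by a nonlinearity, standing in for an MLP or transformer.
W = rng.standard_normal((2, 2))
def base_model(x):
    return np.tanh(W @ x)

# Finite group for illustration: C4, the planar rotations by multiples of 90 degrees.
def rotation(k):
    theta = k * np.pi / 2
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def symmetrize(f, x):
    # Exact group averaging: f_sym(x) = (1/|G|) * sum_g g^{-1} f(g x).
    # Probabilistic symmetrization replaces this exact sum with Monte Carlo
    # samples g ~ p(g|x), where p is learned by a small equivariant network.
    return sum(rotation(-k) @ f(rotation(k) @ x) for k in range(4)) / 4

x = np.array([1.0, 0.5])
h = rotation(1)  # act on the input with a 90-degree rotation

# Equivariance check: symmetrize(f, h x) equals h @ symmetrize(f, x).
print(np.allclose(symmetrize(base_model, h @ x), h @ symmetrize(base_model, x)))  # True
```

Exact averaging over all of C4 yields equivariance by construction; the probabilistic variants surveyed here replace the exact sum with samples from a learned distribution, which keeps the cost manageable for large or continuous groups.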

Papers