Paper ID: 2407.02263
FreeCG: Free the Design Space of Clebsch-Gordan Transform for Machine Learning Force Fields
Shihao Shao, Haoran Geng, Zun Wang, Qinghua Cui
Machine Learning Force Fields (MLFFs) are of great importance for chemistry, physics, materials science, and many other related fields. The Clebsch-Gordan transform (CG transform) effectively encodes many-body interactions and is thus an important building block for many MLFF models. However, the permutation-equivariance requirement of MLFFs limits the design space of the CG transform: an intensive CG transform has to be performed for each neighboring edge, and the operations must be carried out identically across all edges. This constraint reduces the expressiveness of the model while increasing its computational demands. To overcome this challenge, we first implement the CG transform layer on permutation-invariant abstract edges generated from real edge information. We show that this approach allows complete freedom in the design of the layer without compromising the crucial symmetry. Building on this free design space, we further propose a group CG transform with sparse paths, abstract edge shuffling, and an attention enhancer to form a powerful and efficient CG transform layer. Our method, known as FreeCG, achieves state-of-the-art (SOTA) results in force prediction on MD17, rMD17, and MD22, and extends well to property prediction on the QM9 dataset, with several improvements exceeding 15% and the largest exceeding 20%. Extensive real-world applications demonstrate its high practicality. FreeCG introduces a novel paradigm for carrying out efficient and expressive CG transforms in future geometric neural network designs. To demonstrate this, the recent SOTA model QuinNet is also enhanced under our paradigm. Code will be publicly available.
Submitted: Jul 2, 2024
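
The key claim in the abstract is that pooling real neighbor edges into permutation-invariant "abstract edges" frees the design of the subsequent CG transform layer. The sketch below is a hypothetical PyTorch illustration of that invariance argument only, not the authors' implementation; the class name, attention-style pooling, and dimensions are assumptions for illustration.

```python
# Hypothetical sketch (PyTorch), not the FreeCG implementation.
# Idea from the abstract: pool a variable set of real neighbor edges into a
# fixed number of permutation-invariant abstract edges; any subsequent
# operation (e.g., a CG-style mixing) on the abstract edges then cannot
# break permutation symmetry over the neighbors.
import torch
import torch.nn as nn


class AbstractEdgePooling(nn.Module):
    """Pool neighbor-edge features into K abstract edges (assumed design)."""

    def __init__(self, edge_dim: int, num_abstract: int):
        super().__init__()
        # Attention-style logits deciding how much each real edge
        # contributes to each abstract edge.
        self.score = nn.Linear(edge_dim, num_abstract)

    def forward(self, edge_feats: torch.Tensor) -> torch.Tensor:
        # edge_feats: (num_neighbors, edge_dim) for one central atom.
        weights = torch.softmax(self.score(edge_feats), dim=0)  # (N, K)
        # Weighted sum over neighbors -> (K, edge_dim); reordering the
        # neighbors permutes rows of both factors consistently, so the
        # result is unchanged.
        return weights.transpose(0, 1) @ edge_feats


# Usage: shuffling the neighbor edges leaves the abstract edges unchanged.
pool = AbstractEdgePooling(edge_dim=16, num_abstract=4)
edges = torch.randn(7, 16)            # 7 neighbor edges of one atom
perm = torch.randperm(7)
assert torch.allclose(pool(edges), pool(edges[perm]), atol=1e-6)
```

Because the pooled abstract edges are invariant to neighbor ordering, whatever layer follows (grouped CG products, shuffling, attention) only needs to respect rotational equivariance, which is the "free design space" the abstract refers to.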