Inter-Part Equivariance
Inter-part equivariance in machine learning focuses on designing neural networks whose outputs transform consistently with transformations applied to individual components of an input, such as rotations of objects within a scene or permutations of nodes in a graph. Current research emphasizes architectures like equivariant graph neural networks (EGNNs) and Kolmogorov-Arnold Networks (KANs), often incorporating Clifford algebras or Fourier methods, to achieve equivariance to various symmetry groups (e.g., SE(3), SO(3)). This research is significant because such inductive biases improve model efficiency, generalization, and robustness, particularly in applications involving geometric data like point clouds, molecules, and multi-agent systems.
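The defining property of an equivariant map f is that f(g·x) = g·f(x) for every group element g. A minimal sketch of this idea (an illustration written for this summary, not code from any of the papers below) checks numerically that centering a point cloud commutes with a random SO(3) rotation:

```python
import numpy as np

def center_points(points):
    # Subtract the centroid: a simple SO(3)-equivariant map, since
    # rotating the input rotates the centroid by the same rotation.
    return points - points.mean(axis=0, keepdims=True)

def random_rotation(rng):
    # QR decomposition of a random Gaussian matrix gives an orthogonal
    # matrix; flip one column if needed so det = +1 (i.e. R is in SO(3)).
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 3))   # one toy "part": 5 points in 3-D (rows)
R = random_rotation(rng)

# Equivariance check: f(x R^T) should equal f(x) R^T
# (points are stored as rows, so right-multiplication by R^T rotates them).
lhs = center_points(x @ R.T)
rhs = center_points(x) @ R.T
print("equivariant:", np.allclose(lhs, rhs))
```

Inter-part equivariance extends this single-group check to transformations acting independently on each part of the input, which is what the architectures surveyed here are designed to guarantee.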
Papers
E$^3$-Net: Efficient E(3)-Equivariant Normal Estimation Network
Hanxiao Wang, Mingyang Zhao, Weize Quan, Zhen Chen, Dong-ming Yan, Peter Wonka
Contrastive Learning Via Equivariant Representation
Sifan Song, Jinfeng Wang, Qiaochu Zhao, Xiang Li, Dufan Wu, Angelos Stefanidis, Jionglong Su, S. Kevin Zhou, Quanzheng Li