Canonicalization Function

Canonicalization functions aim to transform data into a standardized, consistent representation, removing redundancy and ambiguity while preserving essential information. Current research focuses on developing and applying these functions across diverse data types, including knowledge graphs, images, and 3D point clouds. These approaches often leverage neural networks, particularly architectures with inherent equivariance properties, or pretrained language models, to improve performance. This work is significant because canonicalized data facilitates more robust and efficient machine learning algorithms, enabling advances in areas such as knowledge graph reasoning, object recognition, and 3D shape analysis.
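
As a concrete illustration, the sketch below canonicalizes a 3D point cloud by centering it at the origin and aligning its principal axes with the coordinate axes via PCA, one common hand-designed canonicalization for geometric data. The function name `canonicalize_point_cloud` and the sign-fixing heuristic are illustrative assumptions, not taken from any particular paper listed here.

```python
# Minimal sketch: a hand-designed canonicalization for 3D point clouds.
# Centering removes translation; PCA alignment removes rotation (up to the
# usual sign/reflection ambiguity, which is resolved by a simple heuristic).
import numpy as np


def canonicalize_point_cloud(points: np.ndarray) -> np.ndarray:
    """Map an (N, 3) point cloud to a canonical pose."""
    # 1. Translate so the centroid sits at the origin.
    centered = points - points.mean(axis=0)

    # 2. Principal axes from the covariance of the centered points.
    cov = centered.T @ centered / len(centered)
    _, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    axes = eigvecs[:, ::-1].copy()          # largest-variance axis first

    # 3. Resolve the per-axis sign ambiguity: flip each axis so the point
    #    with the largest-magnitude projection lands on the positive side.
    for i in range(3):
        proj = centered @ axes[:, i]
        if proj[np.argmax(np.abs(proj))] < 0:
            axes[:, i] = -axes[:, i]

    # 4. Ensure a proper rotation (determinant +1), not a reflection.
    if np.linalg.det(axes) < 0:
        axes[:, -1] = -axes[:, -1]

    return centered @ axes


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cloud = rng.normal(size=(100, 3)) * np.array([3.0, 1.0, 0.3]) + 5.0

    # A randomly rotated and translated copy should map to (approximately)
    # the same canonical form, i.e. the output is invariant to pose.
    q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]                  # force a proper rotation
    moved = cloud @ q + np.array([1.0, -2.0, 0.5])

    a = canonicalize_point_cloud(cloud)
    b = canonicalize_point_cloud(moved)
    print("canonical forms match:", np.allclose(a, b, atol=1e-6))
```

Learned canonicalizers, such as equivariant networks that predict the aligning transformation, replace the fixed PCA step above with a trainable module but serve the same role: every pose of an object is mapped to a single standardized representation before downstream processing.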

Papers