Approximate Transformation Invariance
Approximate transformation invariance in machine learning aims to build models that remain robust to input transformations such as translations or rotations, a property crucial for generalization and real-world applicability. Current research focuses on methods for achieving this invariance, including frame averaging, pre-classifier restoration, and learning approximately invariant subspaces for Koopman operators, often implemented with neural networks for efficiency. These advances improve model performance and efficiency across applications such as physics simulations, signal processing, and reinforcement learning by reducing sensitivity to irrelevant input variations.
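Frame averaging, for instance, symmetrizes a model by averaging its outputs over a set of transformed copies of the input. The following is a minimal sketch rather than any specific paper's implementation: the frame is taken to be the full group of 90-degree image rotations, and the helper names (`rotations_90`, `frame_averaged_predict`) and the toy model are illustrative assumptions.

```python
import numpy as np

def rotations_90(x):
    """Yield the four 90-degree rotations of an image array of shape (H, W, C)."""
    for k in range(4):
        yield np.rot90(x, k, axes=(0, 1))

def frame_averaged_predict(model, x):
    """Average a model's outputs over the rotation group.

    Taking the frame to be the whole (finite) group makes the averaged
    predictor exactly invariant to 90-degree rotations, and approximately
    invariant to small arbitrary rotations when the base model is smooth.
    """
    outputs = [model(g_x) for g_x in rotations_90(x)]
    return np.mean(outputs, axis=0)

if __name__ == "__main__":
    # Toy usage: a placeholder "model" that is not rotation-invariant on its own.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=(8, 8, 3))
    model = lambda img: np.array([np.sum(img * weights)])

    img = rng.normal(size=(8, 8, 3))
    print(frame_averaged_predict(model, img))
    print(frame_averaged_predict(model, np.rot90(img)))  # same value up to float error
```

Because the averaged predictor sees every rotation of the input, rotating the input merely permutes the terms of the average, which is what yields the invariance.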
Papers
Learning Invariant Subspaces of Koopman Operators--Part 2: Heterogeneous Dictionary Mixing to Approximate Subspace Invariance
Charles A. Johnson, Shara Balakrishnan, Enoch Yeung
Learning Invariant Subspaces of Koopman Operators--Part 1: A Methodology for Demonstrating a Dictionary's Approximate Subspace Invariance
Charles A. Johnson, Shara Balakrishnan, Enoch Yeung
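The two papers above concern dictionaries of observables whose span is approximately invariant under a Koopman operator. As a rough illustration of the underlying idea, and not of the papers' specific methodology, the sketch below fits a Koopman matrix by least squares (extended dynamic mode decomposition) and gauges how well the dictionary's span is preserved via a relative residual; the dictionary and function names are hypothetical.

```python
import numpy as np

def dictionary(x):
    """Hypothetical observable dictionary: monomials up to degree 2 of a 2-D state."""
    x1, x2 = x
    return np.array([1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2])

def edmd_koopman(X, Y, psi):
    """Least-squares Koopman matrix K with psi(y) ~= K @ psi(x) for snapshot pairs (x, y)."""
    Psi_X = np.stack([psi(x) for x in X])  # shape (n_samples, n_features)
    Psi_Y = np.stack([psi(y) for y in Y])
    C, *_ = np.linalg.lstsq(Psi_X, Psi_Y, rcond=None)  # solves Psi_X @ C ~= Psi_Y
    return C.T                                          # so that psi(y) ~= K @ psi(x)

def subspace_invariance_residual(X, Y, psi, K):
    """Relative residual ||Psi_Y - Psi_X @ K.T|| / ||Psi_Y||.

    A small value suggests the span of the dictionary is approximately
    invariant under the Koopman operator on the sampled data.
    """
    Psi_X = np.stack([psi(x) for x in X])
    Psi_Y = np.stack([psi(y) for y in Y])
    return np.linalg.norm(Psi_Y - Psi_X @ K.T) / np.linalg.norm(Psi_Y)

if __name__ == "__main__":
    # Toy linear dynamics x_{t+1} = A x_t; monomials of a linear system span an
    # exactly invariant subspace, so the residual should be near zero.
    rng = np.random.default_rng(0)
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    X = rng.normal(size=(200, 2))
    Y = X @ A.T
    K = edmd_koopman(X, Y, dictionary)
    print(subspace_invariance_residual(X, Y, dictionary, K))
```

For dictionaries that are not exactly invariant, this residual stays strictly positive, and the size of that gap is one way to quantify "approximate" subspace invariance.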