Preserving Transformation

Preserving transformation in machine learning and computer science refers to modifying data or models while maintaining their essential semantic meaning or functionality. Current research emphasizes developing and evaluating such preservation techniques across diverse domains, including code generation, recommendation systems, image processing, and program analysis, often employing methods such as randomized smoothing, normalizing flows, and group-theoretic frameworks. These advances are crucial for improving the robustness, efficiency, and explainability of machine learning models, and for enabling new capabilities in program analysis and data augmentation. The ultimate goal is more reliable, efficient, and interpretable systems.
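As an illustrative sketch (not drawn from any particular paper in this area), the snippet below shows one common semantics-preserving transformation used for code data augmentation: alpha-renaming identifiers in a Python function, then verifying that the program's behavior is unchanged. The `RenameVariables` class and the name mapping are hypothetical choices for the example.

```python
import ast

class RenameVariables(ast.NodeTransformer):
    """Illustrative semantics-preserving transform: alpha-rename identifiers."""

    def __init__(self, mapping):
        self.mapping = mapping

    def visit_Name(self, node):
        # Rename variable references according to the mapping.
        if node.id in self.mapping:
            node.id = self.mapping[node.id]
        return node

    def visit_arg(self, node):
        # Rename function parameters consistently with their uses.
        if node.arg in self.mapping:
            node.arg = self.mapping[node.arg]
        return node

src = "def add(a, b):\n    total = a + b\n    return total\n"
tree = RenameVariables({"a": "x", "b": "y", "total": "s"}).visit(ast.parse(src))
new_src = ast.unparse(tree)  # requires Python 3.9+

# The surface form differs, but the semantics are preserved.
env_old, env_new = {}, {}
exec(src, env_old)
exec(new_src, env_new)
assert env_old["add"](2, 3) == env_new["add"](2, 3)
```

Transformations like this let a model's training set grow with syntactically distinct but functionally identical programs, which is exactly the kind of preservation property the work above studies and evaluates.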

Papers