Input Transformation

Input transformation modifies data before it is fed into a machine learning model, with the goal of improving performance, robustness, or interpretability. Current research focuses on novel transformation methods, particularly for enhancing the transferability of adversarial examples and for achieving equivariance in deep networks; common tools include generative models, normalizing flows, and carefully designed geometric or block-based transformations. These advances improve the security and reliability of deep learning systems and enable more efficient transfer learning and data augmentation across diverse applications.
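As a concrete illustration of the kind of geometric transformation used in transferability research, the sketch below implements a random resize-and-pad input transformation in NumPy. The function name and parameters are hypothetical; this is a minimal, dependency-free sketch (nearest-neighbour resizing, zero padding) of the general idea, not any specific paper's method.

```python
import numpy as np

def random_resize_pad(image, low=0.8, rng=None):
    """Randomly shrink an image and zero-pad it back to its original
    size -- a simple input transformation of the kind used to improve
    adversarial-example transferability (illustrative sketch).
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w, c = image.shape
    scale = rng.uniform(low, 1.0)
    new_h, new_w = max(1, int(h * scale)), max(1, int(w * scale))
    # Nearest-neighbour resize via index sampling (keeps the sketch
    # dependency-free; real pipelines typically use bilinear resizing).
    rows = np.arange(new_h) * h // new_h
    cols = np.arange(new_w) * w // new_w
    small = image[rows][:, cols]
    # Place the shrunken image at a random offset on a zero canvas,
    # so the output keeps the original spatial dimensions.
    top = rng.integers(0, h - new_h + 1)
    left = rng.integers(0, w - new_w + 1)
    out = np.zeros_like(image)
    out[top:top + new_h, left:left + new_w] = small
    return out
```

Applying such a transformation to each input before computing attack gradients injects variation that discourages overfitting to one model's decision boundary, which is the intuition behind transformation-based transferability methods.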

Papers