Input Transformation
Input transformation involves modifying data before it is fed into a machine learning model, with the aim of improving model performance, robustness, or interpretability. Current research focuses on developing novel transformation methods, particularly for enhancing the transferability of adversarial examples and achieving equivariance in deep networks, often employing generative models, normalizing flows, and carefully designed geometric or block-based transformations. These advances matter for the security and reliability of deep learning systems, and they also enable more efficient transfer learning and data augmentation across diverse applications.
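As a concrete illustration, the sketch below shows one simple geometric input transformation of the kind used to improve adversarial-example transferability: randomly resizing an image batch and padding it back to its original resolution before each gradient step, so the attack does not overfit to a single fixed input layout. This is a minimal, hedged example in PyTorch; the function name, parameters, and defaults are illustrative assumptions, not a specific method from any one of the papers above.

```python
import torch
import torch.nn.functional as F


def random_resize_and_pad(x: torch.Tensor, out_size: int = 224, low: int = 200) -> torch.Tensor:
    """Randomly resize a batch of images and zero-pad back to `out_size`.

    Illustrative sketch of a diverse-input-style transformation: applied
    before each attack iteration, it injects geometric variation so the
    resulting adversarial examples transfer better across models.
    """
    # Pick a random intermediate resolution in [low, out_size).
    rnd = int(torch.randint(low, out_size, (1,)).item())
    resized = F.interpolate(x, size=(rnd, rnd), mode="nearest")

    # Pad with zeros back to out_size x out_size at a random offset.
    pad_total = out_size - rnd
    pad_left = int(torch.randint(0, pad_total + 1, (1,)).item())
    pad_top = int(torch.randint(0, pad_total + 1, (1,)).item())
    return F.pad(
        resized,
        (pad_left, pad_total - pad_left, pad_top, pad_total - pad_top),
        value=0.0,
    )


if __name__ == "__main__":
    batch = torch.rand(4, 3, 224, 224)  # dummy image batch in [0, 1]
    transformed = random_resize_and_pad(batch)
    print(transformed.shape)  # torch.Size([4, 3, 224, 224])
```

In attack pipelines, a transform like this is typically applied stochastically (often with some probability per step) so gradients are averaged over many input views rather than a single canonical one.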