Feature Perturbation
Feature perturbation strategically alters the input features of machine learning models to analyze model behavior, improve robustness, or enhance generalization. Current research applies feature perturbation in a variety of contexts, including adversarial attacks, explainable AI, semi-supervised learning, and domain generalization, often through techniques such as consistency regularization and density-based methods. These investigations are crucial for improving the reliability, fairness, and generalizability of machine learning models across diverse applications, from image recognition and natural language processing to network security and medical image analysis.
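As a concrete illustration, the sketch below shows one common instantiation of this idea: perturbing an intermediate feature representation with Gaussian noise and dropout, then applying a consistency-regularization loss that pulls predictions on perturbed features toward predictions on the clean features. The architecture, noise scale, and loss weight are illustrative assumptions rather than the method of any particular paper.

```python
# Minimal sketch of feature perturbation with consistency regularization.
# Hypothetical model and hyperparameters; not taken from a specific paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PerturbedClassifier(nn.Module):
    def __init__(self, in_dim=32, hidden_dim=64, num_classes=10, noise_std=0.1):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.head = nn.Linear(hidden_dim, num_classes)
        self.noise_std = noise_std
        self.feature_dropout = nn.Dropout(p=0.3)

    def perturb(self, features):
        # Feature perturbation: additive Gaussian noise plus random dropout
        # applied to the intermediate representation, not the raw input.
        noisy = features + self.noise_std * torch.randn_like(features)
        return self.feature_dropout(noisy)

    def forward(self, x, perturb_features=False):
        z = self.encoder(x)
        if perturb_features:
            z = self.perturb(z)
        return self.head(z)


def consistency_loss(model, unlabeled_x):
    # Clean predictions serve as a detached target; predictions on perturbed
    # features are pulled toward them via KL divergence.
    with torch.no_grad():
        clean_probs = F.softmax(model(unlabeled_x, perturb_features=False), dim=-1)
    perturbed_log_probs = F.log_softmax(model(unlabeled_x, perturb_features=True), dim=-1)
    return F.kl_div(perturbed_log_probs, clean_probs, reduction="batchmean")


if __name__ == "__main__":
    model = PerturbedClassifier()
    labeled_x, labels = torch.randn(16, 32), torch.randint(0, 10, (16,))
    unlabeled_x = torch.randn(64, 32)

    supervised = F.cross_entropy(model(labeled_x), labels)
    consistency = consistency_loss(model, unlabeled_x)
    loss = supervised + 1.0 * consistency  # consistency weight of 1.0 is an arbitrary choice
    loss.backward()
    print(f"supervised={supervised.item():.3f} consistency={consistency.item():.3f}")
```

In a semi-supervised setting, the supervised term is computed on the small labeled set while the consistency term exploits unlabeled data; the same perturb-and-compare pattern also underlies robustness testing and perturbation-based explanation methods.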