Feature Perturbation

Feature perturbation involves strategically altering the input features of machine learning models to probe model behavior, improve robustness, or enhance generalization. Current research applies feature perturbation in a range of contexts, including adversarial attacks, explainable AI, semi-supervised learning, and domain generalization, often employing techniques such as consistency regularization and density-based methods. These investigations are crucial for improving the reliability, fairness, and generalizability of machine learning models across diverse applications, from image recognition and natural language processing to network security and medical image analysis.
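As a minimal illustration of the idea, the sketch below perturbs input features with additive Gaussian noise and measures how often a model's predictions change, a simple proxy for the consistency checks used in robustness analysis and consistency regularization. The function names, the noise model, and the toy sign-based classifier are illustrative assumptions, not a method from any particular paper.

```python
import numpy as np

def perturb_features(X, noise_std=0.1, seed=0):
    # Additive Gaussian noise is one common, simple perturbation choice.
    rng = np.random.default_rng(seed)
    return X + rng.normal(scale=noise_std, size=X.shape)

def consistency_gap(model_fn, X, noise_std=0.1, seed=0):
    # Fraction of inputs whose predicted label changes under perturbation;
    # lower values indicate more locally stable (robust) predictions.
    original = model_fn(X)
    perturbed = model_fn(perturb_features(X, noise_std, seed))
    return float(np.mean(original != perturbed))

# Toy model (hypothetical): classify by the sign of the feature sum.
model = lambda X: (X.sum(axis=1) > 0).astype(int)

X = np.array([[2.0, 1.0], [-1.5, -0.5], [0.05, -0.02]])
gap = consistency_gap(model, X, noise_std=0.2)
```

Points far from the decision boundary tend to keep their labels under small perturbations, while points near it (like the last row above) may flip, which is exactly the behavior such analyses aim to expose.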

Papers