Invariant Dropout

Invariant dropout is a regularization technique in deep learning that modifies standard dropout by dropping neurons selectively, according to criteria beyond uniform randomness, with the aim of improving model generalization and efficiency. Current research adapts the idea to specific architectures (e.g., convolutional neural networks, transformers) and training contexts (e.g., federated learning, energy-constrained devices), with variants such as saliency-guided and conductance-based dropout intended to enhance performance and robustness. These methods address overfitting, improve model interpretability through attribution methods, and ease deployment in resource-limited environments, contributing to more reliable and efficient deep learning models.
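
As a concrete illustration, the sketch below implements one such selective scheme in PyTorch: the units with the highest importance scores are protected from dropping, and the random mask is applied only to the rest. This is a minimal sketch under stated assumptions, not any particular paper's method; the importance proxy used here (mean absolute activation per unit) and the `protect_fraction` parameter are illustrative stand-ins for the saliency- or conductance-based attribution scores used in the literature.

```python
import torch
import torch.nn as nn


class ImportanceDropout(nn.Module):
    """Sketch of a selective dropout layer for (batch, features) inputs.

    Instead of dropping units uniformly at random, the top-k units by an
    importance proxy are never dropped. Here importance is approximated by
    the mean absolute activation per unit over the batch -- an assumption
    standing in for attribution-based scores (saliency, conductance).
    """

    def __init__(self, p: float = 0.5, protect_fraction: float = 0.2):
        super().__init__()
        self.p = p                                # nominal drop rate
        self.protect_fraction = protect_fraction  # share of units never dropped

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity at evaluation time, as with standard dropout.
        if not self.training or self.p == 0.0:
            return x
        # Importance proxy: mean |activation| per unit over the batch.
        importance = x.abs().mean(dim=0)
        k = max(1, int(self.protect_fraction * importance.numel()))
        protected = torch.zeros_like(importance, dtype=torch.bool)
        protected[importance.topk(k).indices] = True
        # Random drop mask applied only to the non-protected units.
        drop = (torch.rand_like(importance) < self.p) & ~protected
        mask = (~drop).float()
        # Rescale by the empirical keep probability so the expected
        # activation magnitude is preserved (inverted-dropout style).
        keep_prob = mask.mean().clamp(min=1e-8)
        return x * mask / keep_prob


# Usage: behaves like a drop-in replacement for nn.Dropout during training.
layer = ImportanceDropout(p=0.5, protect_fraction=0.2)
layer.train()
out = layer(torch.randn(32, 128))
```

One design note: because the mask is shared across the batch and biased away from high-importance units, the effective drop rate is lower than `p`; rescaling by the empirical keep probability, rather than by `1 - p`, accounts for this.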

Papers