Inference-Based Adaptive Dropout

Inference-based adaptive dropout refines traditional dropout regularization by dynamically adjusting the probability of dropping individual neurons or weights, using signals such as learned unit importance or Bayesian inference over the dropout rates. Current research optimizes these dropout strategies across neural network architectures, including transformers and convolutional neural networks, often targeting improved efficiency (e.g., through layer pruning) and robustness, particularly in resource-constrained settings such as federated learning. The approach improves model performance, uncertainty quantification, and energy efficiency, with impact ranging from trustworthy AI to efficient deep learning deployment on hardware platforms such as FPGAs.
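
The sketch below shows one way such a mechanism can look in code, assuming a PyTorch setting with feature vectors of shape `(batch, num_features)`: each unit gets a trainable logit whose sigmoid is its keep probability, and a relaxed ("binary concrete") Bernoulli mask keeps the sampling differentiable so those probabilities are learned jointly with the network weights. The module name `AdaptiveDropout`, the initialization, and the relaxation choice are illustrative assumptions, not a specific published method.

```python
import math

import torch
import torch.nn as nn


class AdaptiveDropout(nn.Module):
    """Dropout with learned, per-unit keep probabilities (illustrative sketch).

    Each unit i owns a trainable logit; its keep probability is
    p_i = sigmoid(logit_i). During training, a relaxed Bernoulli
    ("binary concrete") sample makes the mask differentiable, so the
    keep probabilities are learned jointly with the network weights.
    Inverted-dropout scaling makes the inference-time forward a no-op.
    """

    def __init__(self, num_features: int, init_keep: float = 0.9,
                 temperature: float = 0.1):
        super().__init__()
        init_logit = math.log(init_keep / (1.0 - init_keep))
        self.logits = nn.Parameter(torch.full((num_features,), init_logit))
        self.temperature = temperature

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            # Expected activation is already matched by the scaling below.
            return x
        keep_prob = torch.sigmoid(self.logits)
        # Binary-concrete relaxation: add logistic noise to the logits and
        # squash with a low-temperature sigmoid, giving a soft mask
        # concentrated near {0, 1} that gradients can flow through.
        u = torch.rand_like(x).clamp(1e-6, 1.0 - 1e-6)
        noise = torch.log(u) - torch.log1p(-u)
        mask = torch.sigmoid((self.logits + noise) / self.temperature)
        # Inverted-dropout scaling keeps the expected activation unchanged.
        return x * mask / keep_prob.clamp_min(1e-6)
```

Usage follows the ordinary `nn.Module` pattern, e.g.:

```python
net = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), AdaptiveDropout(128))
out = net(torch.randn(32, 64))  # training-mode forward samples a soft mask
```

In practice a module like this is typically paired with a sparsity or KL penalty on the keep probabilities, since otherwise the optimizer can push them all toward 1 and disable dropout entirely.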

Papers