Inference-Based Adaptive Dropout
Inference-based adaptive dropout refines traditional dropout regularization by dynamically adjusting the probability of dropping individual neurons or weights, using learned importance scores or Bayesian inference rather than a fixed global rate. Current research focuses on optimizing dropout strategies within various neural network architectures, including transformers and convolutional neural networks, often targeting improved efficiency (e.g., through layer pruning) and robustness, particularly in resource-constrained settings such as federated learning. The approach improves predictive performance, uncertainty quantification, and energy efficiency, with applications ranging from trustworthy AI to efficient deep learning deployment on hardware platforms such as FPGAs.
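As a concrete illustration of the core idea, the sketch below shows a minimal PyTorch-style adaptive dropout layer in which each unit's keep probability is a learned parameter rather than a fixed hyperparameter. This is a generic sketch, not the method of any particular paper: the class name AdaptiveDropout, the init_keep_prob parameter, and the straight-through gradient trick are illustrative assumptions.

```python
import math

import torch
import torch.nn as nn


class AdaptiveDropout(nn.Module):
    """Dropout layer whose per-unit keep probabilities are learned
    jointly with the network instead of being fixed globally.
    (Illustrative sketch; names and details are assumptions.)"""

    def __init__(self, num_features: int, init_keep_prob: float = 0.9):
        super().__init__()
        # Parameterize keep probabilities through a sigmoid so they
        # remain in (0, 1) throughout optimization.
        init_logit = math.log(init_keep_prob / (1.0 - init_keep_prob))
        self.keep_logits = nn.Parameter(torch.full((num_features,), init_logit))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if not self.training:
            # Inverted dropout: activations are already rescaled during
            # training, so inference is a plain identity pass.
            return x
        keep_prob = torch.sigmoid(self.keep_logits)
        # Sample a Bernoulli mask per unit; the straight-through trick
        # below lets gradients reach the keep-probability logits even
        # though the sampling itself is not differentiable.
        mask = torch.bernoulli(keep_prob.expand_as(x))
        mask = mask + keep_prob - keep_prob.detach()
        return x * mask / keep_prob.clamp_min(1e-6)


# Example: regularize a hidden layer with learned drop rates.
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), AdaptiveDropout(64))
```

Because the layer uses inverted dropout, no rescaling is needed at inference time; a Bayesian variant would instead keep the mask sampling active at test time to obtain Monte Carlo uncertainty estimates.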