Nonlinear Activation
Nonlinear activation functions are crucial components of neural networks, enabling them to learn complex, nonlinear relationships in data. Current research focuses on understanding the role of these activations in network dynamics, on techniques that induce sparsity for efficiency, and on the impact of activation linearity on model performance and fairness. This work matters because it deepens our fundamental understanding of neural network behavior, leading to more efficient, robust, and equitable models with applications across diverse fields such as computer vision and natural language processing.
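Why nonlinearity is essential can be seen in a small worked example: without an activation between layers, any stack of linear layers collapses into a single linear map, so depth adds no expressive power. Below is a minimal NumPy sketch (the matrices and the ReLU helper are illustrative, not taken from any of the listed papers):

```python
import numpy as np

# Two stacked "layers" represented by weight matrices.
W1 = np.array([[1.0, -1.0],
               [1.0,  1.0]])
W2 = np.array([[1.0, 1.0]])
x = np.array([1.0, 2.0])

# Without a nonlinearity, the composition collapses to one
# linear map: W2 @ (W1 @ x) equals (W2 @ W1) @ x.
linear = W2 @ (W1 @ x)        # -> [2.]
collapsed = (W2 @ W1) @ x     # -> [2.]

# Inserting a ReLU between the layers breaks this collapse,
# letting the network represent functions no single linear
# map can express.
relu = lambda z: np.maximum(z, 0.0)
nonlinear = W2 @ relu(W1 @ x) # -> [3.]
```

Here W1 @ x = [-1, 3]; the ReLU zeroes the negative component, so the two-layer output changes from 2 to 3, something no single matrix applied to x could reproduce for all inputs.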
Papers
Neural Rank Collapse: Weight Decay and Small Within-Class Variability Yield Low-Rank Bias
Emanuele Zangrando, Piero Deidda, Simone Brugiapaglia, Nicola Guglielmi, Francesco Tudisco
Disparate Impact on Group Accuracy of Linearization for Private Inference
Saswat Das, Marco Romanelli, Ferdinando Fioretto