Rectified Linear Unit
Rectified Linear Units (ReLUs) are a fundamental activation function in artificial neural networks, central to efficient training and expressive representations. Current research focuses on understanding and improving ReLU's properties, including addressing its limitations in adversarial robustness and exploring its role in architectures such as deep survival models (SurvReLU) and efficient private-inference networks (xMLP). This work matters because it deepens the theoretical understanding of neural network behavior and improves the accuracy, efficiency, and interpretability of applications ranging from medical prognosis to large-scale recommendation systems.
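For context, ReLU applies the elementwise map ReLU(x) = max(0, x); its piecewise-linear form and trivially cheap gradient are what make training efficient. The NumPy sketch below is a minimal illustration of the function and its subgradient (it is not drawn from any of the listed papers).

```python
import numpy as np

def relu(x):
    """Rectified Linear Unit: returns x for positive inputs, 0 otherwise."""
    return np.maximum(0.0, x)

def relu_grad(x):
    """Subgradient of ReLU: 1 where x > 0, 0 elsewhere (0 chosen at x = 0)."""
    return (x > 0).astype(x.dtype)

# Negative pre-activations are zeroed out; positive ones pass through unchanged.
z = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(relu(z))       # [0.  0.  0.  0.5 2. ]
print(relu_grad(z))  # [0. 0. 0. 1. 1.]
```

The flat zero region on negative inputs is also what many ReLU variants studied in this line of work try to mitigate, since units whose inputs stay negative receive no gradient signal.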