Rectified Linear Unit
Rectified Linear Units (ReLUs), defined elementwise as f(x) = max(0, x), are a fundamental activation function in artificial neural networks, crucial for enabling efficient training and powerful representation capabilities. Current research focuses on understanding and improving ReLU's properties, including addressing its limitations in adversarial robustness and exploring its role in architectures such as deep survival models (SurvReLU) and efficient private inference networks (xMLP). This work is significant because it advances both the theoretical understanding of neural network behavior and the practical performance of applications ranging from medical prognosis to large-scale recommendation systems, improving accuracy, efficiency, and interpretability.
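As a quick illustration of the definition above, the following minimal NumPy sketch (not drawn from any of the papers listed below) computes ReLU and its subgradient elementwise:

```python
import numpy as np

def relu(x):
    # ReLU: elementwise max(0, x)
    return np.maximum(0.0, x)

def relu_grad(x):
    # Subgradient of ReLU: 1 where x > 0, else 0
    # (the kink at x = 0 is conventionally assigned gradient 0)
    return (x > 0).astype(x.dtype)

x = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
print(relu(x))       # [0.  0.  0.  1.5 3. ]
print(relu_grad(x))  # [0. 0. 0. 1. 1.]
```

Because the gradient is exactly 1 on the positive side and exactly 0 on the negative side, ReLU avoids the vanishing-gradient behavior of saturating activations, which is the main reason it trains efficiently; the zero region is also the source of the "dying ReLU" limitation that several of the works below aim to mitigate.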
Papers