Linear Activation
Linear activation functions, while simpler than their nonlinear counterparts, remain a focus of ongoing research in neural networks because of their potential for improved training stability and their greater amenability to theoretical analysis. Current investigations explore their use in a range of architectures, including multi-layer perceptrons and vision transformers, often in conjunction with techniques such as batch normalization or homotopy relaxation that address limitations such as gradient explosion or the absence of a compression phase. Understanding the behavior of linear activations, particularly with respect to noise resilience and robustness certification, is crucial for advancing both the theoretical foundations and the practical applications of neural networks.
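A key theoretical property motivating this line of work is that a purely linear activation does not add expressive power with depth: composing linear layers yields another linear map. The following minimal sketch, using plain NumPy with hypothetical layer sizes, illustrates this collapse.

```python
import numpy as np

# A linear (identity-like) activation simply scales its input, f(x) = a * x.
def linear_activation(x, a=1.0):
    return a * x

rng = np.random.default_rng(0)
W1 = rng.standard_normal((4, 8))   # first layer weights (hypothetical sizes)
W2 = rng.standard_normal((8, 3))   # second layer weights
x = rng.standard_normal((5, 4))    # batch of 5 example inputs

# Two layers with a linear activation in between...
h = linear_activation(x @ W1)
y_two_layer = linear_activation(h @ W2)

# ...are equivalent to a single linear map, so depth alone adds no
# expressive power without a nonlinearity.
y_single_layer = x @ (W1 @ W2)
print(np.allclose(y_two_layer, y_single_layer))  # True
```

This collapse is precisely why linear activations are attractive for theoretical analysis, and why in practice they are typically paired with techniques such as batch normalization or used within relaxation schemes rather than on their own.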