Activation Function
Activation functions are core components of neural networks: they introduce the nonlinearity that allows models to learn complex patterns from data. Current research focuses on designing novel activation functions, including ones with learnable parameters, and on studying their behavior within architectures such as Kolmogorov-Arnold Networks and transformers. These efforts aim to improve model performance, efficiency, and interpretability across applications ranging from image classification and generation to solving partial differential equations and formal verification tasks. The search for better activation functions remains an active driver of progress in deep learning.
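As a concrete illustration of an activation function with a learnable parameter, here is a minimal PyTorch sketch of a Swish-style activation whose slope parameter is trained along with the network weights. The class name `LearnableSwish` and the `init_beta` argument are illustrative choices, not drawn from any specific paper summarized above.

```python
import torch
import torch.nn as nn


class LearnableSwish(nn.Module):
    """Swish-style activation f(x) = x * sigmoid(beta * x) with a learnable beta.

    As beta grows large the curve approaches ReLU; as beta shrinks toward zero
    it approaches the linear map x/2, so the network can tune the shape during
    training instead of committing to a fixed nonlinearity.
    """

    def __init__(self, init_beta: float = 1.0):
        super().__init__()
        # Registering beta as a Parameter lets the optimizer update it
        # alongside the ordinary weights.
        self.beta = nn.Parameter(torch.tensor(init_beta))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.beta * x)


if __name__ == "__main__":
    # Drop the learnable activation into a small classifier in place of ReLU.
    model = nn.Sequential(
        nn.Linear(784, 128),
        LearnableSwish(),
        nn.Linear(128, 10),
    )
    logits = model(torch.randn(32, 784))
    print(logits.shape)  # torch.Size([32, 10])
```

Because the activation is an ordinary `nn.Module`, it can be swapped into existing architectures wherever a fixed nonlinearity would normally appear.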