Polynomial Activation
Polynomial activation functions are being actively investigated as alternatives to traditional non-polynomial activations such as ReLU, with a focus on their impact on model expressivity, adversarial robustness, and efficiency in private inference settings. Research explores the theoretical limits of polynomial activations' approximation capabilities, comparing them to non-polynomial counterparts across architectures including shallow networks and graph neural networks (GNNs). These studies aim to characterize the trade-offs between the computational advantages of polynomials and the potential loss in accuracy or robustness, ultimately seeking to improve the efficiency and security of neural network applications.
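As a concrete illustration (not drawn from any particular paper listed here), the sketch below shows a learnable degree-2 polynomial activation in PyTorch. The quadratic parameterization f(x) = a·x² + b·x + c, the initialization values, and the small MLP around it are illustrative assumptions; polynomials are attractive in private inference because they reduce to additions and multiplications, which homomorphic-encryption schemes support natively, unlike piecewise functions such as ReLU.

```python
import torch
import torch.nn as nn

class QuadraticActivation(nn.Module):
    """Learnable degree-2 polynomial activation: f(x) = a*x^2 + b*x + c."""

    def __init__(self):
        super().__init__()
        # Initialize near the identity so early training behaves
        # like an (almost) linear network; values are illustrative.
        self.a = nn.Parameter(torch.tensor(0.1))
        self.b = nn.Parameter(torch.tensor(1.0))
        self.c = nn.Parameter(torch.tensor(0.0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Only additions and multiplications: friendly to
        # homomorphic-encryption-based private inference.
        return self.a * x.pow(2) + self.b * x + self.c

# A small MLP that swaps ReLU for the polynomial activation.
model = nn.Sequential(
    nn.Linear(784, 128),
    QuadraticActivation(),
    nn.Linear(128, 10),
)

x = torch.randn(4, 784)
print(model(x).shape)  # torch.Size([4, 10])
```

One practical design point illustrated above: because x² grows quickly, polynomial activations are typically paired with normalization or careful initialization to keep activations from exploding at higher depths.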