Polynomial Activation

Polynomial activation functions in neural networks are being actively investigated as alternatives to traditional non-polynomial activations such as ReLU, with research focusing on their impact on model expressivity, adversarial robustness, and efficiency in private inference settings. Work in this area explores the theoretical limits of polynomial activations' approximation capabilities, comparing them with non-polynomial counterparts across architectures including shallow networks and graph neural networks (GNNs). These studies aim to characterize the trade-off between the computational advantages of polynomials and the potential loss in accuracy or robustness, ultimately seeking to improve the efficiency and security of neural network applications.
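As an illustrative sketch only (not drawn from any specific paper listed below), the snippet shows a learnable degree-2 polynomial activation dropped into a small PyTorch model in place of ReLU. Because it uses only additions and multiplications, such an activation can be evaluated under homomorphic encryption, which is the motivation in private inference; the module name and coefficient initialization here are hypothetical choices.

```python
import torch
import torch.nn as nn


class QuadraticActivation(nn.Module):
    """Learnable degree-2 polynomial activation: sigma(x) = a*x^2 + b*x + c."""

    def __init__(self):
        super().__init__()
        # Initialize near the identity so early training behaves like a linear layer
        # (a hypothetical but common-sense starting point for this sketch).
        self.a = nn.Parameter(torch.tensor(0.1))
        self.b = nn.Parameter(torch.tensor(1.0))
        self.c = nn.Parameter(torch.tensor(0.0))

    def forward(self, x):
        # Only additions and multiplications: no comparisons as in ReLU.
        return self.a * x.pow(2) + self.b * x + self.c


# A small MLP where the polynomial activation replaces ReLU.
model = nn.Sequential(
    nn.Linear(784, 128),
    QuadraticActivation(),
    nn.Linear(128, 10),
)

if __name__ == "__main__":
    x = torch.randn(32, 784)   # a batch of flattened 28x28 inputs
    logits = model(x)
    print(logits.shape)        # torch.Size([32, 10])
```

In this kind of substitution, the trade-off discussed above becomes concrete: the quadratic activation is cheap to evaluate homomorphically, but its unbounded growth and limited expressivity relative to ReLU are exactly what the papers below analyze.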

Papers