Exclusive Square Activation
Exclusive square activation functions, which replace every nonlinearity in a network with the element-wise square x², are gaining traction in deep learning, particularly for private inference. Because squaring is a single multiplication, it maps directly onto the operations that homomorphic encryption and secret-sharing protocols support natively, avoiding the costly comparison protocols that ReLU requires and thereby significantly reducing inference latency. Research focuses on mitigating the accuracy loss typically associated with square activations, with novel architectures such as xMLP reported to reach parity with ReLU-based models. This efficiency makes privacy-preserving machine learning more practical and also benefits broader areas such as time series analysis through optimized kernel learning methods.
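As a rough illustration (not the xMLP authors' code), a square activation is simply an element-wise x² used wherever ReLU would otherwise appear. The PyTorch-style sketch below assumes a generic two-layer MLP block; the names `Square` and `SquareMLPBlock` are hypothetical.

```python
import torch
import torch.nn as nn


class Square(nn.Module):
    """Element-wise square activation: a single multiplication,
    which private-inference protocols handle cheaply."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * x


class SquareMLPBlock(nn.Module):
    """Two-layer MLP block that uses Square exclusively where ReLU would normally go."""

    def __init__(self, dim: int, hidden_dim: int) -> None:
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden_dim),
            Square(),  # replaces nn.ReLU()
            nn.Linear(hidden_dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


if __name__ == "__main__":
    block = SquareMLPBlock(dim=64, hidden_dim=256)
    out = block(torch.randn(8, 64))  # batch of 8 feature vectors
    print(out.shape)  # torch.Size([8, 64])
```

Swapping `Square()` back to `nn.ReLU()` recovers a conventional MLP block, which is exactly the substitution the accuracy-gap research studies.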