Paper ID: 2302.13696

Moderate Adaptive Linear Units (MoLU)

Hankyul Koh, Joon-hyuk Ko, Wonho Jhe

We propose a new high-performance activation function, the Moderate Adaptive Linear Unit (MoLU), for deep neural networks. MoLU is a simple, beautiful and powerful activation function that can serve as a good main activation function among the hundreds of activation functions proposed to date. Because MoLU is composed of elementary functions, it is not only a diffeomorphism (i.e., analytic over the whole domain) but also reduces training time.

Submitted: Feb 27, 2023
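
The abstract does not reproduce the closed-form definition of MoLU; the sketch below is only an illustration of the kind of activation described, a smooth composition of elementary functions with an analytic derivative. The expression x * (1 + tanh(x)) / 2 used here is an assumed placeholder, not the authoritative definition, which is given in the paper body.

import numpy as np

def molu(x):
    # Placeholder form: x * (1 + tanh(x)) / 2. This exact expression is an
    # assumption for illustration; the paper body gives the real definition.
    # Like MoLU, it is built entirely from elementary functions.
    return x * (1.0 + np.tanh(x)) / 2.0

def molu_grad(x):
    # Analytic derivative of the placeholder form above. Because the function
    # is a composition of elementary functions, the derivative is smooth
    # everywhere, which is the property the abstract highlights.
    t = np.tanh(x)
    return (1.0 + t) / 2.0 + x * (1.0 - t**2) / 2.0

if __name__ == "__main__":
    xs = np.linspace(-4.0, 4.0, 9)
    print(np.round(molu(xs), 4))       # activation values
    print(np.round(molu_grad(xs), 4))  # smooth analytic gradient

A function of this shape behaves linearly for large positive inputs and decays smoothly toward zero for negative inputs, the general profile shared by modern smooth activations.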