Paper ID: 2201.02571
Neural Network Optimization for Reinforcement Learning Tasks Using Sparse Computations
Dmitry Ivanov, Mikhail Kiselev, Denis Larionov
This article proposes a sparse computation-based method for optimizing neural networks in reinforcement learning (RL) tasks. The method combines two ideas: neural network pruning and exploiting correlations in the input data, which makes it possible to update neuron states only when the changes in them exceed a certain threshold. This significantly reduces the number of multiplications required to run the networks. We tested the method on several RL tasks and achieved a 20-150x reduction in the number of multiplications without substantial performance loss; in some cases, performance even improved.
Submitted: Jan 7, 2022
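
The following is a minimal illustrative sketch (not the authors' code) of the two ideas the abstract combines: a magnitude-pruned weight matrix and delta-threshold updates, where an input contributes new multiplications only if it has changed by more than a threshold since the last forward pass. The class name, threshold, and pruning ratio are hypothetical choices for illustration.

```python
import numpy as np

class DeltaSparseLinear:
    """Linear layer combining weight pruning with thresholded delta updates."""

    def __init__(self, weights, threshold=0.01, prune_ratio=0.9):
        self.w = weights.copy()                      # (out, in) weight matrix
        # Hypothetical magnitude pruning: zero out the smallest weights.
        cutoff = np.quantile(np.abs(self.w), prune_ratio)
        self.w[np.abs(self.w) < cutoff] = 0.0
        self.threshold = threshold
        self.last_x = np.zeros(self.w.shape[1])      # last processed input
        self.y = np.zeros(self.w.shape[0])           # cached output state

    def forward(self, x):
        delta = x - self.last_x
        changed = np.abs(delta) > self.threshold     # inputs worth updating
        # Multiply only the columns whose inputs changed noticeably;
        # unchanged inputs reuse their previous contribution to self.y.
        self.y += self.w[:, changed] @ delta[changed]
        self.last_x[changed] = x[changed]
        return self.y
```

Under this kind of scheme, the number of multiplications per step scales with the count of changed inputs times the density of the pruned weight matrix, rather than with the full layer size, which is the source of the reported multiplication savings.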