Quantised Neural Network
Quantized neural networks (QNNs) aim to reduce the computational cost and memory footprint of deep neural networks by representing weights and activations with low-precision numbers (e.g., 2-bit integers). Current research focuses on developing effective training algorithms, such as variations of the straight-through estimator, and on exploring suitable architectures, such as multi-layer perceptrons (MLPs), for various applications. This area is significant because it enables deploying deep learning models on resource-constrained devices, supporting applications like low-power intrusion detection systems in automotive networks and other embedded systems. The improved efficiency offered by QNNs is crucial for expanding the reach of AI to edge devices.
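To make the training idea concrete, below is a minimal sketch, assuming PyTorch, of 2-bit weight quantization trained with a straight-through estimator (STE): the forward pass rounds and clamps weights to the 2-bit signed range, while the backward pass treats that step as the identity so gradients still update the full-precision weights. The names `QuantizeSTE` and `QuantLinear` are illustrative, not taken from the text, and the per-tensor scaling rule is one simple choice among many.

```python
# Sketch: 2-bit fake quantization of weights with a straight-through estimator.
import torch
import torch.nn as nn


class QuantizeSTE(torch.autograd.Function):
    """Round weights to 2-bit signed levels {-2, -1, 0, 1} in the forward pass;
    pass the gradient through unchanged in the backward pass (STE)."""

    @staticmethod
    def forward(ctx, w, scale):
        q = torch.clamp(torch.round(w / scale), -2, 1)  # 2-bit signed range
        return q * scale  # dequantize so downstream ops stay in float

    @staticmethod
    def backward(ctx, grad_output):
        # Straight-through estimator: treat round/clamp as identity.
        return grad_output, None


class QuantLinear(nn.Module):
    """Linear layer whose weights are fake-quantized on every forward pass."""

    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        # Per-tensor scale so the quantization levels span the weight range.
        scale = self.weight.abs().max() / 2 + 1e-8
        w_q = QuantizeSTE.apply(self.weight, scale)
        return nn.functional.linear(x, w_q, self.bias)


# Usage: a small MLP with quantized weights, trained as usual.
model = nn.Sequential(QuantLinear(16, 32), nn.ReLU(), QuantLinear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 16), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()  # gradients reach the full-precision weights via the STE
opt.step()
```

In this scheme the stored weights remain full precision during training; only their quantized copies are used in the forward pass, which is why the STE is needed to define a usable gradient through the non-differentiable rounding step.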