Binary Networks

Binary networks constrain weights and activations to +1 and -1, drastically reducing the computational cost and memory footprint of deep neural networks while maintaining reasonable accuracy. Current research focuses on closing the accuracy gap to full-precision models through techniques such as knowledge distillation, adaptive binarization methods, and novel architectures (e.g., BNext, BiHRNet), often combined with pruning and quantization to further enhance efficiency. This research is significant because it enables the deployment of deep learning models on resource-constrained devices, impacting areas like mobile computing, embedded systems, and on-device learning.
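As a minimal sketch of the core mechanism (not any specific paper's method): training typically keeps latent full-precision weights, binarizes them with a sign function in the forward pass, and propagates gradients through the non-differentiable sign via a straight-through estimator (STE). The function names below are illustrative, not from a particular library.

```python
import numpy as np

def binarize(w):
    """Forward pass: map real-valued weights to {+1, -1} via sign.
    sign(0) is mapped to +1 so the output stays strictly binary."""
    return np.where(w >= 0, 1.0, -1.0)

def ste_grad(w, upstream, clip=1.0):
    """Backward pass: straight-through estimator. The upstream gradient
    passes through unchanged wherever |w| <= clip and is zeroed
    elsewhere, which keeps latent weights from drifting unboundedly."""
    return upstream * (np.abs(w) <= clip)

# Latent full-precision weights receive the updates; only their
# binarized copies are used in the forward computation.
w = np.array([0.7, -0.2, 0.05, -1.3])
wb = binarize(w)                   # -> [ 1., -1.,  1., -1.]
g = ste_grad(w, np.ones_like(w))   # -> [ 1.,  1.,  1.,  0.]
```

Because both weights and activations are binary at inference time, dot products reduce to XNOR and popcount operations, which is where the large speed and memory savings come from.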

Papers