Deep Neural Network Parameters

Deep neural network (DNN) parameters are the internal weights and biases that determine a DNN's behavior, and research focuses on optimizing their size, efficiency, and robustness. Current efforts concentrate on techniques such as model pruning, quantization, and efficient training algorithms (e.g., AdamW with weight prediction), often applied to architectures like ResNet and MobileNet in order to reduce computational cost while preserving accuracy. These advances are crucial for deploying DNNs on resource-constrained devices (edge computing) and for hardening models against adversarial attacks, with impact across fields from image recognition to power grid monitoring.
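To make the two most common parameter-reduction techniques concrete, the following is a minimal sketch of unstructured magnitude pruning and symmetric int8 post-training quantization applied to a raw weight matrix. This is an illustrative NumPy example, not any specific paper's method; the function names and the 50% sparsity / int8 settings are assumptions chosen for demonstration.

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Unstructured pruning: zero out the smallest-magnitude
    fraction (`sparsity`) of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    # k-th smallest absolute value becomes the pruning threshold
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask

def quantize_int8(weights):
    """Symmetric uniform quantization to int8.
    Returns the integer tensor and the per-tensor scale."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)

pruned = magnitude_prune(w, sparsity=0.5)   # half the entries set to zero
q, scale = quantize_int8(w)                 # 8-bit integers + one float scale
dequant = q.astype(np.float32) * scale      # reconstruction for inference
```

In practice these steps are usually followed by fine-tuning (for pruning) or calibration on sample data (for quantization) to recover any lost accuracy; per-channel scales rather than the single per-tensor scale shown here are also common.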

Papers