Deep Random Neural Network
Deep random neural networks, i.e., deep networks whose weights are randomly initialized and left untrained, are studied to understand the fundamental properties of neural networks and to improve their training and performance. Current research analyzes the behavior of these networks in the infinite-width limit, explores connections to Gaussian processes and Gaussian mixtures, and examines how different weight distributions (e.g., sparse or low-rank) and normalization techniques (e.g., batch normalization) affect their properties. This work yields insight into the generalization ability and training dynamics of neural networks, and may lead to more efficient training algorithms and improved model architectures for applications such as intrusion detection and privacy-preserving data sharing.
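The Gaussian-process connection can be illustrated numerically. A minimal sketch, using NumPy and an assumed one-hidden-layer ReLU network with He-style initialization: at a fixed input, the output of a freshly sampled wide random network behaves like a draw from a zero-mean Gaussian whose variance is set by the initialization scale (here chosen so the limiting variance is 1), in line with the central-limit-theorem argument behind the infinite-width analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_relu_net(x, width):
    """One-hidden-layer network with freshly sampled random weights.

    Hidden weights ~ N(0, 2/d) (He scaling), readout ~ N(0, 1/width),
    so for ||x||^2 = d the output variance is approximately 1.
    """
    d = x.shape[0]
    W1 = rng.normal(0.0, np.sqrt(2.0 / d), size=(width, d))
    w2 = rng.normal(0.0, np.sqrt(1.0 / width), size=width)
    h = np.maximum(W1 @ x, 0.0)  # ReLU hidden layer
    return w2 @ h

# Evaluate many independently sampled wide networks at one fixed input.
d, width, n_draws = 10, 2048, 5000
x = np.ones(d)  # ||x||^2 = d by construction
outputs = np.array([random_relu_net(x, width) for _ in range(n_draws)])

# Empirical statistics should be close to those of N(0, 1).
mean, var = outputs.mean(), outputs.var()
kurt = np.mean((outputs - mean) ** 4) / var**2  # Gaussian kurtosis is 3
print(f"mean={mean:.3f}  var={var:.3f}  kurtosis={kurt:.3f}")
```

Repeating the experiment with a small `width` (e.g., 2) shows visibly non-Gaussian output statistics, which is the finite-width regime the Gaussian-mixture analyses address.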