Shallow Network
Shallow neural networks, characterized by a single hidden layer, are a focus of ongoing research that aims to understand their limitations and potential advantages relative to deeper architectures. Current work applies them to function approximation, image restoration, and operator learning, often using techniques such as random projections and tailored optimization strategies to improve performance and efficiency. This renewed interest stems from the desire for computationally efficient, interpretable models, as well as from the need for a deeper theoretical understanding of their approximation capabilities and implicit biases, particularly in high-dimensional settings.
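To make the setting concrete, below is a minimal sketch of a shallow (single-hidden-layer) network used as a random-projection function approximator: the input-to-hidden weights are drawn at random and frozen, and only the linear output layer is fit by ridge-regularized least squares. The function names, network width, and regularization value are illustrative assumptions, not taken from any particular paper cited on this page.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_shallow_random_features(x, y, width=200, ridge=1e-6):
    """Fit the output weights of a one-hidden-layer tanh network whose
    input-to-hidden projections are random and fixed.
    x: (n, d) inputs, y: (n,) targets."""
    d = x.shape[1]
    W = rng.normal(scale=1.0, size=(d, width))   # random projection (frozen)
    b = rng.uniform(-np.pi, np.pi, size=width)   # random biases (frozen)
    H = np.tanh(x @ W + b)                       # hidden features, shape (n, width)
    # Only the output layer is trained: ridge-regularized least squares.
    A = H.T @ H + ridge * np.eye(width)
    c = np.linalg.solve(A, H.T @ y)
    return W, b, c

def predict(x, W, b, c):
    return np.tanh(x @ W + b) @ c

# Toy 1-D example: approximate sin(2x) on [-3, 3].
x_train = rng.uniform(-3, 3, size=(500, 1))
y_train = np.sin(2 * x_train[:, 0])
W, b, c = fit_shallow_random_features(x_train, y_train)
x_test = np.linspace(-3, 3, 200)[:, None]
print(np.max(np.abs(predict(x_test, W, b, c) - np.sin(2 * x_test[:, 0]))))
```

Because only the last layer is optimized, training reduces to a convex problem, which is one reason random-projection variants of shallow networks are attractive for analysis and for computationally cheap approximation.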