Ridgelet Transform
The ridgelet transform is a mathematical tool that maps a function to a distribution over a neural network's hidden-layer parameters, offering insight into how networks represent functions and learn. Current research focuses on extending the transform's applicability to diverse architectures, including fully connected networks, group convolutional networks, and networks operating on non-Euclidean spaces, with the aim of unified universality theorems covering both shallow and deep models. This work deepens the understanding of neural network function approximation and may lead to improved network design, parameter initialization strategies, and more efficient training algorithms. Quantum implementations of the ridgelet transform are also being explored as a way to accelerate learning tasks.
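The core idea can be illustrated concretely: in the ridgelet picture, a shallow network is a discretization of an integral of ridge functions over parameter space, with the ridgelet transform of the target supplying the output weights. The sketch below (parameter choices and the use of `tanh` features are illustrative assumptions, not taken from any specific paper) samples hidden-layer parameters and fits the output weights by least squares, mimicking that discretized representation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target function to represent with a shallow network.
def f(x):
    return np.sin(x)

# Sample hidden-layer parameters (a, b); each neuron computes tanh(a*x - b).
n_neurons = 200
a = rng.normal(0.0, 2.0, n_neurons)
b = rng.uniform(-3.0, 3.0, n_neurons)

# Feature matrix of ridge functions tanh(a_j * x - b_j) on a grid.
x = np.linspace(-np.pi, np.pi, 400)
Phi = np.tanh(np.outer(x, a) - b)  # shape (400, n_neurons)

# Output weights c so that f(x) ≈ sum_j c_j * tanh(a_j * x - b_j).
# In the ridgelet viewpoint, c_j plays the role of a discretized
# ridgelet transform R[f](a_j, b_j) evaluated at the sampled parameters.
c, *_ = np.linalg.lstsq(Phi, f(x), rcond=None)

mse = np.mean((Phi @ c - f(x)) ** 2)
print(f"MSE of shallow-network reconstruction: {mse:.2e}")
```

With a few hundred randomly sampled neurons the reconstruction error for a smooth target is typically very small, which is the finite-width shadow of the universality results the transform is used to prove.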