Neural Network
Neural networks are computational models, loosely inspired by the structure and function of the brain, that learn from data to approximate complex functions and solve diverse problems. Current research emphasizes efficiency and robustness: novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, alongside methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are driving progress in computer vision, natural language processing, and scientific modeling by enabling more accurate, efficient, and interpretable AI systems.
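One of the directions named above, sinusoidal neural fields, can be illustrated with a minimal SIREN-style sketch: an MLP that maps coordinates to signal values using sine activations with scaled initialization. This is a hedged toy example in NumPy, not any specific paper's implementation; the function names, `omega` value, and layer sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sine_layer(x, in_dim, out_dim, omega=30.0, first=False):
    # SIREN-style initialization (hypothetical sketch): the uniform bounds
    # keep pre-activations well-scaled so deep sine networks stay trainable.
    bound = 1.0 / in_dim if first else np.sqrt(6.0 / in_dim) / omega
    W = rng.uniform(-bound, bound, size=(in_dim, out_dim))
    b = rng.uniform(-bound, bound, size=out_dim)
    return np.sin(omega * (x @ W + b))

# A neural field maps coordinates directly to signal values,
# e.g. 2-D (x, y) positions to a scalar intensity.
coords = rng.uniform(-1, 1, size=(5, 2))        # five sample points
h = sine_layer(coords, 2, 16, first=True)       # first sinusoidal layer
h = sine_layer(h, 16, 16)                       # hidden sinusoidal layer
out = h @ rng.uniform(-0.1, 0.1, size=(16, 1))  # linear read-out
```

Because sine activations are periodic and smooth, such fields represent high-frequency detail with relatively few parameters, which is one reason they appear in the efficiency-oriented work listed below.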
Papers - Page 11
Fourier-enhanced Neural Networks For Systems Biology Applications
Implicit Language Models are RNNs: Balancing Parallelization and Expressivity
What makes a good feedforward computational graph?
Microcanonical Langevin Ensembles: Advancing the Sampling of Bayesian Neural Networks
The impact of allocation strategies in subset learning on the expressive power of neural networks
Application of quantum machine learning using quantum kernel algorithms on multiclass neuron M type classification
Spectral-factorized Positive-definite Curvature Learning for NN Training
Study on Downlink CSI compression: Are Neural Networks the Only Solution?