Neural Network
Neural networks are computational models inspired by the structure and function of the brain, aimed at approximating complex functions and solving diverse problems by learning from data. Current research emphasizes efficiency and robustness: novel architectures such as sinusoidal neural fields, hybrid models that combine neural networks with radial basis functions, and methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are driving progress in computer vision, natural language processing, and scientific modeling by enabling more accurate, efficient, and interpretable AI systems.
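To make the idea of "approximating complex functions by learning from data" concrete, here is a minimal, self-contained sketch (an illustrative example, not drawn from any of the papers below): a one-hidden-layer ReLU network trained with full-batch gradient descent to fit sin(πx) on [-1, 1]. The width, learning rate, and target function are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(256, 1))   # training inputs
y = np.sin(np.pi * X)                       # target function to approximate

# One hidden layer of 32 ReLU units followed by a linear output.
W1 = rng.normal(0.0, 1.0, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.1, (32, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.maximum(X @ W1 + b1, 0.0)        # ReLU hidden activations
    return h, h @ W2 + b2                   # linear readout

_, pred0 = forward(X)
mse0 = float(np.mean((pred0 - y) ** 2))     # loss before training

for step in range(2000):
    h, pred = forward(X)
    err = pred - y                          # gradient of 0.5 * MSE w.r.t. pred
    # Backpropagate through the output layer, then the ReLU layer.
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)             # ReLU passes gradient only where active
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))       # loss after training (should be much smaller)
```

The same loop structure (forward pass, loss gradient, backpropagation, parameter update) underlies the far larger models discussed in the papers listed below; spiking networks, physics-informed networks, and neural fields mainly change the architecture or the loss, not this basic recipe.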
Papers
Training Neural Networks as Recognizers of Formal Languages
Randomized Forward Mode Gradient for Spiking Neural Networks in Scientific Machine Learning
Generative Feature Training of Thin 2-Layer Networks
Evolving Efficient Genetic Encoding for Deep Spiking Neural Networks
MP-PINN: A Multi-Phase Physics-Informed Neural Network for Epidemic Forecasting
A Text Classification Model Combining Adversarial Training with Pre-trained Language Model and Neural Networks: A Case Study on Telecom Fraud Incident Texts
Precision Glass Thermoforming Assisted by Neural Networks
On the Principles of ReLU Networks with One Hidden Layer