Neural Network
Neural networks are computational models inspired by the structure and function of the brain, designed to approximate complex functions and solve diverse problems by learning from data. Current research emphasizes improving efficiency and robustness; exploring novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions; and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances enable more accurate, efficient, and interpretable AI systems, driving progress in computer vision, natural language processing, and scientific modeling.
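To ground the function-approximation view above, here is a minimal sketch of a sinusoidal neural field (in the spirit of SIREN-style networks) fit to a 1-D signal with plain NumPy gradient descent. All hyperparameters (omega_0, layer width, learning rate, target signal) are illustrative assumptions, not taken from any of the papers listed below.

    import numpy as np

    rng = np.random.default_rng(0)
    omega_0 = 30.0  # frequency scaling typical of sinusoidal fields

    # Two-layer network: y_hat = W2 @ sin(omega_0 * (W1 @ x + b1)) + b2
    W1 = rng.uniform(-1.0, 1.0, (64, 1))
    b1 = rng.uniform(-1.0, 1.0, (64, 1))
    W2 = rng.uniform(-np.sqrt(6 / 64) / omega_0, np.sqrt(6 / 64) / omega_0, (1, 64))
    b2 = np.zeros((1, 1))

    x = np.linspace(-1, 1, 256).reshape(1, -1)  # input coordinates
    y = np.sin(4 * np.pi * x)                   # target signal to approximate

    lr = 1e-4
    for step in range(2000):
        pre = omega_0 * (W1 @ x + b1)           # pre-activations, shape (64, 256)
        h = np.sin(pre)                         # sinusoidal hidden layer
        y_hat = W2 @ h + b2
        err = y_hat - y                         # residual, shape (1, 256)
        # Backpropagation of the mean-squared error through both layers.
        dW2 = err @ h.T / x.shape[1]
        db2 = err.mean(axis=1, keepdims=True)
        dh = (W2.T @ err) * np.cos(pre) * omega_0
        dW1 = dh @ x.T / x.shape[1]
        db1 = dh.mean(axis=1, keepdims=True)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    print("final MSE:", float((err ** 2).mean()))

The sine activation with frequency scaling omega_0 is what distinguishes a sinusoidal field from a standard MLP: it lets a small network represent high-frequency detail in the target signal.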
Papers
Additive regularization schedule for neural architecture search
Mark Potanin, Kirill Vayser, Vadim Strijov
Ensuring Both Positivity and Stability Using Sector-Bounded Nonlinearity for Systems with Neural Network Controllers
Hamidreza Montazeri Hedesh, Milad Siami
Evolutionary Spiking Neural Networks: A Survey
Shuaijie Shen, Rui Zhang, Chao Wang, Renzhuo Huang, Aiersi Tuerhong, Qinghai Guo, Zhichao Lu, Jianguo Zhang, Luziwei Leng
FullCert: Deterministic End-to-End Certification for Training and Inference of Neural Networks
Tobias Lorenz, Marta Kwiatkowska, Mario Fritz
Just How Flexible are Neural Networks in Practice?
Ravid Shwartz-Ziv, Micah Goldblum, Arpit Bansal, C. Bayan Bruss, Yann LeCun, Andrew Gordon Wilson
Analysing the Behaviour of Tree-Based Neural Networks in Regression Tasks
Peter Samoaa, Mehrdad Farahani, Antonio Longa, Philipp Leitner, Morteza Haghir Chehreghani
How Neural Networks Learn the Support is an Implicit Regularization Effect of SGD
Pierfrancesco Beneventano, Andrea Pinto, Tomaso Poggio
Kolmogorov Arnold Informed neural network: A physics-informed deep learning framework for solving forward and inverse problems based on Kolmogorov Arnold Networks
Yizheng Wang, Jia Sun, Jinshuai Bai, Cosmin Anitescu, Mohammad Sadegh Eshaghi, Xiaoying Zhuang, Timon Rabczuk, Yinghua Liu
Latent Communication in Artificial Neural Networks
Luca Moschella
Calibrating Neural Networks' parameters through Optimal Contraction in a Prediction Problem
Valdes Gonzalo
Robust Image Classification in the Presence of Out-of-Distribution and Adversarial Samples Using Attractors in Neural Networks
Nasrin Alipour, Seyyed Ali SeyyedSalehi
Grad-Instructor: Universal Backpropagation with Explainable Evaluation Neural Networks for Meta-learning and AutoML
Ryohei Ino
Automated Design of Linear Bounding Functions for Sigmoidal Nonlinearities in Neural Networks
Matthias König, Xiyue Zhang, Holger H. Hoos, Marta Kwiatkowska, Jan N. van Rijn
Over-parameterization and Adversarial Robustness in Neural Networks: An Overview and Empirical Analysis
Zhang Chen, Luca Demetrio, Srishti Gupta, Xiaoyi Feng, Zhaoqiang Xia, Antonio Emanuele Cinà, Maura Pintor, Luca Oneto, Ambra Demontis, Battista Biggio, Fabio Roli
An elementary proof of a universal approximation theorem
Chris Monico
Rule Based Learning with Dynamic (Graph) Neural Networks
Florian Seiffarth
Evolving Self-Assembling Neural Networks: From Spontaneous Activity to Experience-Dependent Learning
Erwan Plantec, Joachin W. Pedersen, Milton L. Montero, Eleni Nisioti, Sebastian Risi
Towards Efficient Pareto Set Approximation via Mixture of Experts Based Model Fusion
Anke Tang, Li Shen, Yong Luo, Shiwei Liu, Han Hu, Bo Du
An Efficient Approach to Regression Problems with Tensor Neural Networks
Yongxin Li, Yifan Wang, Zhongshuo Lin, Hehu Xie