Neural Network
Neural networks are computational models inspired by the structure and function of the brain, built to approximate complex functions and solve diverse problems by learning from data. Current research emphasizes improving efficiency and robustness: it explores novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, and it develops methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances enable more accurate, efficient, and interpretable AI systems, driving progress in computer vision, natural language processing, and scientific modeling.
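To make one of the architectures mentioned above concrete, here is a minimal sketch of a sinusoidal neural field in the style of SIREN (Sitzmann et al., 2020): an MLP whose hidden layers apply sine activations scaled by a frequency factor omega_0. The layer widths, omega_0 = 30, and the toy 2D-coordinate-to-RGB task are illustrative assumptions, not taken from any paper listed below.

```python
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """Linear layer followed by a sine activation: x -> sin(omega_0 * (Wx + b))."""
    def __init__(self, in_features, out_features, omega_0=30.0, is_first=False):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        # Frequency-aware initialization in the spirit of the SIREN paper:
        # it keeps pre-activations in a range where sin() stays expressive.
        with torch.no_grad():
            if is_first:
                bound = 1.0 / in_features
            else:
                bound = (6.0 / in_features) ** 0.5 / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class SirenField(nn.Module):
    """Toy neural field mapping 2D coordinates to RGB values."""
    def __init__(self, in_features=2, hidden=256, out_features=3, layers=3):
        super().__init__()
        net = [SineLayer(in_features, hidden, is_first=True)]
        for _ in range(layers - 1):
            net.append(SineLayer(hidden, hidden))
        net.append(nn.Linear(hidden, out_features))  # linear output head
        self.net = nn.Sequential(*net)

    def forward(self, coords):
        return self.net(coords)

# Usage: query the field at a batch of (x, y) coordinates in [-1, 1]^2.
field = SirenField()
coords = torch.rand(1024, 2) * 2 - 1
rgb = field(coords)  # shape (1024, 3)
```

The sine activations, combined with the frequency-scaled initialization, are what let such fields capture fine high-frequency detail that same-sized ReLU networks tend to smooth out.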
Papers
Jump Diffusion-Informed Neural Networks with Transfer Learning for Accurate American Option Pricing under Data Scarcity
Qiguo Sun, Hanyue Huang, XiBei Yang, Yuwei Zhang
MALPOLON: A Framework for Deep Species Distribution Modeling
Theo Larcher, Lukas Picek, Benjamin Deneu, Titouan Lorieul, Maximilien Servajean, Alexis Joly
Dimension-independent learning rates for high-dimensional classification problems
Andres Felipe Lerma-Pineda, Philipp Petersen, Simon Frieder, Thomas Lukasiewicz
Similarity Learning with neural networks
Gabriel Sanfins, Fabio Ramos, Danilo Naiff
PGN: The RNN's New Successor is Effective for Long-Range Time Series Forecasting
Yuxin Jia, Youfang Lin, Jing Yu, Shuo Wang, Tianhao Liu, Huaiyu Wan
Benign or Not-Benign Overfitting in Token Selection of Attention Mechanism
Keitaro Sakamoto, Issei Sato
A novel application of Shapley values for large multidimensional time-series data: Applying explainable AI to a DNA profile classification neural network
Lauren Elborough, Duncan Taylor, Melissa Humphries
Deep Manifold Part 1: Anatomy of Neural Network Manifold
Max Y. Ma, Gen-Hua Shi
Non-asymptotic convergence analysis of the stochastic gradient Hamiltonian Monte Carlo algorithm with discontinuous stochastic gradient with applications to training of ReLU neural networks
Luxu Liang, Ariel Neufeld, Ying Zhang
BitQ: Tailoring Block Floating Point Precision for Improved DNN Efficiency on Resource-Constrained Devices
Yongqi Xu, Yujian Lee, Gao Yi, Bosheng Liu, Yucong Chen, Peng Liu, Jigang Wu, Xiaoming Chen, Yinhe Han
Locally Regularized Sparse Graph by Fast Proximal Gradient Descent
Dongfang Sun, Yingzhen Yang
CombU: A Combined Unit Activation for Fitting Mathematical Expressions with Neural Networks
Jiayu Li, Zilong Zhao, Kevin Yee, Uzair Javaid, Biplab Sikdar
Interpreting Deep Neural Network-Based Receiver Under Varying Signal-To-Noise Ratios
Marko Tuononen, Dani Korpi, Ville Hautamäki
Numerical Approximation Capacity of Neural Networks with Bounded Parameters: Do Limits Exist, and How Can They Be Measured?
Li Liu, Tengchao Yu, Heng Yong
Statistical tuning of artificial neural network
Mohamad Yamen AL Mohamad, Hossein Bevrani, Ali Akbar Haydari
Assessing Simplification Levels in Neural Networks: The Impact of Hyperparameter Configurations on Complexity and Sensitivity
(Joy) Huixin Guan
Identification For Control Based on Neural Networks: Approximately Linearizable Models
Maxime Thieffry, Alexandre Hache, Mohamed Yagoubi, Philippe Chevrel
Training Neural Networks for Modularity aids Interpretability
Satvik Golechha, Dylan Cope, Nandi Schoots
Data Poisoning-based Backdoor Attack Framework against Supervised Learning Rules of Spiking Neural Networks
Lingxin Jin, Meiyu Lin, Wei Jiang, Jinyu Zhan