Neural Network
Neural networks are computational models, inspired by the structure and function of the brain, that approximate complex functions and solve diverse problems by learning from data. Current research emphasizes efficiency and robustness: exploring novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are driving progress in computer vision, natural language processing, and scientific modeling by enabling more accurate, efficient, and interpretable AI systems.
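To make the "sinusoidal neural field" idea above concrete, here is a minimal sketch of a SIREN-style coordinate network: an MLP with sine activations that maps input coordinates to field values. The layer sizes, frequency scale `w0 = 30`, and initialisation bounds follow the commonly used SIREN formulation but are illustrative assumptions, not taken from any paper listed here.

```python
import numpy as np

rng = np.random.default_rng(0)
W0 = 30.0  # frequency scale commonly used in SIREN-style networks

def init_layer(n_in, n_out, first=False):
    # SIREN-style uniform init: U(-1/n_in, 1/n_in) for the first layer,
    # U(-sqrt(6/n_in)/W0, sqrt(6/n_in)/W0) for subsequent layers.
    bound = 1.0 / n_in if first else np.sqrt(6.0 / n_in) / W0
    W = rng.uniform(-bound, bound, size=(n_in, n_out))
    b = np.zeros(n_out)
    return W, b

def siren_forward(x, layers):
    # Hidden layers use sine activations; the output layer is linear.
    h = x
    *hidden, (W_out, b_out) = layers
    for i, (W, b) in enumerate(hidden):
        pre = h @ W + b
        h = np.sin(W0 * pre) if i == 0 else np.sin(pre)
    return h @ W_out + b_out

# The "field" maps 1-D coordinates in [-1, 1] to scalar values.
layers = [init_layer(1, 32, first=True), init_layer(32, 32), init_layer(32, 1)]
coords = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
values = siren_forward(coords, layers)
print(values.shape)  # (5, 1)
```

The sine activations give the network a built-in bias toward smooth, periodic structure, which is why such fields are popular for representing signals (images, shapes, audio) as continuous functions of coordinates.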
Papers
Tilting the Odds at the Lottery: the Interplay of Overparameterisation and Curricula in Neural Networks
Stefano Sarao Mannelli, Yaraslau Ivashynka, Andrew Saxe, Luca Saglietti
nn2poly: An R Package for Converting Neural Networks into Interpretable Polynomials
Pablo Morala, Jenny Alexandra Cifuentes, Rosa E. Lillo, Iñaki Ucar
Neural network learns low-dimensional polynomials with SGD near the information-theoretic limit
Jason D. Lee, Kazusato Oko, Taiji Suzuki, Denny Wu
Iteration over event space in time-to-first-spike spiking neural networks for Twitter bot classification
Mateusz Pabian, Dominik Rzepka, Mirosław Pawlak
Physics-Informed Neural Networks for Dynamic Process Operations with Limited Physical Knowledge and Data
Mehmet Velioglu, Song Zhai, Sophia Rupprecht, Alexander Mitsos, Andreas Jupke, Manuel Dahmen
Predicting the fatigue life of asphalt concrete using neural networks
Jakub Houlík, Jan Valentin, Václav Nežerka
An efficient Wasserstein-distance approach for reconstructing jump-diffusion processes using parameterized neural networks
Mingtao Xia, Xiangting Li, Qijing Shen, Tom Chou
Hardness of Learning Neural Networks under the Manifold Hypothesis
Bobak T. Kiani, Jason Wang, Melanie Weber
Evidence of Learned Look-Ahead in a Chess-Playing Neural Network
Erik Jenner, Shreyas Kapur, Vasil Georgiev, Cameron Allen, Scott Emmons, Stuart Russell
Amalgam: A Framework for Obfuscated Neural Network Training on the Cloud
Sifat Ut Taki, Spyridon Mastorakis
Gated recurrent neural network with TPE Bayesian optimization for enhancing stock index prediction accuracy
Bivas Dinda
Empirical influence functions to understand the logic of fine-tuning
Jordan K. Matelsky, Lyle Ungar, Konrad P. Kording
Activation-Descent Regularization for Input Optimization of ReLU Networks
Hongzhan Yu, Sicun Gao
Stochastic Restarting to Overcome Overfitting in Neural Networks with Noisy Labels
Youngkyoung Bae, Yeongwoo Song, Hawoong Jeong
Real-Time State Modulation and Acquisition Circuit in Neuromorphic Memristive Systems
Shengbo Wang, Cong Li, Tongming Pu, Jian Zhang, Weihao Ma, Luigi Occhipinti, Arokia Nathan, Shuo Gao