Neural Networks
Neural networks are computational models inspired by the structure and function of the brain; they learn from data to approximate complex functions and solve a wide range of problems. Current research emphasizes efficiency and robustness, exploring novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances enable more accurate, efficient, and interpretable models, driving progress in computer vision, natural language processing, and scientific modeling.
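As a concrete illustration of one of the architectures named above, here is a minimal sketch of a sinusoidal neural field (a SIREN-style coordinate network), assuming PyTorch. The layer structure and the frequency factor omega_0 = 30.0 follow the common SIREN convention and are illustrative assumptions, not details taken from any of the papers listed below.

```python
# Minimal sketch of a sinusoidal neural field, assuming PyTorch.
# A coordinate network maps input positions (e.g. 2-D pixel coordinates)
# to signal values, using sine activations to capture fine detail.
import torch
import torch.nn as nn


class SineLayer(nn.Module):
    def __init__(self, in_features, out_features, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0  # frequency scaling, per SIREN convention
        self.linear = nn.Linear(in_features, out_features)

    def forward(self, x):
        # Sinusoidal activation instead of ReLU.
        return torch.sin(self.omega_0 * self.linear(x))


class SinusoidalField(nn.Module):
    """Maps coordinates to signal values through stacked sine layers."""

    def __init__(self, in_features=2, hidden=256, out_features=1, layers=3):
        super().__init__()
        blocks = [SineLayer(in_features, hidden)]
        blocks += [SineLayer(hidden, hidden) for _ in range(layers - 1)]
        self.net = nn.Sequential(*blocks, nn.Linear(hidden, out_features))

    def forward(self, coords):
        return self.net(coords)


# Usage: the field is trained by regressing signal values at sampled coordinates.
field = SinusoidalField()
coords = torch.rand(1024, 2)   # random 2-D query points in [0, 1)^2
values = field(coords)         # predicted signal at those points
print(values.shape)            # torch.Size([1024, 1])
```

In practice such a field is fit to one signal (an image, shape, or physical field) by minimizing a reconstruction loss over coordinate-value pairs; the sine activations are what distinguish it from a standard ReLU multilayer perceptron.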
Papers
Characterization of topological structures in different neural network architectures
Paweł Świder
Efficiently Training Neural Networks for Imperfect Information Games by Sampling Information Sets
Timo Bertram, Johannes Fürnkranz, Martin Müller
Structural Generalization in Autonomous Cyber Incident Response with Message-Passing Neural Networks and Reinforcement Learning
Jakob Nyberg, Pontus Johnson
Revealing the Utilized Rank of Subspaces of Learning in Neural Networks
Isha Garg, Christian Koguchi, Eshan Verma, Daniel Ulbricht
Randomized Physics-Informed Neural Networks for Bayesian Data Assimilation
Yifei Zong, David Barajas-Solano, Alexandre M. Tartakovsky
Testing learning hypotheses using neural networks by manipulating learning data
Cara Su-Yi Leong, Tal Linzen
Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling
Alejandro Rodriguez-Garcia, Jie Mei, Srikanth Ramaswamy
G-Adaptive mesh refinement -- leveraging graph neural networks and differentiable finite element solvers
James Rowbottom, Georg Maierhofer, Teo Deveney, Katharina Schratz, Pietro Liò, Carola-Bibiane Schönlieb, Chris Budd
LayerShuffle: Enhancing Robustness in Vision Transformers by Randomizing Layer Execution Order
Matthias Freiberger, Peter Kun, Anders Sundnes Løvlie, Sebastian Risi
Exploiting the equivalence between quantum neural networks and perceptrons
Chris Mingard, Jessica Pointing, Charles London, Yoonsoo Nam, Ard A. Louis
PaSE: Parallelization Strategies for Efficient DNN Training
Venmugil Elango
Reduced-Order Neural Operators: Learning Lagrangian Dynamics on Highly Sparse Graphs
Hrishikesh Viswanath, Yue Chang, Julius Berner, Peter Yichen Chen, Aniket Bera
Psychology of Artificial Intelligence: Epistemological Markers of the Cognitive Analysis of Neural Networks
Michael Pichat
Implicit Hypersurface Approximation Capacity in Deep ReLU Networks
Jonatan Vallin, Karl Larsson, Mats G. Larson
Bias of Stochastic Gradient Descent or the Architecture: Disentangling the Effects of Overparameterization of Neural Networks
Amit Peleg, Matthias Hein