Neural Network
Neural networks are computational models inspired by the structure and function of the brain, aimed primarily at approximating complex functions and solving diverse problems by learning from data. Current research emphasizes improving efficiency and robustness, exploring novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are driving progress in fields including computer vision, natural language processing, and scientific modeling by enabling more accurate, efficient, and interpretable AI systems.
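To make the function-approximation idea concrete, below is a minimal NumPy sketch of a tiny one-hidden-layer network with sine activations (a toy nod to the sinusoidal neural fields mentioned above) fit to a 1-D signal by plain gradient descent. The architecture, target function, and hyperparameters are illustrative assumptions chosen for the demo; they are not taken from any of the papers listed below.

```python
# Minimal sketch: a one-hidden-layer network with sine activations
# fit to a 1-D signal. All sizes and hyperparameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Training data: sample a 1-D target function on [-1, 1].
X = np.linspace(-1.0, 1.0, 256).reshape(-1, 1)
Y = np.sin(3.0 * np.pi * X)

# Parameters of a tiny MLP: 1 -> H -> 1, sine activation in the hidden layer.
H, omega0, lr = 64, 12.0, 2e-2
W1 = rng.uniform(-1.0, 1.0, size=(1, H)) * omega0
b1 = np.zeros(H)
W2 = rng.uniform(-1.0, 1.0, size=(H, 1)) / np.sqrt(H)
b2 = np.zeros(1)

for step in range(5000):
    # Forward pass.
    Z = X @ W1 + b1          # pre-activations, shape (N, H)
    A = np.sin(Z)            # sinusoidal hidden features
    Yhat = A @ W2 + b2       # network output, shape (N, 1)
    loss = np.mean((Yhat - Y) ** 2)

    # Backward pass (manual gradients of the mean-squared error).
    dY = 2.0 * (Yhat - Y) / len(X)
    dW2 = A.T @ dY
    db2 = dY.sum(axis=0)
    dA = dY @ W2.T
    dZ = dA * np.cos(Z)      # derivative of sin is cos
    dW1 = X.T @ dZ
    db1 = dZ.sum(axis=0)

    # Plain gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

    if step % 1000 == 0:
        print(f"step {step:4d}  mse {loss:.5f}")
```

Running the loop, the mean-squared error drops steadily as the hidden sinusoids align with the target signal; the same pattern (forward pass, loss, gradients, parameter update) underlies the far larger models surveyed in the papers below.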
Papers
Using Low-Discrepancy Points for Data Compression in Machine Learning: An Experimental Comparison
Simone Göttlich, Jacob Heieck, Andreas Neuenkirch
Explaining Spectrograms in Machine Learning: A Study on Neural Networks for Speech Classification
Jesin James, Balamurali B. T., Binu Abeysinghe, Junchen Liu
INSIGHT: Universal Neural Simulator for Analog Circuits Harnessing Autoregressive Transformers
Souradip Poddar, Youngmin Oh, Yao Lai, Hanqing Zhu, Bosun Hwang, David Z. Pan
Characterization of topological structures in different neural network architectures
Paweł Świder
Efficiently Training Neural Networks for Imperfect Information Games by Sampling Information Sets
Timo Bertram, Johannes Fürnkranz, Martin Müller
Structural Generalization in Autonomous Cyber Incident Response with Message-Passing Neural Networks and Reinforcement Learning
Jakob Nyberg, Pontus Johnson
Revealing the Utilized Rank of Subspaces of Learning in Neural Networks
Isha Garg, Christian Koguchi, Eshan Verma, Daniel Ulbricht
Randomized Physics-Informed Neural Networks for Bayesian Data Assimilation
Yifei Zong, David Barajas-Solano, Alexandre M. Tartakovsky
Testing learning hypotheses using neural networks by manipulating learning data
Cara Su-Yi Leong, Tal Linzen
Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling
Alejandro Rodriguez-Garcia, Jie Mei, Srikanth Ramaswamy
G-Adaptive mesh refinement -- leveraging graph neural networks and differentiable finite element solvers
James Rowbottom, Georg Maierhofer, Teo Deveney, Katharina Schratz, Pietro Liò, Carola-Bibiane Schönlieb, Chris Budd
LayerShuffle: Enhancing Robustness in Vision Transformers by Randomizing Layer Execution Order
Matthias Freiberger, Peter Kun, Anders Sundnes Løvlie, Sebastian Risi
Exploiting the equivalence between quantum neural networks and perceptrons
Chris Mingard, Jessica Pointing, Charles London, Yoonsoo Nam, Ard A. Louis