Neural Network
Neural networks are computational models, inspired by the structure and function of the brain, that learn from data to approximate complex functions and solve diverse problems. Current research emphasizes improving efficiency and robustness: exploring novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are driving progress in computer vision, natural language processing, and scientific modeling by enabling more accurate, efficient, and interpretable AI systems.
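To make the "sinusoidal neural field" idea mentioned above concrete, here is a minimal sketch of such a field: a small network with sine activations that maps coordinates to signal values. The initialisation bounds and the frequency factor `omega_0 = 30` follow the common SIREN convention; all function names are illustrative, not taken from any paper listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

def siren_init(fan_in, fan_out, omega_0=30.0, first=False):
    """Uniform init that keeps pre-activations well-scaled for sin()."""
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / omega_0
    return rng.uniform(-bound, bound, size=(fan_in, fan_out))

def siren_forward(x, W1, W2, omega_0=30.0):
    """Coordinates -> sine-activated hidden layer -> linear output."""
    h = np.sin(omega_0 * (x @ W1))  # hidden features, shape (n, 16)
    return h @ W2                   # field values, shape (n, 1)

# Evaluate an (untrained) field on 1-D coordinates in [-1, 1].
x = np.linspace(-1.0, 1.0, 8).reshape(-1, 1)
W1 = siren_init(1, 16, first=True)
W2 = siren_init(16, 1)
y = siren_forward(x, W1, W2)
print(y.shape)  # (8, 1)
```

Training such a field (e.g. by gradient descent on a reconstruction loss) turns it into a continuous, differentiable representation of an image or signal, which is what makes these architectures attractive for the efficiency work surveyed here.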
Papers
ANDHRA Bandersnatch: Training Neural Networks to Predict Parallel Realities
Venkata Satya Sai Ajay Daliparthi
A spiking photonic neural network of 40,000 neurons, trained with rank-order coding for leveraging sparsity
Ria Talukder, Anas Skalli, Xavier Porte, Simon Thorpe, Daniel Brunner
GRU-PFG: Extract Inter-Stock Correlation from Stock Factors with Graph Neural Network
Yonggai Zhuang, Haoran Chen, Kequan Wang, Teng Fei
Boundary-Decoder network for inverse prediction of capacitor electrostatic analysis
Kart-Leong Lim, Rahul Dutta, Mihai Rotaru
One-Step Early Stopping Strategy using Neural Tangent Kernel Theory and Rademacher Complexity
Daniel Martin Xavier, Ludovic Chamoin, Jawher Jerray, Laurent Fribourg
FreqX: What neural networks learn is what network designers say
Zechen Liu
Enhancing Computer Vision with Knowledge: a Rummikub Case Study
Simon Vandevelde, Laurent Mertens, Sverre Lauwers, Joost Vennekens
ExpTest: Automating Learning Rate Searching and Tuning with Insights from Linearized Neural Networks
Zan Chaudhry, Naoko Mizuno
Fast training of large kernel models with delayed projections
Amirhesam Abedsoltan, Siyuan Ma, Parthe Pandit, Mikhail Belkin
Harnessing Superclasses for Learning from Hierarchical Databases
Nicolas Urbani (Heudiasyc), Sylvain Rousseau (Heudiasyc), Yves Grandvalet (Heudiasyc), Leonardo Tanzi (Polito)
DF-GNN: Dynamic Fusion Framework for Attention Graph Neural Networks on GPUs
Jiahui Liu, Zhenkun Cai, Zhiyong Chen, Minjie Wang
HiDP: Hierarchical DNN Partitioning for Distributed Inference on Heterogeneous Edge Platforms
Zain Taufique, Aman Vyas, Antonio Miele, Pasi Liljeberg, Anil Kanduri
AdamZ: An Enhanced Optimisation Method for Neural Network Training
Ilia Zaznov (Department of Computer Science, University of Reading, Reading, UK), Atta Badii (Department of Computer Science, University of Reading, Reading, UK), Alfonso Dufour (ICMA Centre, Henley Business School, University of Reading, Reading, UK), Julian Kunkel (Department of Computer Science/GWDG, University of Göttingen, Goettingen, Germany)
Towards Speaker Identification with Minimal Dataset and Constrained Resources using 1D-Convolution Neural Network
Irfan Nafiz Shahan, Pulok Ahmed Auvi
Comparative Study of Neural Network Methods for Solving Topological Solitons
Koji Hashimoto, Koshiro Matsuo, Masaki Murata, Gakuto Ogiwara
Proportional infinite-width infinite-depth limit for deep linear neural networks
Federico Bassetti, Lucia Ladelli, Pietro Rotondo
Analytic Continuation by Feature Learning
Zhe Zhao, Jingping Xu, Ce Wang, Yaping Yang