Neural Networks
Neural networks are computational models, loosely inspired by the structure and function of the brain, that approximate complex functions by learning from data. Current research emphasizes efficiency and robustness: it explores novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, and it develops methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are enabling more accurate, efficient, and interpretable AI systems across computer vision, natural language processing, and scientific modeling.
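To make the core idea of function approximation by learning from data concrete, here is a minimal sketch (not drawn from any paper listed below) of a one-hidden-layer network fit to samples of sin(x) by plain gradient descent; the layer sizes, learning rate, and step count are illustrative assumptions.

import numpy as np

# Minimal sketch: a one-hidden-layer network trained by gradient descent
# to approximate y = sin(x) from sampled data. All sizes and hyperparameters
# are illustrative assumptions, not taken from any paper listed below.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(256, 1))    # training inputs
y = np.sin(X)                                    # target function values

W1 = rng.normal(0, 0.5, size=(1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)
lr = 0.05

for step in range(5000):
    # Forward pass with tanh hidden units.
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - y                               # MSE gradient up to a constant factor
    # Backward pass (chain rule), averaged over the batch.
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h**2)               # derivative of tanh is 1 - tanh^2
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

print("final MSE:", float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2)))

The same gradient-descent training loop, at much larger scale, underlies the architectures studied in the papers below.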
Papers
Residual Random Neural Networks
M. Andrecut
Prediction of Final Phosphorus Content of Steel in a Scrap-Based Electric Arc Furnace Using Artificial Neural Networks
Riadh Azzaz, Valentin Hurel, Patrice Menard, Mohammad Jahazi, Samira Ebrahimi Kahou, Elmira Moosavi-Khoonsari
Simmering: Sufficient is better than optimal for training neural networks
Irina Babayan, Hazhir Aliahmadi, Greg van Anders
AGENT-CQ: Automatic Generation and Evaluation of Clarifying Questions for Conversational Search with LLMs
Clemencia Siro, Yifei Yuan, Mohammad Aliannejadi, Maarten de Rijke
A distributional simplicity bias in the learning dynamics of transformers
Riccardo Rende, Federica Gerace, Alessandro Laio, Sebastian Goldt
Robotic Learning in your Backyard: A Neural Simulator from Open Source Components
Liyou Zhou, Oleg Sinavski, Athanasios Polydoros
Ensembling Finetuned Language Models for Text Classification
Sebastian Pineda Arango, Maciej Janowski, Lennart Purucker, Arber Zela, Frank Hutter, Josif Grabocka
Interpreting Neural Networks through Mahalanobis Distance
Alan Oursland
Initialization Matters: On the Benign Overfitting of Two-Layer ReLU CNN with Fully Trainable Layers
Shuning Shang, Xuran Meng, Yuan Cao, Difan Zou
Provable Tempered Overfitting of Minimal Nets and Typical Nets
Itamar Harel, William M. Hoza, Gal Vardi, Itay Evron, Nathan Srebro, Daniel Soudry
NIDS Neural Networks Using Sliding Time Window Data Processing with Trainable Activations and its Generalization Capability
Anton Raskovalov, Nikita Gabdullin, Ilya Androsov
Spatial-Temporal Search for Spiking Neural Networks
Kaiwei Che, Zhaokun Zhou, Li Yuan, Jianguo Zhang, Yonghong Tian, Luziwei Leng
Hamiltonian Matching for Symplectic Neural Integrators
Priscilla Canizares, Davide Murari, Carola-Bibiane Schönlieb, Ferdia Sherry, Zakhar Shumaylov
Lightweight Neural App Control
Filippos Christianos, Georgios Papoudakis, Thomas Coste, Jianye Hao, Jun Wang, Kun Shao
Escaping the Forest: Sparse Interpretable Neural Networks for Tabular Data
Salvatore Raieli, Abdulrahman Altahhan, Nathalie Jeanray, Stéphane Gerart, Sebastien Vachenc
Exploring structure diversity in atomic resolution microscopy with graph neural networks
Zheng Luo, Ming Feng, Zijian Gao, Jinyang Yu, Liang Hu, Tao Wang, Shenao Xue, Shen Zhou, Fangping Ouyang, Dawei Feng, Kele Xu, Shanshan Wang
Physics-informed Neural Networks for Functional Differential Equations: Cylindrical Approximation and Its Convergence Guarantees
Taiki Miyagawa, Takeru Yokota
Neuropsychology and Explainability of AI: A Distributional Approach to the Relationship Between Activation Similarity of Neural Categories in Synthetic Cognition
Michael Pichat, Enola Campoli, William Pogrund, Jourdan Wilson, Michael Veillet-Guillem, Anton Melkozerov, Paloma Pichat, Armanush Gasparian, Samuel Demarchi, Judicael Poumay
Learning Fair and Preferable Allocations through Neural Network
Ryota Maruo, Koh Takeuchi, Hisashi Kashima