Neural Network
Neural networks are computational models inspired by the structure and function of the brain, primarily aimed at approximating complex functions and solving diverse problems through learning from data. Current research emphasizes improving efficiency and robustness, exploring novel architectures like sinusoidal neural fields and hybrid models combining neural networks with radial basis functions, as well as developing methods for understanding and manipulating the internal representations learned by these networks, such as through hyper-representations of network weights. These advancements are driving progress in various fields, including computer vision, natural language processing, and scientific modeling, by enabling more accurate, efficient, and interpretable AI systems.
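One of the directions mentioned above, hybrid models combining neural networks with radial basis functions, can be illustrated with a minimal sketch: an MLP branch and a Gaussian RBF branch whose outputs are blended by a mixing weight. This is a toy NumPy illustration of the general idea only, not the method of any paper listed below; all names, shapes, and the fixed mixing weight `alpha` are assumptions (in adaptive hybrids such a weight would itself be learned).

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_branch(x, W1, b1, W2, b2):
    # Single hidden layer with tanh activation.
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def rbf_branch(x, centers, gamma, w):
    # Gaussian radial basis functions centered at fixed points.
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    phi = np.exp(-gamma * d2)
    return phi @ w

# Toy 1-D input batch and randomly initialized parameters.
x = np.linspace(-1, 1, 8).reshape(-1, 1)
W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
centers = np.linspace(-1, 1, 10).reshape(-1, 1)
w = rng.normal(size=(10, 1))

# Fixed blend for illustration; hybrid approaches adapt this during training.
alpha = 0.5
y = alpha * mlp_branch(x, W1, b1, W2, b2) + (1 - alpha) * rbf_branch(x, centers, 2.0, w)
print(y.shape)  # (8, 1)
```

The RBF branch captures localized features while the MLP branch models smooth global structure, which is the intuition behind combining the two.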
Papers
A Brain-Inspired Regularizer for Adversarial Robustness
Elie Attias, Cengiz Pehlevan, Dina Obeid
HyResPINNs: Adaptive Hybrid Residual Networks for Learning Optimal Combinations of Neural and RBF Components for Physics-Informed Modeling
Madison Cooley, Robert M. Kirby, Shandian Zhe, Varun Shankar
Fourier PINNs: From Strong Boundary Conditions to Adaptive Fourier Bases
Madison Cooley, Varun Shankar, Robert M. Kirby, Shandian Zhe
On the Hardness of Learning One Hidden Layer Neural Networks
Shuchen Li, Ilias Zadik, Manolis Zampetakis
Formation of Representations in Neural Networks
Liu Ziyin, Isaac Chuang, Tomer Galanti, Tomaso Poggio
DecTrain: Deciding When to Train a DNN Online
Zih-Sing Fu, Soumya Sudhakar, Sertac Karaman, Vivienne Sze
MANTRA: The Manifold Triangulations Assemblage
Rubén Ballester, Ernst Röell, Daniel Bin Schmid, Mathieu Alain, Sergio Escalera, Carles Casacuberta, Bastian Rieck
Simplicity bias and optimization threshold in two-layer ReLU networks
Etienne Boursier, Nicolas Flammarion
The Comparison of Individual Cat Recognition Using Neural Networks
Mingxuan Li, Kai Zhou
Robust Weight Initialization for Tanh Neural Networks with Fixed Point Analysis
Hyunwoo Lee, Hayoung Choi, Hyunju Kim
Towards Better Generalization: Weight Decay Induces Low-rank Bias for Neural Networks
Ke Chen, Chugang Yi, Haizhao Yang
Searching for Efficient Linear Layers over a Continuous Space of Structured Matrices
Andres Potapczynski, Shikai Qiu, Marc Finzi, Christopher Ferri, Zixi Chen, Micah Goldblum, Bayan Bruss, Christopher De Sa, Andrew Gordon Wilson
Composing Global Optimizers to Reasoning Tasks via Algebraic Objects in Neural Nets
Yuandong Tian
Towards Model Discovery Using Domain Decomposition and PINNs
Tirtho S. Saha, Alexander Heinlein, Cordula Reisch
Bayes' Power for Explaining In-Context Learning Generalizations
Samuel Müller, Noah Hollmann, Frank Hutter