Neural Network
Neural networks are computational models inspired by the structure and function of the brain, aimed at approximating complex functions and solving diverse problems by learning from data. Current research emphasizes improving efficiency and robustness: exploring novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are driving progress in computer vision, natural language processing, and scientific modeling by enabling more accurate, efficient, and interpretable AI systems.
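The core idea above — a network learning to approximate a function from data — can be illustrated with a minimal sketch. This is an illustrative toy, not any method from the papers below: a one-hidden-layer MLP with a tanh activation, trained by full-batch gradient descent in plain NumPy to fit y = sin(x).

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: samples of the target function y = sin(x).
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of 16 tanh units, scalar output.
W1 = rng.normal(0, 0.5, (1, 16))
b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, (16, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(2000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)
    y_hat = h @ W2 + b2
    err = y_hat - y
    loss = np.mean(err ** 2)

    # Backward pass: manual gradients of the mean-squared-error loss.
    n = len(x)
    dy = 2 * err / n
    dW2 = h.T @ dy
    db2 = dy.sum(axis=0)
    dh = dy @ W2.T * (1 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ dh
    db1 = dh.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final MSE: {loss:.4f}")
```

After training, the squared error on the sampled interval is small, which is the "function approximation through learning" behavior the summary refers to; richer architectures and training rules, like those in the papers below, build on this same loop.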
Papers
Learning rigid-body simulators over implicit shapes for large-scale scenes and vision
Yulia Rubanova, Tatiana Lopez-Guevara, Kelsey R. Allen, William F. Whitney, Kimberly Stachenfeld, Tobias Pfaff
ReCycle: Resilient Training of Large DNNs using Pipeline Adaptation
Swapnil Gandhi, Mark Zhao, Athinagoras Skiadopoulos, Christos Kozyrakis
EchoSpike Predictive Plasticity: An Online Local Learning Rule for Spiking Neural Networks
Lars Graf, Zhe Su, Giacomo Indiveri
Visual Analysis of Prediction Uncertainty in Neural Networks for Deep Image Synthesis
Soumya Dutta, Faheem Nizar, Ahmad Amaan, Ayan Acharya
DeepNcode: Encoding-Based Protection against Bit-Flip Attacks on Neural Networks
Patrik Velčický, Jakub Breier, Mladen Kovačević, Xiaolu Hou
Counterfactual Gradients-based Quantification of Prediction Trust in Neural Networks
Mohit Prabhushankar, Ghassan AlRegib
Interpolation with deep neural networks with non-polynomial activations: necessary and sufficient numbers of neurons
Liam Madden
Bond Graphs for multi-physics informed Neural Networks for multi-variate time series
Alexis-Raja Brachet, Pierre-Yves Richard, Céline Hudelot
A theory of neural emulators
Catalin C. Mitelut
NFCL: Simply interpretable neural networks for a short-term multivariate forecasting
Wonkeun Jo, Dongil Kim
Interactive Simulations of Backdoors in Neural Networks
Peter Bajcsy, Maxime Bros
Graph neural networks informed locally by thermodynamics
Alicia Tierz, Iciar Alfaro, David González, Francisco Chinesta, Elías Cueto
FFCL: Forward-Forward Net with Cortical Loops, Training and Inference on Edge Without Backpropagation
Ali Karkehabadi, Houman Homayoun, Avesta Sasan
The Local Interaction Basis: Identifying Computationally-Relevant and Sparsely Interacting Features in Neural Networks
Lucius Bushnaq, Stefan Heimersheim, Nicholas Goldowsky-Dill, Dan Braun, Jake Mendel, Kaarel Hänni, Avery Griffin, Jörn Stöhler, Magdalena Wache, Marius Hobbhahn
Using Degeneracy in the Loss Landscape for Mechanistic Interpretability
Lucius Bushnaq, Jake Mendel, Stefan Heimersheim, Dan Braun, Nicholas Goldowsky-Dill, Kaarel Hänni, Cindy Wu, Marius Hobbhahn
Reduced storage direct tensor ring decomposition for convolutional neural networks compression
Mateusz Gabor, Rafał Zdunek