Neural Network
Neural networks are computational models inspired by the structure and function of the brain, aimed primarily at approximating complex functions and solving diverse problems by learning from data. Current research emphasizes improving efficiency and robustness, exploring novel architectures such as sinusoidal neural fields and hybrids of neural networks with radial basis functions, and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are driving progress in computer vision, natural language processing, and scientific modeling by enabling more accurate, efficient, and interpretable AI systems.
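To make the core idea of function approximation by learning from data concrete, the sketch below trains a one-hidden-layer network in NumPy to fit y = sin(x) from noisy samples. It is purely illustrative and not drawn from any of the papers listed: the layer width, learning rate, and sampled data are arbitrary choices for demonstration.

# Minimal sketch (illustrative only): a small fully connected network
# approximating y = sin(x), trained with plain gradient descent in NumPy.
import numpy as np

rng = np.random.default_rng(0)

# Training data: noisy samples of the target function.
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x) + 0.05 * rng.standard_normal(x.shape)

# One hidden layer with tanh activation (width chosen arbitrarily).
W1 = rng.standard_normal((1, 32)) * 0.5
b1 = np.zeros((1, 32))
W2 = rng.standard_normal((32, 1)) * 0.5
b2 = np.zeros((1, 1))

lr = 0.05
for step in range(2000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)          # hidden activations, shape (256, 32)
    pred = h @ W2 + b2                # network output, shape (256, 1)
    loss = np.mean((pred - y) ** 2)   # mean squared error

    # Backward pass: manual gradients of the MSE loss.
    n = x.shape[0]
    d_pred = 2.0 * (pred - y) / n
    dW2 = h.T @ d_pred
    db2 = d_pred.sum(axis=0, keepdims=True)
    d_h = d_pred @ W2.T
    d_pre = d_h * (1.0 - h ** 2)      # derivative of tanh
    dW1 = x.T @ d_pre
    db1 = d_pre.sum(axis=0, keepdims=True)

    # Gradient descent update.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final training MSE: {loss:.4f}")

Running the script drives the training error down to a small value, showing a network learning an approximation of the target function directly from samples; the papers below build on this basic principle with more specialized architectures, training schemes, and analysis methods.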
Papers
Quantized neural network for complex hologram generation
Yutaka Endo, Minoru Oikawa, Timothy D. Wilkinson, Tomoyoshi Shimobaba, Tomoyoshi Ito
Batch-FPM: Random batch-update multi-parameter physical Fourier ptychography neural network
Ruiqing Sun, Delong Yang, Yiyan Su, Shaohui Zhang, Qun Hao
Lecture Notes on Linear Neural Networks: A Tale of Optimization and Generalization in Deep Learning
Nadav Cohen, Noam Razin
Explainable Convolutional Networks for Crater Detection and Lunar Landing Navigation
Jianing Song, Nabil Aouf, Duarte Rondao, Christophe Honvault, Luis Mansilla
MPruner: Optimizing Neural Network Size with CKA-Based Mutual Information Pruning
Seungbeom Hu, ChanJun Park, Andrew Ferraiuolo, Sang-Ki Ko, Jinwoo Kim, Haein Song, Jieung Kim
Physics-Informed Neural Network for Concrete Manufacturing Process Optimization
Sam Varghese, Rahul Anand, Gaurav Paliwal
Applying graph neural network to SupplyGraph for supply chain network
Kihwan Han
N-DriverMotion: Driver motion learning and prediction using an event-based camera and directly trained spiking neural networks
Hyo Jong Chung, Byungkon Kang, Yoonseok Yang
JacNet: Learning Functions with Structured Jacobians
Jonathan Lorraine, Safwan Hossain
Verification of Geometric Robustness of Neural Networks via Piecewise Linear Approximation and Lipschitz Optimisation
Ben Batten, Yang Zheng, Alessandro De Palma, Panagiotis Kouvaros, Alessio Lomuscio
Hierarchical Attention and Parallel Filter Fusion Network for Multi-Source Data Classification
Han Luo, Feng Gao, Junyu Dong, Lin Qi
From Radiologist Report to Image Label: Assessing Latent Dirichlet Allocation in Training Neural Networks for Orthopedic Radiograph Classification
Jakub Olczak, Max Gordon
Multilevel Interpretability Of Artificial Neural Networks: Leveraging Framework And Methods From Neuroscience
Zhonghao He, Jascha Achterberg, Katie Collins, Kevin Nejad, Danyal Akarca, Yinzhu Yang, Wes Gurnee, Ilia Sucholutsky, Yuhan Tang, Rebeca Ianov, George Ogden, Chole Li, Kai Sandbrink, Stephen Casper, Anna Ivanova, Grace W. Lindsay
Pruning By Explaining Revisited: Optimizing Attribution Methods to Prune CNNs and Transformers
Sayed Mohammad Vakilzadeh Hatefi, Maximilian Dreyer, Reduan Achtibat, Thomas Wiegand, Wojciech Samek, Sebastian Lapuschkin
Deep Learning Improvements for Sparse Spatial Field Reconstruction
Robert Sunderhaft, Logan Frank, Jim Davis
Advanced atom-level representations for protein flexibility prediction utilizing graph neural networks
Sina Sarparast, Aldo Zaimi, Maximilian Ebert, Michael-Rock Goldsmith
Neural-ANOVA: Model Decomposition for Interpretable Machine Learning
Steffen Limmer, Steffen Udluft, Clemens Otte