Neural Network
Neural networks are computational models inspired by the structure and function of the brain; they approximate complex functions by learning from data and are applied to a wide range of problems. Current research emphasizes efficiency and robustness, exploring novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, as well as methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances enable more accurate, efficient, and interpretable AI systems, driving progress in fields including computer vision, natural language processing, and scientific modeling.
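The summary above describes neural networks as function approximators trained from data. A minimal sketch of that idea, using an illustrative one-hidden-layer NumPy network fit to sin(x) by full-batch gradient descent (the architecture, initialization, and hyperparameters here are illustrative choices, not taken from any of the listed papers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Training data: 200 samples of the target function sin(x) on [-pi, pi].
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

# Parameters of a 1 -> 16 -> 1 network with tanh activation.
W1 = rng.normal(0.0, 1.0, (1, 16))
b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1))
b2 = np.zeros(1)

lr = 0.05
n = x.shape[0]
for step in range(5000):
    # Forward pass.
    h = np.tanh(x @ W1 + b1)     # hidden activations
    pred = h @ W2 + b2           # network output
    err = pred - y
    loss = np.mean(err ** 2)     # mean squared error

    # Backward pass: hand-derived gradients of the MSE loss.
    g_pred = 2.0 * err / n
    g_W2 = h.T @ g_pred
    g_b2 = g_pred.sum(axis=0)
    g_h = g_pred @ W2.T
    g_z = g_h * (1.0 - h ** 2)   # tanh'(z) = 1 - tanh(z)^2
    g_W1 = x.T @ g_z
    g_b1 = g_z.sum(axis=0)

    # Gradient descent update.
    W1 -= lr * g_W1
    b1 -= lr * g_b1
    W2 -= lr * g_W2
    b2 -= lr * g_b2

print(f"final MSE: {loss:.4f}")
```

With these settings the network drives the mean squared error well below that of a constant predictor, illustrating the "learning a function from samples" framing used throughout the papers below.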
Papers
Predicting ptychography probe positions using single-shot phase retrieval neural network
Ming Du, Tao Zhou, Junjing Deng, Daniel J. Ching, Steven Henke, Mathew J. Cherukara
Solving partial differential equations with sampled neural networks
Chinmay Datar, Taniya Kapoor, Abhishek Chandra, Qing Sun, Iryna Burak, Erik Lien Bolager, Anna Veselovska, Massimo Fornasier, Felix Dietrich
Advancing Financial Risk Prediction Through Optimized LSTM Model Performance and Comparative Analysis
Ke Xu, Yu Cheng, Shiqing Long, Junjie Guo, Jue Xiao, Mengfang Sun
Optimizing cnn-Bigru performance: Mish activation and comparative analysis with Relu
Asmaa Benchama, Khalid Zebbara
Recurrent neural network wave functions for Rydberg atom arrays on kagome lattice
Mohamed Hibat-Allah, Ejaaz Merali, Giacomo Torlai, Roger G Melko, Juan Carrasquilla
Flexible SE(2) graph neural networks with applications to PDE surrogates
Maria Bånkestad, Olof Mogren, Aleksis Pirinen
Tropical Expressivity of Neural Networks
Paul Lezeau, Thomas Walker, Yueqi Cao, Shiv Bhatia, Anthea Monod
Symmetries in Overparametrized Neural Networks: A Mean-Field View
Javier Maass, Joaquin Fontbona
Semantic Landmark Detection & Classification Using Neural Networks For 3D In-Air Sonar
Wouter Jansen, Jan Steckel
Understanding and Minimising Outlier Features in Neural Network Training
Bobby He, Lorenzo Noci, Daniele Paliotta, Imanol Schlag, Thomas Hofmann
Convex neural network synthesis for robustness in the 1-norm
Ross Drummond, Chris Guiver, Matthew C. Turner
Few-Shot Testing: Estimating Uncertainty of Memristive Deep Neural Networks Using One Bayesian Test Vector
Soyed Tuhin Ahmed, Mehdi Tahoori
Learning Mixture-of-Experts for General-Purpose Black-Box Discrete Optimization
Shengcai Liu, Zhiyuan Wang, Yew-Soon Ong, Xin Yao, Ke Tang
Semiring Activation in Neural Networks
Bart M. N. Smets, Peter D. Donker, Jim W. Portegies, Remco Duits
Scalable Surrogate Verification of Image-based Neural Network Control Systems using Composition and Unrolling
Feiyang Cai, Chuchu Fan, Stanley Bak
SGD method for entropy error function with smoothing l0 regularization for neural networks
Trong-Tuan Nguyen, Van-Dat Thang, Nguyen Van Thin, Phuong T. Nguyen
Classifying Overlapping Gaussian Mixtures in High Dimensions: From Optimal Classifiers to Neural Nets
Khen Cohen, Noam Levi, Yaron Oz
Self-Supervised Dual Contouring
Ramana Sundararaman, Roman Klokov, Maks Ovsjanikov
$C^2M^3$: Cycle-Consistent Multi-Model Merging
Donato Crisostomi, Marco Fumero, Daniele Baieri, Florian Bernard, Emanuele Rodolà