Neural Network
Neural networks are computational models, loosely inspired by the structure and function of the brain, that learn to approximate complex functions from data. Current research emphasizes improving efficiency and robustness: exploring novel architectures such as sinusoidal neural fields and hybrids that combine neural networks with radial basis functions, and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances enable more accurate, efficient, and interpretable AI systems, driving progress in computer vision, natural language processing, and scientific modeling.
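To make the function-approximation view concrete, here is a minimal sketch, not drawn from any of the papers below: a one-hidden-layer tanh network trained with full-batch gradient descent to fit noisy samples of sin(x). The choice of NumPy, the sin(x) target, the layer width, and the learning rate are all illustrative assumptions.

```python
# Minimal sketch (assumptions: plain NumPy, one tanh hidden layer,
# full-batch gradient descent) of a neural network learning to
# approximate a function -- here sin(x) -- from sampled data.
import numpy as np

rng = np.random.default_rng(0)

# Training data: inputs in [-pi, pi], noisy targets from the true function.
X = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(X) + 0.05 * rng.normal(size=X.shape)

# One hidden layer with tanh activation; small random initial weights.
hidden = 32
W1 = rng.normal(scale=0.5, size=(1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.5, size=(hidden, 1))
b2 = np.zeros(1)

lr = 0.05
for step in range(5000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)           # (256, hidden)
    pred = h @ W2 + b2                 # (256, 1)

    # Mean-squared-error loss and its gradient w.r.t. the prediction.
    err = pred - y
    loss = np.mean(err ** 2)

    # Backward pass (chain rule through both layers).
    grad_pred = 2 * err / len(X)
    grad_W2 = h.T @ grad_pred
    grad_b2 = grad_pred.sum(axis=0)
    grad_h = grad_pred @ W2.T
    grad_pre = grad_h * (1 - h ** 2)   # derivative of tanh
    grad_W1 = X.T @ grad_pre
    grad_b1 = grad_pre.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

    if step % 1000 == 0:
        print(f"step {step}: loss {loss:.4f}")
```

Running the script prints a loss that shrinks toward the noise floor, illustrating "learning from data" in its simplest form; the papers listed below study richer architectures, training rules, and analyses built on this same foundation.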
Papers
Guaranteeing Conservation Laws with Projection in Physics-Informed Neural Networks
Anthony Baez, Wang Zhang, Ziwen Ma, Subhro Das, Lam M. Nguyen, Luca Daniel
Neuroevolution Neural Architecture Search for Evolving RNNs in Stock Return Prediction and Portfolio Trading
Zimeng Lyu, Amulya Saxena, Rohaan Nadeem, Hao Zhang, Travis Desell
Dynamic User Grouping based on Location and Heading in 5G NR Systems
Dino Pjanić, Korkut Emre Arslantürk, Xuesong Cai, Fredrik Tufvesson
Rethinking generalization of classifiers in separable classes scenarios and over-parameterized regimes
Julius Martinetz, Christoph Linse, Thomas Martinetz
LLM-Assisted Red Teaming of Diffusion Models through "Failures Are Fated, But Can Be Faded"
Som Sagar, Aditya Taparia, Ransalu Senanayake
Gradient-Free Supervised Learning using Spike-Timing-Dependent Plasticity for Image Recognition
Wei Xie
Efficient Neural Network Training via Subset Pretraining
Jan Spörer, Bernhard Bermeitinger, Tomas Hrycej, Niklas Limacher, Siegfried Handschuh
Identifying Sub-networks in Neural Networks via Functionally Similar Representations
Tian Gao, Amit Dhurandhar, Karthikeyan Natesan Ramamurthy, Dennis Wei
Fair Bilevel Neural Network (FairBiNN): On Balancing fairness and accuracy via Stackelberg Equilibrium
Mehdi Yazdani-Jahromi, Ali Khodabandeh Yalabadi, AmirArsalan Rajabi, Aida Tayebi, Ivan Garibay, Ozlem Ozmen Garibay
Theoretical Limitations of Ensembles in the Age of Overparameterization
Niclas Dern, John P. Cunningham, Geoff Pleiss
Learning How to Vote With Principles: Axiomatic Insights Into the Collective Decisions of Neural Networks
Levin Hornischer, Zoi Terzopoulou
Metric as Transform: Exploring beyond Affine Transform for Interpretable Neural Network
Suman Sapkota
Small Contributions, Small Networks: Efficient Neural Network Pruning Based on Relative Importance
Mostafa Hussien, Mahmoud Afifi, Kim Khoa Nguyen, Mohamed Cheriet
Increasing Interpretability of Neural Networks By Approximating Human Visual Saliency
Aidan Boyd, Mohamed Trabelsi, Huseyin Uzunalioglu, Dan Kushnir
Karush-Kuhn-Tucker Condition-Trained Neural Networks (KKT Nets)
Shreya Arvind, Rishabh Pomaje, Rajshekhar V Bhat
On The Global Convergence Of Online RLHF With Neural Parametrization
Mudit Gaur, Amrit Singh Bedi, Raghu Pasupathy, Vaneet Aggarwal
All You Need is an Improving Column: Enhancing Column Generation for Parallel Machine Scheduling via Transformers
Amira Hijazi, Osman Ozaltin, Reha Uzsoy
Integrating Symbolic Neural Networks with Building Physics: A Study and Proposal
Xia Chen, Guoquan Lv, Xinwei Zhuang, Carlos Duarte, Stefano Schiavon, Philipp Geyer
SNAP: Stopping Catastrophic Forgetting in Hebbian Learning with Sigmoidal Neuronal Adaptive Plasticity
Tianyi Xu, Patrick Zheng, Shiyan Liu, Sicheng Lyu, Isabeau Prémont-Schwarz
Fractional-order spike-timing-dependent gradient descent for multi-layer spiking neural networks
Yi Yang, Richard M. Voyles, Haiyan H. Zhang, Robert A. Nawrocki