Neural Network
Neural networks are computational models inspired by the structure and function of the brain, designed to approximate complex functions and solve diverse problems by learning from data. Current research emphasizes improving efficiency and robustness: exploring novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are driving progress in computer vision, natural language processing, and scientific modeling by enabling more accurate, efficient, and interpretable AI systems.
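As a rough illustration of the sinusoidal neural fields mentioned above, the sketch below builds a small coordinate-based MLP with sine activations (in the style of SIREN-like models). All function names, layer sizes, and the `omega` frequency scale here are illustrative assumptions, not from any specific paper listed on this page.

```python
import numpy as np

# Hypothetical sketch of a sinusoidal neural field: an MLP mapping input
# coordinates to values, with sine activations after each hidden layer.
# Sizes and the omega frequency scale are illustrative choices.

rng = np.random.default_rng(0)

def init_layer(fan_in, fan_out, omega=30.0, first=False):
    # SIREN-style uniform initialization; hidden-layer bounds shrink with omega
    # so that pre-activations stay in a well-behaved range for sin().
    bound = 1.0 / fan_in if first else np.sqrt(6.0 / fan_in) / omega
    W = rng.uniform(-bound, bound, size=(fan_in, fan_out))
    b = np.zeros(fan_out)
    return W, b

def sinusoidal_field(x, layers, omega=30.0):
    # x: (N, d) array of coordinates; sine activation on every hidden layer,
    # linear final layer.
    h = x
    for W, b in layers[:-1]:
        h = np.sin(omega * (h @ W + b))
    W, b = layers[-1]
    return h @ W + b

# A small field from 2-D coordinates to a scalar value.
layers = [init_layer(2, 64, first=True), init_layer(64, 64), init_layer(64, 1)]
coords = rng.uniform(-1.0, 1.0, size=(5, 2))
values = sinusoidal_field(coords, layers)
print(values.shape)  # (5, 1)
```

The appeal of sine activations in such fields is that they can represent high-frequency detail in signals (images, shapes, physical fields) that standard ReLU MLPs fit poorly.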
Papers
Bayesian optimized deep ensemble for uncertainty quantification of deep neural networks: a system safety case study on sodium fast reactor thermal stratification modeling
Zaid Abulawi, Rui Hu, Prasanna Balaprakash, Yang Liu
Multi-perspective Alignment for Increasing Naturalness in Neural Machine Translation
Huiyuan Lai, Esther Ploeger, Rik van Noord, Antonio Toral
Evaluating Different Fault Injection Abstractions on the Assessment of DNN SW Hardening Strategies
Giuseppe Esposito, Juan David Guerrero-Balaguera, Josie Esteban Rodriguez Condia, Matteo Sonza Reorda
Edge-Splitting MLP: Node Classification on Homophilic and Heterophilic Graphs without Message Passing
Matthias Kohn, Marcel Hoffmann, Ansgar Scherp
Towards Precision in Bolted Joint Design: A Preliminary Machine Learning-Based Parameter Prediction
Ines Boujnah, Nehal Afifi, Andreas Wettstein, Sven Matthiesen
Neural Observation Field Guided Hybrid Optimization of Camera Placement
Yihan Cao, Jiazhao Zhang, Zhinan Yu, Kai Xu
GLL: A Differentiable Graph Learning Layer for Neural Networks
Jason Brown, Bohan Chen, Harris Hardiman-Mostow, Jeff Calder, Andrea L. Bertozzi
Understanding Gradient Descent through the Training Jacobian
Nora Belrose, Adam Scherlis
On How Iterative Magnitude Pruning Discovers Local Receptive Fields in Fully Connected Neural Networks
William T. Redman, Zhangyang Wang, Alessandro Ingrosso, Sebastian Goldt
Generative Densification: Learning to Densify Gaussians for High-Fidelity Generalizable 3D Reconstruction
Seungtae Nam, Xiangyu Sun, Gyeongjin Kang, Younggeun Lee, Seungjun Oh, Eunbyung Park
Adversarial Transferability in Deep Denoising Models: Theoretical Insights and Robustness Enhancement via Out-of-Distribution Typical Set Sampling
Jie Ning, Jiebao Sun, Shengzhu Shi, Zhichang Guo, Yao Li, Hongwei Li, Boying Wu
Accurate Multi-Category Student Performance Forecasting at Early Stages of Online Education Using Neural Networks
Naveed Ur Rehman Junejo, Muhammad Wasim Nawaz, Qingsheng Huang, Xiaoqing Dong, Chang Wang, Gengzhong Zheng
Depression Detection from Social Media Bangla Text Using Recurrent Neural Networks
Sultan Ahmed, Salman Rakin, Mohammad Washeef Ibn Waliur, Nuzhat Binte Islam, Billal Hossain, Md. Mostofa Akbar
Partition of Unity Physics-Informed Neural Networks (POU-PINNs): An Unsupervised Framework for Physics-Informed Domain Decomposition and Mixtures of Experts
Arturo Rodriguez, Ashesh Chattopadhyay, Piyush Kumar, Luis F. Rodriguez, Vinod Kumar
Training neural networks without backpropagation using particles
Deepak Kumar
Neighborhood Commonality-aware Evolution Network for Continuous Generalized Category Discovery
Ye Wang, Yaxiong Wang, Guoshuai Zhao, Xueming Qian