Neural Network
Neural networks are computational models inspired by the structure and function of the brain that learn to approximate complex functions from data, making them applicable to a wide range of problems. Current research emphasizes improving efficiency and robustness, exploring novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are driving progress in computer vision, natural language processing, and scientific modeling by enabling more accurate, efficient, and interpretable AI systems.
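To make the core idea concrete, here is a minimal sketch (not drawn from any of the papers below) of a neural network learning to approximate a function from data: a single-hidden-layer network trained with plain gradient descent in NumPy to fit sin(πx). All names, sizes, and the learning rate are illustrative choices.

```python
import numpy as np

# Toy dataset: learn y = sin(pi * x) on [-1, 1] from samples.
rng = np.random.default_rng(0)
X = np.linspace(-1.0, 1.0, 64).reshape(-1, 1)
y = np.sin(np.pi * X)

# One hidden layer with tanh activation, small random initialization.
hidden = 16
W1 = rng.normal(0.0, 0.5, (1, hidden))
b1 = np.zeros(hidden)
W2 = rng.normal(0.0, 0.5, (hidden, 1))
b2 = np.zeros(1)

lr = 0.1  # learning-rate, chosen for illustration
for step in range(3000):
    # Forward pass.
    h = np.tanh(X @ W1 + b1)
    y_hat = h @ W2 + b2
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: gradients of the mean-squared error.
    d = 2.0 * (y_hat - y) / len(X)
    gW2 = h.T @ d
    gb2 = d.sum(axis=0)
    dz = (d @ W2.T) * (1.0 - h ** 2)  # tanh' = 1 - tanh^2
    gW1 = X.T @ dz
    gb1 = dz.sum(axis=0)

    # Gradient-descent update.
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

final_loss = float(np.mean(((np.tanh(X @ W1 + b1) @ W2 + b2) - y) ** 2))
print(f"final MSE: {final_loss:.4f}")
```

The network starts from random weights and, by repeatedly adjusting them against the prediction error, ends up closely tracking the target function — the "learning from data" that the research above builds on at far larger scale.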
Papers
FLOL: Fast Baselines for Real-World Low-Light Enhancement
Juan C. Benito, Daniel Feijoo, Alvaro Garcia, Marcos V. Conde
Reducing the Sensitivity of Neural Physics Simulators to Mesh Topology via Pretraining
Nathan Vaska, Justin Goodwin, Robin Walters, Rajmonda S. Caceres
PISCO: Self-Supervised k-Space Regularization for Improved Neural Implicit k-Space Representations of Dynamic MRI
Veronika Spieker, Hannah Eichhorn, Wenqi Huang, Jonathan K. Stelter, Tabita Catalan, Rickmer F. Braren, Daniel Rueckert, Francisco Sahli Costabal, Kerstin Hammernik, Dimitrios C. Karampinos, Claudia Prieto, Julia A. Schnabel
Mono-Forward: Backpropagation-Free Algorithm for Efficient Neural Network Training Harnessing Local Errors
James Gong, Bruce Li, Waleed Abdulla
Physics-informed neural networks for phase-resolved data assimilation and prediction of nonlinear ocean waves
Svenja Ehlers, Norbert Hoffmann, Tianning Tang, Adrian H. Callaghan, Rui Cao, Enrique M. Padilla, Yuxin Fang, Merten Stender
Decoding Interpretable Logic Rules from Neural Networks
Chuqin Geng, Xiaojie Xu, Zhaoyue Wang, Ziyu Zhao, Xujie Si
Globally Convergent Variational Inference
Declan McNamara, Jackson Loper, Jeffrey Regier
Training Hybrid Neural Networks with Multimode Optical Nonlinearities Using Digital Twins
Ilker Oguz, Louis J. E. Suter, Jih-Liang Hsieh, Mustafa Yildirim, Niyazi Ulas Dinc, Christophe Moser, Demetri Psaltis
Spiking Neural Network Accelerator Architecture for Differential-Time Representation using Learned Encoding
Daniel Windhager, Lothar Ratschbacher, Bernhard A. Moser, Michael Lunglmayr
Conformal mapping Coordinates Physics-Informed Neural Networks (CoCo-PINNs): learning neural networks for designing neutral inclusions
Daehee Cho, Hyeonmin Yun, Jaeyong Lee, Mikyoung Lim
Universal Training of Neural Networks to Achieve Bayes Optimal Classification Accuracy
Mohammadreza Tavasoli Naeini, Ali Bereyhi, Morteza Noshad, Ben Liang, Alfred O. Hero III
LLM360 K2: Scaling Up 360-Open-Source Large Language Models
Zhengzhong Liu, Bowen Tan, Hongyi Wang, Willie Neiswanger, Tianhua Tao, Haonan Li, Fajri Koto, Yuqi Wang, Suqi Sun, Omkar Pangarkar, Richard Fan, Yi Gu, Victor Miller, Liqun Ma, Liping Tang, Nikhil Ranjan, Yonghao Zhuang, Guowei He, Renxi Wang, Mingkai Deng, Robin Algayres, Yuanzhi Li, Zhiqiang Shen, Preslav Nakov, Eric Xing
Improving the adaptive and continuous learning capabilities of artificial neural networks: Lessons from multi-neuromodulatory dynamics
Jie Mei, Alejandro Rodriguez-Garcia, Daigo Takeuchi, Gabriel Wainstein, Nina Hubig, Yalda Mohsenzadeh, Srikanth Ramaswamy
Application of Vision-Language Model to Pedestrians Behavior and Scene Understanding in Autonomous Driving
Haoxiang Gao, Yu Zhao
Kolmogorov-Arnold networks for metal surface defect classification
Maciej Krzywda, Mariusz Wermiński, Szymon Łukasik, Amir H. Gandomi
Tensorization of neural networks for improved privacy and interpretability
José Ramón Pareja Monturiol, Alejandro Pozas-Kerstjens, David Pérez-García
LensNet: Enhancing Real-time Microlensing Event Discovery with Recurrent Neural Networks in the Korea Microlensing Telescope Network
Javier Viaña, Kyu-Ha Hwang, Zoë de Beurs, Jennifer C. Yee, Andrew Vanderburg, Michael D. Albrow, Sun-Ju Chung, Andrew Gould, Cheongho Han, Youn Kil Jung, Yoon-Hyun Ryu, In-Gu Shin, Yossi Shvartzvald, Hongjing Yang, Weicheng Zang, Sang-Mok Cha, Dong-Jin Kim, Seung-Lee Kim, Chung-Uk Lee, Dong-Joo Lee, Yongseok Lee, Byeong-Gon Park, Richard W. Pogge
Emergent Symbol-like Number Variables in Artificial Neural Networks
Satchel Grant, Noah D. Goodman, James L. McClelland