Neural Network
Neural networks are computational models, inspired by the structure and function of the brain, that approximate complex functions and solve diverse problems by learning from data. Current research emphasizes improving efficiency and robustness, exploring novel architectures such as sinusoidal neural fields and hybrid models that combine neural networks with radial basis functions, and developing methods for understanding and manipulating the internal representations these networks learn, for example through hyper-representations of network weights. These advances are driving progress across computer vision, natural language processing, and scientific modeling by enabling more accurate, efficient, and interpretable AI systems.
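The core idea above, that a network approximates a function by adjusting weights from data, can be sketched with a minimal example. The network below (one hidden layer of tanh units, trained by stochastic gradient descent to fit sin(x)) is an illustrative toy, not drawn from any of the listed papers; the hidden size, learning rate, and epoch count are arbitrary choices.

```python
import math
import random

random.seed(0)
H = 8        # hidden units (illustrative choice)
LR = 0.05    # learning rate (illustrative choice)

# One hidden layer: out = sum_j W2[j] * tanh(W1[j]*x + b1[j]) + b2
W1 = [random.uniform(-1, 1) for _ in range(H)]
b1 = [0.0] * H
W2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

# Training data: 32 points of the target function sin(x) on [-pi, pi]
xs = [i / 16 * math.pi - math.pi for i in range(32)]
ys = [math.sin(x) for x in xs]

def forward(x):
    """Return hidden activations and the network's prediction for x."""
    h = [math.tanh(W1[j] * x + b1[j]) for j in range(H)]
    out = sum(W2[j] * h[j] for j in range(H)) + b2
    return h, out

def mse():
    """Mean squared error over the training set."""
    return sum((forward(x)[1] - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

loss_before = mse()
for _ in range(500):                      # epochs of plain SGD
    for x, y in zip(xs, ys):
        h, out = forward(x)
        g = 2 * (out - y)                 # dLoss/dout for squared error
        for j in range(H):
            gh = g * W2[j] * (1 - h[j] ** 2)  # backprop through tanh
            W2[j] -= LR * g * h[j]
            W1[j] -= LR * gh * x
            b1[j] -= LR * gh
        b2 -= LR * g
loss_after = mse()
```

After training, `loss_after` is well below `loss_before`: the weights have moved so the network's output tracks sin(x) on the sampled points, which is the "learning from data" the summary refers to.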
Papers
FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression
Towards Arbitrary QUBO Optimization: Analysis of Classical and Quantum-Activated Feedforward Neural Networks
Loss Landscape Characterization of Neural Networks without Over-Parametrization
Stable Diffusion with Continuous-time Neural Network
LPUF-AuthNet: A Lightweight PUF-Based IoT Authentication via Tandem Neural Networks and Split Learning
DiffGAN: A Test Generation Approach for Differential Testing of Deep Neural Networks
G-Designer: Architecting Multi-agent Communication Topologies via Graph Neural Networks
DORNet: A Degradation Oriented and Regularized Network for Blind Depth Super-Resolution
Are High-Degree Representations Really Unnecessary in Equivariant Graph Neural Networks?
Error Diffusion: Post Training Quantization with Block-Scaled Number Formats for Neural Networks