Plain Neural Networks

Plain neural networks, which lack the shortcut connections of architectures such as ResNets, are attracting renewed research interest. Current efforts focus on understanding and mitigating the "dissipating inputs" phenomenon, in which the input signal progressively fades as it propagates through many layers without shortcuts, hindering the training of deep plain architectures, and on novel training strategies and architectural modifications that improve scalability and accuracy. This research aims to retain the advantages of simpler networks, such as reduced computational cost and improved parameter efficiency, while matching the performance of more complex models in applications including image anomaly detection, image super-resolution, and 3D object tracking. Success in these efforts could significantly improve the efficiency and accessibility of deep learning across diverse fields.
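As a minimal illustration of the architectural difference at issue, the sketch below (PyTorch; class names and layer choices are hypothetical, not taken from any particular paper) contrasts a plain block, whose output is f(x) alone, with a residual block that adds the identity shortcut x back onto f(x):

```python
import torch
import torch.nn as nn


class PlainBlock(nn.Module):
    """Two conv layers with no shortcut: output = act(f(x))."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No identity path: the input must survive every layer on its own,
        # which is where deep plain stacks tend to lose signal.
        return self.act(self.body(x))


class ResidualBlock(PlainBlock):
    """Same layers, but with the ResNet shortcut: output = act(f(x) + x)."""

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The identity shortcut carries the input past the conv stack.
        return self.act(self.body(x) + x)


if __name__ == "__main__":
    x = torch.randn(1, 16, 8, 8)
    print(PlainBlock(16)(x).shape)     # torch.Size([1, 16, 8, 8])
    print(ResidualBlock(16)(x).shape)  # torch.Size([1, 16, 8, 8])
```

The only difference between the two classes is the `+ x` term in `forward`; plain-network research asks how to recover trainability at depth without it.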

Papers