Noise-Based

Noise, whether inherent in data or introduced by hardware, degrades the performance and reliability of machine learning models. Current research focuses on robust algorithms and architectures that mitigate its effects, including explainable regularization for analog neural networks, noise-aware federated learning strategies, and non-convex loss functions for gradient boosting. By addressing diverse noise sources and their interactions with data characteristics, these advances aim to improve model accuracy, generalization, and reproducibility across applications ranging from healthcare and finance to autonomous systems. The ultimate goal is to build more reliable and trustworthy machine learning systems.
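A common thread in these approaches is training the model under the same kind of perturbation it will face at deployment. As a minimal sketch (not any particular paper's method), the snippet below trains a linear model with Gaussian noise injected into the weights at every forward pass, a simple stand-in for the weight perturbations an analog accelerator might introduce; all names and the noise scale are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression task: y = X @ w_true + small label noise.
n, d = 200, 3
w_true = np.array([1.5, -2.0, 0.5])
X = rng.normal(size=(n, d))
y = X @ w_true + 0.01 * rng.normal(size=n)

def noise_aware_sgd(X, y, weight_noise_std=0.05, lr=0.05, epochs=200):
    """Gradient descent on squared error, but every gradient is
    evaluated at a noisy copy of the weights, so the solution found
    is robust to hardware-like weight perturbations (an illustrative
    sketch, not a specific published algorithm)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        # Sample a perturbed copy of the weights (simulated analog noise).
        w_noisy = w + weight_noise_std * rng.normal(size=w.shape)
        # Gradient of the mean squared error at the noisy weights.
        residual = X @ w_noisy - y
        grad = X.T @ residual / len(y)
        w -= lr * grad
    return w

w_hat = noise_aware_sgd(X, y)
print(np.round(w_hat, 2))  # close to w_true despite the injected noise
```

Because the injected noise is zero-mean, the expected gradient matches the clean objective, so training still converges near the true weights while the model implicitly averages over the perturbations it will see in hardware.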

Papers