Noise Robustness
Noise robustness in machine learning is the study of models and algorithms that maintain accuracy despite noisy or corrupted data, a crucial requirement for real-world deployment. Current research emphasizes techniques such as explainable regularization, weak formulations of learning problems, and adaptive adversarial training, often applied within neural architectures such as transformers and convolutional networks. These advances improve the reliability and generalizability of models across domains where noisy data is prevalent, including speech recognition and image processing. The ultimate goal is resilient, dependable AI systems that function effectively in complex, unpredictable environments.
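A minimal sketch of one family of techniques mentioned above — training-time noise injection, a simple form of robustness-oriented data augmentation. This is an illustrative toy (a pure-Python logistic regression; the function names and hyperparameters are invented for this example), not an implementation of any specific paper's method such as adaptive adversarial training:

```python
import math
import random

def train_noise_robust(data, labels, sigma=0.3, epochs=200, lr=0.1):
    """Train a logistic-regression classifier with Gaussian input-noise
    injection: each training example is perturbed before every gradient
    step, encouraging a decision boundary that tolerates corrupted inputs.
    (Hypothetical example; sigma controls the noise level.)"""
    w = [0.0] * len(data[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(data, labels):
            # Perturb each feature with zero-mean Gaussian noise.
            xn = [xi + random.gauss(0.0, sigma) for xi in x]
            # Forward pass: sigmoid of the linear score.
            z = sum(wi * xi for wi, xi in zip(w, xn)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            # Gradient of the log-loss with respect to the score.
            g = p - y
            w = [wi - lr * g * xi for wi, xi in zip(w, xn)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Classify x with the learned linear boundary."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if z > 0 else 0
```

Because the model never sees the same clean point twice, it cannot overfit to exact feature values; the same intuition, at much larger scale, underlies the augmentation and adversarial-training strategies surveyed in the papers below.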
Papers
Enhancing Blood Flow Assessment in Diffuse Correlation Spectroscopy: A Transfer Learning Approach with Noise Robustness Analysis
Xi Chen, Xingda Li
Noise-robust zero-shot text-to-speech synthesis conditioned on self-supervised speech-representation model with adapters
Kenichi Fujita, Hiroshi Sato, Takanori Ashihara, Hiroki Kanagawa, Marc Delcroix, Takafumi Moriya, Yusuke Ijima