Noisy Embeddings
Noisy embeddings are data representations to which noise is intentionally added during model training or inference to improve robustness and generalization. Current research applies the technique across machine learning tasks, including speech recognition (via generative error correction and self-supervised learning) and language-model fine-tuning (via methods such as symmetric noise injection and knowledge distillation). These approaches aim to produce more resilient and accurate models, particularly in real-world scenarios with noisy or incomplete data, with impact in speech processing and natural language processing.
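As an illustration, below is a minimal PyTorch sketch of symmetric noise injection into token embeddings during fine-tuning, in the spirit of NEFTune-style methods: uniform noise in [-1, 1] is scaled and added only at training time. The class name NoisyEmbedding, the default alpha, and the alpha / sqrt(L * d) scaling rule are assumptions for illustration, not a reference implementation of any particular paper.

```python
import torch
import torch.nn as nn

class NoisyEmbedding(nn.Module):
    """Wraps an embedding layer and adds symmetric uniform noise
    during training only; inference is unchanged.

    Hypothetical sketch: the scaling alpha / sqrt(seq_len * dim)
    is one common choice, not the only one.
    """

    def __init__(self, embedding: nn.Embedding, alpha: float = 5.0):
        super().__init__()
        self.embedding = embedding
        self.alpha = alpha  # noise magnitude; a tunable hyperparameter

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        embeds = self.embedding(input_ids)  # (batch, seq_len, dim)
        if self.training:
            seq_len, dim = embeds.shape[-2], embeds.shape[-1]
            # Symmetric noise in [-1, 1], scaled so its expected norm
            # shrinks as sequence length and embedding dimension grow.
            scale = self.alpha / (seq_len * dim) ** 0.5
            noise = torch.empty_like(embeds).uniform_(-1.0, 1.0) * scale
            embeds = embeds + noise
        return embeds

# Usage: wrap a model's input embedding layer before fine-tuning.
base = nn.Embedding(num_embeddings=32000, embedding_dim=768)
noisy = NoisyEmbedding(base, alpha=5.0)
ids = torch.randint(0, 32000, (2, 16))
noisy.train()
out = noisy(ids)  # embeddings perturbed with noise during training
noisy.eval()
clean = noisy(ids)  # identical to the base embeddings at inference
```

Injecting noise only when `self.training` is set keeps evaluation deterministic while still regularizing the fine-tuning updates.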