Normalization Dictionary
Normalization techniques, applied to data or model parameters, aim to improve the performance and stability of machine learning models by addressing issues such as scale discrepancies, feature imbalances, and overfitting. Current research focuses on adapting normalization methods to a range of applications, including natural language processing (e.g., text normalization for historical documents and multilingual ASR), computer vision (e.g., image harmonization and binarization), and reinforcement learning, often employing transformer-based models, GANs, and specialized normalization layers (e.g., layer normalization, spectral batch normalization). Effective normalization strategies are crucial for enhancing model generalization, mitigating bias, and improving efficiency across diverse machine learning tasks.
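To make one of the normalization layers mentioned above concrete, the sketch below implements plain layer normalization over the last feature dimension: each example is rescaled to zero mean and unit variance, then given a learnable scale and shift. This is a minimal illustrative example, not code from any of the listed papers; the function name `layer_norm` and the NumPy-based setup are assumptions made for demonstration.

```python
import numpy as np

def layer_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Layer normalization sketch: normalize each row of x to zero mean
    and unit variance over its features, then apply scale (gamma) and
    shift (beta). eps guards against division by zero."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# Example: activations for two samples with four features each.
# The second row has a large outlier, which layer normalization rescales.
activations = np.array([[2.0, 4.0, 6.0, 8.0],
                        [1.0, 1.0, 1.0, 100.0]])
print(layer_norm(activations))
```

In frameworks such as PyTorch or TensorFlow this per-feature normalization is provided as a built-in layer; the hand-rolled version here only shows the arithmetic that such layers perform.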
Papers
Vanishing Component Analysis with Contrastive Normalization
Ryosuke Masuya, Yuichi Ike, Hiroshi Kera
The Effect of Normalization for Bi-directional Amharic-English Neural Machine Translation
Tadesse Destaw Belay, Atnafu Lambebo Tonja, Olga Kolesnikova, Seid Muhie Yimam, Abinew Ali Ayele, Silesh Bogale Haile, Grigori Sidorov, Alexander Gelbukh