End-to-End Training
End-to-end training optimizes all components of a machine learning pipeline jointly under a single objective, rather than training each module separately, with the aim of improving performance and efficiency over modular approaches. Current research focuses on addressing limitations such as high computational cost and memory usage through techniques like parameter reduction, new training architectures (e.g., attention mechanisms, iterative neural networks), and improved data preprocessing. The approach is influencing diverse fields, from autonomous driving and semantic communication to medical image analysis and radar signal processing, by enabling more capable and efficient AI systems.
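A minimal sketch of the idea, assuming PyTorch and toy module names and shapes chosen purely for illustration: a single task loss is backpropagated through every stage of the pipeline at once, so the feature extractor and the classifier are optimized jointly instead of in separate stages.

```python
import torch
import torch.nn as nn

# Two stages of a toy pipeline (names and sizes are illustrative assumptions).
feature_extractor = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
classifier = nn.Linear(64, 10)

# One optimizer over the parameters of *all* stages -> joint, end-to-end optimization.
optimizer = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(classifier.parameters()), lr=1e-3
)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 32)          # dummy input batch
y = torch.randint(0, 10, (8,))  # dummy labels

for _ in range(100):
    logits = classifier(feature_extractor(x))  # forward pass through the whole pipeline
    loss = loss_fn(logits, y)                  # single task objective
    optimizer.zero_grad()
    loss.backward()                            # gradients flow through both modules
    optimizer.step()
```

In a modular setup, by contrast, the feature extractor would be trained (or hand-designed) first and frozen, and only the classifier would be fit to the task loss.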
Papers
Automatic evaluation of herding behavior in towed fishing gear using end-to-end training of CNN and attention-based networks
Orri Steinn Guðfinnsson, Týr Vilhjálmsson, Martin Eineborg, Torfi Thorhallsson
LEAPT: Learning Adaptive Prefix-to-prefix Translation For Simultaneous Machine Translation
Lei Lin, Shuangtao Li, Xiaodong Shi