Machine Learning Components
Machine learning (ML) components are increasingly integrated into complex systems, which demands rigorous assurance of their safety and reliability. Current research focuses on methods for monitoring ML component performance at runtime, particularly when ground truth is unavailable, and on modular, hierarchical architectures that improve adaptability and maintainability. This work addresses challenges in testing, verifying, and assuring the quality of ML components, aiming to bridge the gap between data-science practice and established software engineering principles for safety-critical applications. The ultimate goal is to establish robust frameworks and tools for building trustworthy ML-enabled systems across domains.
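One common way to monitor an ML component at runtime without ground-truth labels is to track a proxy signal, such as the model's own prediction confidence, and flag drift relative to a reference distribution collected during validation. The sketch below illustrates that idea; all class and parameter names (`ConfidenceMonitor`, `z_threshold`, and so on) are illustrative assumptions, not an API from any specific tool mentioned in the text.

```python
# Hedged sketch: label-free runtime monitoring of an ML component via
# its confidence scores. A rolling window of recent confidences is
# compared against reference statistics from validation; a large
# z-score on the window mean flags possible drift.
from collections import deque
from statistics import mean, stdev


class ConfidenceMonitor:
    """Flags when recent prediction confidences drift from a reference."""

    def __init__(self, reference_confidences, window=100, z_threshold=3.0):
        # Reference statistics are assumed to come from a validation set
        # where the component was known to behave acceptably.
        self.ref_mean = mean(reference_confidences)
        self.ref_std = stdev(reference_confidences)
        self.window = deque(maxlen=window)
        self.z_threshold = z_threshold

    def observe(self, confidence):
        """Record one prediction's confidence; return True if drift is flagged."""
        self.window.append(confidence)
        if len(self.window) < self.window.maxlen:
            return False  # not enough evidence yet
        z = abs(mean(self.window) - self.ref_mean) / (self.ref_std or 1e-9)
        return z > self.z_threshold
```

In a deployed system the monitor would subscribe to the ML component's outputs and raise an alert (or trigger a fallback) when `observe` returns `True`; more elaborate schemes replace the z-score with distributional tests or dedicated out-of-distribution detectors, but the structure is the same.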