Model Quality
Model quality assessment is crucial for ensuring the reliability and effectiveness of machine learning systems across diverse applications. Current research emphasizes moving beyond simplistic metrics such as the F1 score, incorporating cost-sensitive evaluations and examining the dynamic behavior of model parameters during training, particularly within transformer architectures and federated learning frameworks. This focus on improved evaluation methodologies, including output distribution analysis and the integration of domain knowledge, aims to enhance model robustness, generalizability, and ultimately the trustworthiness of AI systems in fields ranging from cybersecurity and healthcare to revenue management.
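To illustrate how a cost-sensitive evaluation can rank models differently than F1, the sketch below compares the two on a toy binary-classification task where false negatives are assumed to be five times as costly as false positives. The cost values, example data, and `expected_cost` helper are illustrative assumptions, not drawn from any of the referenced papers; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, f1_score

def expected_cost(y_true, y_pred, cost_fp=1.0, cost_fn=5.0):
    """Average misclassification cost per example.

    Unlike F1, which weights false positives and false negatives
    symmetrically, this metric penalizes each error type according
    to application-specific costs (the values here are assumptions
    for illustration).
    """
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    return (cost_fp * fp + cost_fn * fn) / len(y_true)

# Toy labels: five positives followed by five negatives.
y_true  = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])
model_a = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])  # high precision, 2 false negatives
model_b = np.array([1, 1, 1, 1, 1, 1, 1, 1, 1, 0])  # high recall, 4 false positives

for name, y_pred in [("model_a", model_a), ("model_b", model_b)]:
    print(name,
          "F1 =", round(f1_score(y_true, y_pred), 3),
          "expected cost =", round(expected_cost(y_true, y_pred), 3))
```

Under these assumed costs, model_a scores higher on F1 (0.75 vs. roughly 0.71) yet incurs a higher expected cost (1.0 vs. 0.4), because F1 does not reflect that missed positives are far more expensive than false alarms in this setting.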