Differential Testing
Differential testing is a software testing technique that compares the outputs of multiple implementations of the same functionality on identical inputs, treating any disagreement as a potential bug. It is increasingly used to ensure the reliability of machine learning (ML) systems, particularly deep learning libraries and federated learning frameworks. Current research focuses on leveraging large language models (LLMs) to automatically generate diverse test inputs and test oracles, improving the efficiency and effectiveness of the approach across applications ranging from medical rule engines to automated speech recognition. The methodology is valuable for uncovering subtle bugs and vulnerabilities in complex ML systems, strengthening their robustness and trustworthiness in critical applications.
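For illustration, the sketch below shows the core idea in miniature: feed the same randomly generated inputs to two implementations of one function and flag any outputs that disagree beyond a tolerance. The function names and the choice of a simple numeric routine are hypothetical, standing in for the deep learning kernels or speech recognizers that the surveyed work tests at much larger scale.

```python
import random


def reference_mean(xs):
    # Straightforward reference implementation (hypothetical example).
    return sum(xs) / len(xs)


def optimized_mean(xs):
    # Alternative implementation under test (hypothetical example);
    # uses Kahan-style compensated summation, so results may differ
    # slightly from the naive version in floating point.
    total = 0.0
    compensation = 0.0
    for x in xs:
        y = x - compensation
        t = total + y
        compensation = (t - total) - y
        total = t
    return total / len(xs)


def differential_test(trials=1000, tolerance=1e-9):
    # Generate random inputs, run both implementations, and report any
    # divergence beyond the tolerance as a potential bug.
    for i in range(trials):
        xs = [random.uniform(-1e6, 1e6) for _ in range(random.randint(1, 100))]
        a, b = reference_mean(xs), optimized_mean(xs)
        if abs(a - b) > tolerance * max(1.0, abs(a)):
            print(f"Mismatch on trial {i}: {a!r} vs {b!r} for input of length {len(xs)}")


if __name__ == "__main__":
    differential_test()
```

The LLM-based approaches mentioned above replace the random input generator with model-generated test cases and, in some settings, use the model to supply the expected output when no second implementation is available.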