Interpretable Fault Diagnosis
Interpretable fault diagnosis aims to identify and explain the causes of malfunctions in complex systems, moving beyond "black box" machine learning models to provide human-understandable insights. Current research emphasizes developing novel algorithms, such as those based on decision rule sets, integrated large language models, and attention mechanisms, to improve both the accuracy and explainability of fault detection and classification, particularly in scenarios with imbalanced or limited data. This work is crucial for enhancing the reliability and safety of critical systems across various domains, from industrial processes to energy grids, by enabling timely intervention and informed decision-making.
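To make the decision-rule-set idea concrete, here is a minimal sketch of an interpretable rule-based fault classifier. The sensor names, thresholds, and fault labels are hypothetical illustrations, not taken from any particular paper; the point is that each diagnosis can be explained by reporting the human-readable rule that fired.

```python
def diagnose(sample):
    """Return (fault_label, explanation) for a dict of sensor readings.

    Hypothetical example: feature names, thresholds, and labels are
    illustrative only. Rules are checked in order; the first match wins,
    and its condition string serves as the explanation.
    """
    rules = [
        # (condition, fault label, readable explanation)
        (lambda s: s["temperature"] > 90 and s["vibration"] > 0.8,
         "bearing_fault", "temperature > 90 AND vibration > 0.8"),
        (lambda s: s["pressure"] < 1.0,
         "leak", "pressure < 1.0"),
    ]
    for condition, label, explanation in rules:
        if condition(sample):
            return label, explanation
    return "normal", "no rule fired"


print(diagnose({"temperature": 95, "vibration": 0.9, "pressure": 2.0}))
```

Unlike a black-box classifier, the output here is accompanied by the exact condition that triggered it, which is what enables the timely, informed intervention described above.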