Interpretable Control

Interpretable control focuses on developing control systems whose decision-making processes are transparent and understandable, addressing the "black box" problem inherent in many machine learning approaches. Current research emphasizes sparse, model-based reinforcement learning, differentiable decision trees, and techniques such as sparse identification of nonlinear dynamics (SINDy) to produce interpretable policies, often combined with methods for disentangling the latent spaces of generative models. This work is essential for building trust in AI systems, particularly in safety-critical settings, and for designing more robust and reliable controllers across fields such as energy management and engineering.
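
To make the SINDy idea concrete, the following is a minimal, self-contained sketch of its core step: regressing estimated time derivatives onto a library of candidate functions with sequentially thresholded least squares (STLSQ), which yields a sparse, human-readable model of the dynamics. The toy damped-oscillator system, the quadratic candidate library, and the threshold value are illustrative assumptions, not taken from any specific paper; practical work typically uses a dedicated library such as PySINDy rather than this hand-rolled NumPy version.

```python
import numpy as np

# Toy data: forward-Euler simulation of a damped oscillator
# x' = v, v' = -0.1*v - 2.0*x  (hypothetical system, for illustration only)
dt = 0.01
t = np.arange(0, 10, dt)
X = np.zeros((len(t), 2))
X[0] = [2.0, 0.0]
for k in range(len(t) - 1):
    x, v = X[k]
    X[k + 1] = [x + dt * v, v + dt * (-0.1 * v - 2.0 * x)]

# Estimate time derivatives with centered finite differences.
dX = np.gradient(X, dt, axis=0)

# Candidate function library Theta(X): constant, linear, and quadratic terms.
x, v = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones_like(x), x, v, x * x, x * v, v * v])
names = ["1", "x", "v", "x^2", "x*v", "v^2"]

def stlsq(Theta, dX, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: repeatedly solve the
    least-squares problem and zero out small coefficients, so only a
    few interpretable terms survive in each governing equation."""
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dX.shape[1]):
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dX[:, j], rcond=None)[0]
    return Xi

Xi = stlsq(Theta, dX)
for j, state in enumerate(["x'", "v'"]):
    terms = [f"{Xi[i, j]:+.3f}*{names[i]}" for i in range(len(names)) if Xi[i, j] != 0]
    print(state, "=", " ".join(terms))
```

Run on the toy data above, the recovered equations should closely match the true dynamics (x' ≈ v, v' ≈ -2.0*x - 0.1*v), and the sparse coefficient matrix is directly readable, which is what makes this class of methods attractive for interpretable control.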

Papers