Novel Identifiability Tests
Novel identifiability tests address the challenge of determining whether model parameters can be uniquely recovered from observed data, a problem that hinders reliable inference across many fields. Current research focuses on theoretical frameworks and algorithms for establishing identifiability in diverse model classes, including linear structural causal models, deep generative models, and differential-algebraic systems, often drawing on techniques from algebraic geometry, independent component analysis, and neural networks. These advances matter because they improve the reliability and interpretability of models in applications ranging from causal inference and parameter estimation in dynamical systems to graph embedding and personalized learner modeling. Ultimately, establishing identifiability strengthens the trustworthiness and practical utility of scientific models and machine learning algorithms.
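As an illustrative sketch only (not a method from the listed papers), one common practical check for local identifiability in a dynamical-system setting is to inspect the rank of the sensitivity matrix of the observed output with respect to the parameters: a rank deficiency signals that some parameter combinations cannot be distinguished from the data. The toy model, parameter names, and tolerances below are assumptions chosen for illustration.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical toy model: dx/dt = -k_e * x, observed as y(t) = (A * s) * x(t).
# A and s appear only as a product, so they are not separately identifiable from y.

def simulate(params, t_eval):
    A, s, k_e = params
    sol = solve_ivp(lambda t, x: -k_e * x, (0.0, t_eval[-1]), [1.0],
                    t_eval=t_eval, rtol=1e-8, atol=1e-10)
    return A * s * sol.y[0]

def sensitivity_matrix(params, t_eval, eps=1e-6):
    """Finite-difference Jacobian of the observed output w.r.t. the parameters."""
    base = simulate(params, t_eval)
    S = np.zeros((base.size, len(params)))
    for j, p in enumerate(params):
        bumped = list(params)
        h = eps * max(abs(p), 1.0)
        bumped[j] = p + h
        S[:, j] = (simulate(bumped, t_eval) - base) / h
    return S

if __name__ == "__main__":
    t = np.linspace(0.1, 5.0, 50)
    theta = [2.0, 0.5, 0.8]  # A, s, k_e (illustrative values)
    S = sensitivity_matrix(theta, t)
    rank = np.linalg.matrix_rank(S, tol=1e-6)
    print(f"sensitivity rank = {rank} of {len(theta)} parameters")
    # A rank below 3 flags that (A, s, k_e) are not locally identifiable:
    # only the product A*s and the rate k_e can be recovered from y(t).
```

This kind of rank test only establishes local (structural) identifiability at a given parameter point; global and practical identifiability generally require the more specialized tools discussed in the papers below.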
Papers
Sequentially learning the topological ordering of causal directed acyclic graphs with likelihood ratio scores
Gabriel Ruiz, Oscar Hernan Madrid Padilla, Qing Zhou
Systems Biology: Identifiability analysis and parameter identification via systems-biology informed neural networks
Mitchell Daneker, Zhen Zhang, George Em Karniadakis, Lu Lu