Paper ID: 2111.06206

Defining and Quantifying the Emergence of Sparse Concepts in DNNs

Jie Ren, Mingjie Li, Qirui Chen, Huiqi Deng, Quanshi Zhang

This paper aims to illustrate the concept-emergence phenomenon in a trained DNN. Specifically, we find that the inference score of a DNN can be disentangled into the effects of a small number of interactive concepts. These concepts can be understood as causal patterns in a sparse, symbolic causal graph, which explains the DNN. The faithfulness of using such a causal graph to explain the DNN is theoretically guaranteed, because we prove that the causal graph accurately mimics the DNN's outputs on an exponential number of different masked samples. Moreover, such a causal graph can be further simplified and rewritten as an And-Or graph (AOG) with little loss of explanation accuracy.
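The abstract's faithfulness claim, that a sparse set of interaction effects can reproduce the DNN's output on every masked sample, can be illustrated with a small sketch. The snippet below is not the authors' released code; it assumes the interaction effects take the standard Harsanyi-dividend form (an assumption, since the abstract does not state the definition), uses a toy set function in place of a real DNN, and checks the reconstruction identity on all 2^n masks.

```python
# Hypothetical sketch (not the authors' code): compute Harsanyi-style
# interaction effects I(S) for a toy model v over a few input variables,
# then verify that the output on every masked sample T is recovered as the
# sum of effects of the concepts S contained in T.
from itertools import chain, combinations

def powerset(items):
    """All subsets of `items`, from the empty set up to the full set."""
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def harsanyi_interactions(v, variables):
    """I(S) = sum over T subset of S of (-1)^(|S|-|T|) * v(T)."""
    return {
        frozenset(S): sum((-1) ** (len(S) - len(T)) * v(T) for T in powerset(S))
        for S in powerset(variables)
    }

# Toy "model": any set function over the input variables stands in for the
# DNN output on a sample whose variables outside T are masked.
variables = ("x1", "x2", "x3")
def v(T):
    T = frozenset(T)
    return 2.0 * ("x1" in T) + 1.5 * ("x1" in T and "x2" in T) - 0.5 * ("x3" in T)

I = harsanyi_interactions(v, variables)

# Faithfulness check: for every masked sample T, v(T) equals the sum of
# interaction effects of all concepts S contained in T.
for T in map(frozenset, powerset(variables)):
    reconstructed = sum(effect for S, effect in I.items() if S <= T)
    assert abs(reconstructed - v(T)) < 1e-9
print("v(T) is recovered from the interaction effects on all 2^n masks.")
```

In this toy example only three of the 2^3 possible concepts carry nonzero effects, which mirrors the sparsity the abstract describes; the paper's And-Or graph simplification is not shown here.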

Submitted: Nov 11, 2021