Paper ID: 2211.14500

Deep neuroevolution to predict primary brain tumor grade from functional MRI adjacency matrices

Joseph Stember, Mehrnaz Jenabi, Luca Pasquini, Kyung Peck, Andrei Holodny, Hrithwik Shalu

Whereas MRI produces anatomic information about the brain, functional MRI (fMRI) tells us about neural activity within the brain, including how various regions communicate with each other. The full chorus of conversations within the brain is summarized elegantly in the adjacency matrix. Although information-rich, adjacency matrices typically provide little in the way of intuition. Whereas trained radiologists viewing anatomic MRI can readily distinguish between different kinds of brain cancer, a similar determination using adjacency matrices would exceed any expert's grasp. Artificial intelligence (AI) in radiology usually analyzes anatomic imaging, providing assistance to radiologists. For non-intuitive data types such as adjacency matrices, AI moves beyond the role of helpful assistant, emerging as indispensable. We sought here to show that AI can learn to distinguish between two important brain tumor types, high-grade glioma (HGG) and low-grade glioma (LGG), based on adjacency matrices. We trained a convolutional neural network (CNN) with the method of deep neuroevolution (DNE), because of the latter's recent promising results; DNE has produced remarkably accurate CNNs even when relying on small and noisy training sets, or performing nuanced tasks. After training on just 30 adjacency matrices, our CNN could tell HGG apart from LGG with perfect testing set accuracy. Saliency maps revealed that the network learned highly sophisticated and complex features to achieve its success. Hence, we have shown that it is possible for AI to recognize brain tumor type from functional connectivity. In future work, we will apply DNE to other noisy and somewhat cryptic forms of medical data, including further explorations with fMRI.
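The abstract does not specify the training procedure, so the following is only a minimal sketch, in Python/PyTorch, of gradient-free deep neuroevolution in the spirit described: a small CNN whose flattened weight vector is mutated with Gaussian noise and selected on training-set fitness. The matrix size, hyperparameters (`N_REGIONS`, `POP_SIZE`, `SIGMA`, etc.), network architecture, and synthetic stand-in data are all assumptions for illustration, not the paper's actual configuration.

```python
import torch
import torch.nn as nn
from torch.nn.utils import parameters_to_vector, vector_to_parameters

# Assumed dimensions: one N_REGIONS x N_REGIONS functional connectivity
# matrix per subject; the paper's exact parcellation is not given here.
N_REGIONS = 90
N_TRAIN = 30          # the abstract reports training on 30 matrices
POP_SIZE = 50         # assumed evolutionary hyperparameters
N_GENERATIONS = 100
SIGMA = 0.02          # scale of Gaussian weight mutations

class GliomaCNN(nn.Module):
    """Small CNN mapping an adjacency matrix to HGG-vs-LGG logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 2),
        )

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def fitness(model, flat_weights, x, y):
    """Load a candidate weight vector into the model and score accuracy."""
    vector_to_parameters(flat_weights, model.parameters())
    preds = model(x).argmax(dim=1)
    return (preds == y).float().mean().item()

# Synthetic stand-in data: replace with real fMRI adjacency matrices
# and their corresponding HGG/LGG grade labels.
x = torch.randn(N_TRAIN, 1, N_REGIONS, N_REGIONS)
y = torch.randint(0, 2, (N_TRAIN,))

model = GliomaCNN()
parent = parameters_to_vector(model.parameters()).detach()
best_fit = fitness(model, parent, x, y)

# Simple (1 + POP_SIZE) evolution: mutate the parent weights,
# evaluate each child, and keep the best candidate seen so far.
for gen in range(N_GENERATIONS):
    for _ in range(POP_SIZE):
        child = parent + SIGMA * torch.randn_like(parent)
        f = fitness(model, child, x, y)
        if f > best_fit:
            best_fit, parent = f, child
    if best_fit == 1.0:
        break

print(f"best training accuracy: {best_fit:.2f}")
```

The key design point is that selection and mutation replace backpropagation entirely, which is one reason neuroevolution can remain stable on very small, noisy training sets such as the 30 matrices reported here.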

Submitted: Nov 26, 2022