Paper ID: 2303.06695

The tree reconstruction game: phylogenetic reconstruction using reinforcement learning

Dana Azouri, Oz Granit, Michael Alburquerque, Yishay Mansour, Tal Pupko, Itay Mayrose

We propose a reinforcement-learning algorithm to tackle the challenge of reconstructing phylogenetic trees. The search for the tree that best describes the data is algorithmically hard, so all current phylogeny-reconstruction algorithms rely on heuristics to make it feasible. In this study, we demonstrate that reinforcement learning can be used to learn an optimal search strategy, thus providing a novel paradigm for predicting the maximum-likelihood tree. Our proposed method does not require a likelihood calculation at every step, nor is it limited to greedy uphill moves in the likelihood space. We demonstrate the use of the developed deep Q-learning agent on a set of unseen empirical data, namely, on unseen environments defined by nucleotide alignments of up to 20 sequences. Our results show that the likelihood scores of the inferred phylogenies are similar to those obtained with widely used software. This establishes a proof of concept that it is beneficial to optimize a sequence of moves in the search space, rather than only the progress made in each single move, and suggests that a reinforcement-learning-based method is a promising direction for phylogenetic reconstruction.
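To make the framing concrete, below is a minimal, illustrative sketch of a Q-learning loop over tree-rearrangement actions. It is not the authors' deep Q-network: the state features, the `score_tree` and `apply_spr_move` helpers, and the per-step likelihood-change reward are all hypothetical stand-ins chosen so the example is self-contained and runnable, whereas the paper's agent learns from richer features and avoids per-step likelihood calls.

```python
# Illustrative sketch only (not the paper's implementation): tabular,
# epsilon-greedy Q-learning over a fixed set of tree-rearrangement actions.
# `score_tree` and `apply_spr_move` are hypothetical placeholders for a real
# phylogenetic log-likelihood routine and an SPR rearrangement, respectively.

import random
from collections import defaultdict

def score_tree(tree):
    # Placeholder for the log-likelihood of `tree` given the alignment.
    return -sum(tree)  # toy score so the sketch runs end to end

def apply_spr_move(tree, action):
    # Placeholder for applying the `action`-th SPR rearrangement to `tree`.
    new_tree = list(tree)
    new_tree[action % len(new_tree)] -= 1  # toy "rearrangement"
    return new_tree

def featurize(tree):
    # Placeholder state representation of the current tree.
    return tuple(tree)

def q_learning_tree_search(start_tree, n_actions=5, episodes=200,
                           horizon=10, alpha=0.1, gamma=0.9, epsilon=0.2):
    """Learn values for sequences of moves, rather than single greedy steps."""
    Q = defaultdict(float)
    for _ in range(episodes):
        tree = list(start_tree)
        for _ in range(horizon):
            state = featurize(tree)
            # Epsilon-greedy action selection over rearrangement moves.
            if random.random() < epsilon:
                action = random.randrange(n_actions)
            else:
                action = max(range(n_actions), key=lambda a: Q[(state, a)])
            next_tree = apply_spr_move(tree, action)
            # Reward here is the improvement in the placeholder score;
            # the reward design is an assumption made for this sketch.
            reward = score_tree(next_tree) - score_tree(tree)
            next_state = featurize(next_tree)
            best_next = max(Q[(next_state, a)] for a in range(n_actions))
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            tree = next_tree
    return Q

if __name__ == "__main__":
    Q = q_learning_tree_search(start_tree=[3, 1, 4, 1, 5])
    print(f"learned {len(Q)} state-action values")
```

The discounted update is what distinguishes this framing from a hill-climbing heuristic: the value of a move reflects the likelihood gains reachable several moves later, not only the immediate change.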

Submitted: Mar 12, 2023