Paper ID: 2204.02484

From implicit learning to explicit representations

Naomi Chaix-Eichel, Snigdha Dagar, Quentin Lanneau, Karen Sobriel, Thomas Boraud, Frédéric Alexandre, Nicolas P. Rougier

Using the reservoir computing framework, we demonstrate how a simple model can solve an alternation task without an explicit working memory. To do so, a simple bot equipped with sensors navigates inside an 8-shaped maze and alternately turns right and left at the same intersection in the maze. The analysis of the model's internal activity reveals that the memory is actually encoded inside the dynamics of the network. However, such dynamic working memory is not accessible in a way that allows the behavior to be biased toward one of the two attractors (left or right). To do so, external cues are fed to the bot such that it can follow arbitrary sequences, as instructed by the cue. This model highlights the idea that procedural learning and its internal representation can be dissociated. While the former allows behavior to be produced, it is not sufficient to allow for an explicit and fine-grained manipulation.
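To make the reservoir-computing setup concrete, below is a minimal echo state network sketch in NumPy illustrating the general pipeline the abstract refers to: sensor inputs driving a fixed random recurrent network, with a trained linear readout producing a turn decision. This is an illustrative sketch only; the number of sensors, reservoir size, spectral radius, toy sensor data, and alternating target are all hypothetical placeholders and do not reproduce the authors' actual model.

```python
# Minimal echo state network sketch (NumPy only). Illustrative reconstruction
# of a generic reservoir-computing pipeline, not the paper's actual model:
# sensor count, reservoir size, spectral radius and the ridge readout are
# all hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)

n_inputs        = 8      # e.g. distance sensors on the bot (hypothetical)
n_reservoir     = 300
spectral_radius = 0.95
leak            = 0.3

# Random input and recurrent weights; rescale W to the desired spectral radius.
W_in = rng.uniform(-1, 1, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= spectral_radius / max(abs(np.linalg.eigvals(W)))

def run_reservoir(U):
    """Drive the reservoir with an input sequence U of shape (T, n_inputs)."""
    x = np.zeros(n_reservoir)
    states = np.empty((len(U), n_reservoir))
    for t, u in enumerate(U):
        x = (1 - leak) * x + leak * np.tanh(W_in @ u + W @ x)
        states[t] = x
    return states

# Toy data standing in for sensor readings and the desired steering command
# (+1 = turn right, -1 = turn left at the intersection). Real data would come
# from the simulated bot navigating the 8-shaped maze.
T = 1000
U = rng.uniform(0, 1, (T, n_inputs))
Y = np.where((np.arange(T) // 50) % 2 == 0, 1.0, -1.0)  # alternating target

# Ridge-regression readout trained on the reservoir states.
X = run_reservoir(U)
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ Y)

prediction = X @ W_out   # steering signal decoded from reservoir dynamics
print("training MSE:", np.mean((prediction - Y) ** 2))
```

The key point mirrored here is that only the readout is trained: the alternation itself would have to be carried by the reservoir's own dynamics, which is the sense in which the memory is implicit rather than explicitly represented.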

Submitted: Apr 5, 2022