Sequence to Sequence
Sequence-to-sequence (Seq2Seq) modeling learns mappings between input and output sequences, which may differ in length and modality, and underpins tasks such as machine translation, anomaly detection, and code generation. Current research emphasizes enhancing contextual awareness within Seq2Seq models using transformer architectures and training methods such as reinforcement learning from human feedback (RLHF) and self-improvement, often incorporating specialized modules for handling nested sequences or tree structures. These advances are improving applications in natural language processing, robotic control, and scientific data analysis by enabling more accurate and efficient processing of complex sequential data.
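To make the shared encoder-decoder pattern concrete, here is a minimal sketch in PyTorch. It is not drawn from any of the papers listed below; the vocabulary sizes, hidden dimensions, and the GRU backbone (used instead of a transformer purely for brevity) are illustrative assumptions.

```python
# Minimal encoder-decoder (Seq2Seq) sketch, assuming PyTorch.
# SRC_VOCAB, TGT_VOCAB, EMB, and HIDDEN are illustrative hyperparameters.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HIDDEN = 1000, 1000, 64, 128

class Seq2Seq(nn.Module):
    def __init__(self):
        super().__init__()
        self.src_emb = nn.Embedding(SRC_VOCAB, EMB)
        self.tgt_emb = nn.Embedding(TGT_VOCAB, EMB)
        self.encoder = nn.GRU(EMB, HIDDEN, batch_first=True)
        self.decoder = nn.GRU(EMB, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, TGT_VOCAB)

    def forward(self, src, tgt):
        # Encode the source sequence into a context vector.
        _, context = self.encoder(self.src_emb(src))
        # Decode conditioned on that context, using teacher forcing:
        # the ground-truth target is fed as the decoder input.
        dec_out, _ = self.decoder(self.tgt_emb(tgt), context)
        return self.out(dec_out)  # per-step logits over the target vocabulary

model = Seq2Seq()
src = torch.randint(0, SRC_VOCAB, (8, 12))  # batch of 8 source sequences, length 12
tgt = torch.randint(0, TGT_VOCAB, (8, 10))  # batch of 8 target sequences, length 10
logits = model(src, tgt)
print(logits.shape)  # torch.Size([8, 10, 1000])
```

Note that teacher forcing applies only during training; at inference the decoder would instead feed its own predictions back in, one step at a time.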
Papers
Transfer Entropy Bottleneck: Learning Sequence to Sequence Information Transfer
Damjan Kalajdzievski, Ximeng Mao, Pascal Fortier-Poisson, Guillaume Lajoie, Blake Richards
Sequence learning in a spiking neuronal network with memristive synapses
Younes Bouhadjar, Sebastian Siegel, Tom Tetzlaff, Markus Diesmann, Rainer Waser, Dirk J. Wouters