Device Seq2seq Generation
Device seq2seq generation focuses on adapting powerful sequence-to-sequence models, such as transformers, for efficient execution on resource-constrained devices. Current research emphasizes parameter-efficient architectures that fit tight memory and compute budgets, ranging from slimmed-down transformer variants to approaches that serialize graph neighborhoods into token sequences. This line of work is crucial for running advanced language processing and other sequence-based tasks directly on-device, broadening accessibility and reducing reliance on cloud infrastructure. Pretrained, readily fine-tunable models further accelerate practical deployment; a sketch of one parameter-saving design appears below.
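As a concrete illustration, the sketch below assembles a deliberately small encoder-decoder in PyTorch using two common parameter-saving tactics for on-device models: a narrow model width with few layers, and weight tying between the token embedding and the output projection. The `TinySeq2Seq` name and all configuration values are illustrative assumptions, not drawn from any specific paper in this area.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Illustrative parameter-efficient encoder-decoder (a sketch, not a
    published architecture). Savings come from a small width, few layers,
    and a shared weight matrix for the embedding and output projection."""

    def __init__(self, vocab_size=8000, d_model=256, nhead=4,
                 num_layers=3, dim_feedforward=512):
        super().__init__()
        # One embedding table shared by encoder input and decoder input.
        self.embed = nn.Embedding(vocab_size, d_model)
        self.transformer = nn.Transformer(
            d_model=d_model, nhead=nhead,
            num_encoder_layers=num_layers, num_decoder_layers=num_layers,
            dim_feedforward=dim_feedforward, batch_first=True,
        )
        self.out = nn.Linear(d_model, vocab_size, bias=False)
        # Weight tying: reuse the embedding matrix as the output projection,
        # removing an entire vocab_size x d_model parameter block.
        self.out.weight = self.embed.weight

    def forward(self, src_ids, tgt_ids):
        # Positional encodings are omitted for brevity; a real model needs them.
        src = self.embed(src_ids)
        tgt = self.embed(tgt_ids)
        # Causal mask so decoder positions cannot attend to future tokens.
        mask = self.transformer.generate_square_subsequent_mask(tgt_ids.size(1))
        hidden = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(hidden)  # logits over the vocabulary

model = TinySeq2Seq()
n_params = sum(p.numel() for p in model.parameters())
print(f"parameters: {n_params / 1e6:.1f}M")  # a few million, plausible on-device
```

Running the snippet prints a parameter count of only a few million; the tied projection is counted once because the encoder embedding and output layer reference the same tensor, which is the point of the technique.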