Paper ID: 2201.05222
Assemble Foundation Models for Automatic Code Summarization
Jian Gu, Pasquale Salza, Harald C. Gall
Automatic code summarization benefits daily software development because it can reduce the burden of writing summaries manually. Currently, artificial intelligence is undergoing a paradigm shift: foundation models, pretrained on massive data and finetuned for downstream tasks, surpass specially customized models. This trend inspires us to reuse foundation models instead of learning from scratch. Accordingly, we propose a flexible and robust approach to automatic code summarization based on neural models. We assemble available foundation models, such as CodeBERT and GPT-2, into a single neural model named AdaMo. Moreover, we utilize Gaussian noise as a simulation of contextual information to optimize the latent representation. Furthermore, we introduce two adaptive schemes from the perspective of knowledge transfer, namely continuous pretraining and intermediate finetuning, and design intermediate-stage tasks for general sequence-to-sequence learning. Finally, we evaluate AdaMo on a benchmark dataset for code summarization by comparing it with state-of-the-art models.
Submitted: Jan 13, 2022
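
The kind of assembly described in the abstract can be sketched with the Hugging Face Transformers library. The sketch below is not the authors' released code: the checkpoint names, the noise scale, and the example inputs are illustrative assumptions used only to show how a pretrained code encoder and a pretrained decoder can be joined and finetuned together.

```python
# Minimal sketch (assumed, not the authors' code): assemble CodeBERT as encoder
# and GPT-2 as decoder into one sequence-to-sequence model, inject Gaussian
# noise into the latent representation, and run one finetuning-style step.
import torch
from transformers import AutoTokenizer, EncoderDecoderModel

# Assemble the two foundation models; cross-attention is added to the decoder.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "microsoft/codebert-base", "gpt2"
)
code_tok = AutoTokenizer.from_pretrained("microsoft/codebert-base")
text_tok = AutoTokenizer.from_pretrained("gpt2")
text_tok.pad_token = text_tok.eos_token
model.config.decoder_start_token_id = text_tok.bos_token_id
model.config.pad_token_id = text_tok.pad_token_id

# Encode a code snippet into a latent representation.
code = "def add(a, b):\n    return a + b"
inputs = code_tok(code, return_tensors="pt")
enc_out = model.encoder(**inputs)

# Inject Gaussian noise into the latent states as a stand-in for contextual
# information (the noise scale here is an arbitrary assumption).
enc_out.last_hidden_state = enc_out.last_hidden_state + \
    0.01 * torch.randn_like(enc_out.last_hidden_state)

# One finetuning-style step: the decoder attends to the noised latent states
# and is trained against a reference summary.
labels = text_tok("Add two numbers.", return_tensors="pt").input_ids
loss = model(
    encoder_outputs=enc_out,
    attention_mask=inputs["attention_mask"],
    labels=labels,
).loss
loss.backward()
```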