Audio Captioning
Audio captioning aims to automatically generate natural-language descriptions of audio content, bridging the audio and text modalities. Current research focuses on improving caption quality, diversity, and efficiency through advances in model architectures such as transformers and diffusion models, often incorporating large language models (LLMs) and contrastive audio-text encoders such as CLAP for stronger semantic grounding and evaluation. The field is significant for audio understanding and multimedia applications, with ongoing efforts to address data scarcity, the limitations of current evaluation metrics, and the need for more robust, generalizable models.
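A recurring ingredient across the papers below is CLAP, a contrastive audio-text encoder whose shared embedding space can score how well a caption matches a recording; SLAM-AAC's CLAP-Refine, for example, re-ranks candidate captions by audio-text similarity. The following is a minimal sketch of that scoring step using the Hugging Face transformers CLAP implementation. The public checkpoint name, the placeholder audio, and the candidate captions are illustrative assumptions, not the exact setup used in these papers.

```python
# Sketch: rank candidate captions by CLAP audio-text similarity.
# Assumes the public checkpoint "laion/clap-htsat-unfused"; the audio
# and captions below are placeholders.
import torch
from transformers import ClapModel, ClapProcessor

model = ClapModel.from_pretrained("laion/clap-htsat-unfused")
processor = ClapProcessor.from_pretrained("laion/clap-htsat-unfused")

# Candidate captions, e.g. beams from a captioning decoder or a retrieval pool.
captions = [
    "a dog barks while cars pass by",
    "rain falls on a metal roof",
    "a crowd applauds in a large hall",
]

# 10 seconds of silent placeholder audio; CLAP expects 48 kHz input.
audio = torch.zeros(48_000 * 10).numpy()

inputs = processor(
    text=captions,
    audios=audio,
    sampling_rate=48_000,
    return_tensors="pt",
    padding=True,
)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_audio has shape (n_audio, n_text): audio-text similarity scores.
best = outputs.logits_per_audio.softmax(dim=-1).argmax(dim=-1)
print("best caption:", captions[best.item()])
```

The same similarity score also doubles as a reference-free quality signal, which is one reason CLAP-style encoders appear in both the generation and evaluation threads of this literature.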
Papers
SLAM-AAC: Enhancing Audio Captioning with Paraphrasing Augmentation and CLAP-Refine through LLMs
Wenxi Chen, Ziyang Ma, Xiquan Li, Xuenan Xu, Yuzhe Liang, Zhisheng Zheng, Kai Yu, Xie Chen
DRCap: Decoding CLAP Latents with Retrieval-augmented Generation for Zero-shot Audio Captioning
Xiquan Li, Wenxi Chen, Ziyang Ma, Xuenan Xu, Yuzhe Liang, Zhisheng Zheng, Qiuqiang Kong, Xie Chen
EnCLAP++: Analyzing the EnCLAP Framework for Optimizing Automated Audio Captioning Performance
Jaeyeon Kim, Minjeong Jeon, Jaeyoon Jung, Sang Hoon Woo, Jinjoo Lee
Expanding on EnCLAP with Auxiliary Retrieval Model for Automated Audio Captioning
Jaeyeon Kim, Jaeyoon Jung, Minjeong Jeon, Sang Hoon Woo, Jinjoo Lee