DaG LLM

DaG LLM denotes a family of large language models (LLMs) adapted and fine-tuned for Korean, reflecting the broader trend of tailoring LLMs to specific languages and tasks. Current research focuses on improving LLM efficiency through techniques such as adaptive data engineering and reference-based inference acceleration, and on extending LLM capabilities to diverse applications, including machine translation (notably code-switching scenarios), named entity recognition, and lane-change prediction for autonomous driving. This work advances both the theoretical understanding of LLMs and their practical deployment across domains, particularly in low-resource language settings.
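To make "reference-based inference acceleration" concrete, the sketch below illustrates one common realization of the idea: drafting candidate tokens by matching the generation prefix's last n-gram against a reference text, then accepting only the draft tokens the target model would itself produce. This is a minimal illustrative toy, not the method of any specific DaG LLM paper; all function names and parameters are hypothetical.

```python
def propose_draft(prefix, reference, ngram=3, max_draft=5):
    """Propose draft tokens by finding the prefix's trailing n-gram
    in a reference token sequence and copying what follows it."""
    if len(prefix) < ngram:
        return []
    key = prefix[-ngram:]
    # Scan backwards so the most recent reference match wins.
    for i in range(len(reference) - ngram, -1, -1):
        if reference[i:i + ngram] == key:
            start = i + ngram
            return reference[start:start + max_draft]
    return []

def verify(draft, target_next):
    """Accept the longest draft prefix agreeing with the target
    model's own next tokens (stubbed here as a given sequence)."""
    accepted = []
    for d, t in zip(draft, target_next):
        if d != t:
            break
        accepted.append(d)
    return accepted

ref = ["the", "model", "was", "fine-tuned", "on", "korean", "text"]
prefix = ["we", "note", "the", "model", "was"]
draft = propose_draft(prefix, ref)
print(draft)  # → ['fine-tuned', 'on', 'korean', 'text']
print(verify(draft, ["fine-tuned", "on", "japanese"]))  # → ['fine-tuned', 'on']
```

Because verification only keeps tokens the target model agrees with, output quality is unchanged; the speedup comes from checking several drafted tokens in one model pass instead of generating them one at a time.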

Papers