Domain-Specific
Domain-specific adaptation of large language models (LLMs) aims to improve their performance and reliability in specialized fields, where data scarcity and specialized terminology limit general-purpose models. Current research emphasizes effective data curation, including synthetic data generation; knowledge distillation, used to transfer knowledge from domain-specific to general-purpose models; and novel architectures such as graph-oriented databases that ease performance tuning and maintenance. This work is crucial for broadening the applicability of LLMs across sectors, improving efficiency in areas such as finance, healthcare, and scientific research, and addressing concerns about bias and hallucination in sensitive domains.
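Knowledge distillation, one of the transfer techniques named above, is most commonly implemented by training a student model to match a teacher's temperature-softened output distribution. The Python sketch below shows this generic loss formulation (following Hinton et al., 2015), not the method of any paper listed here; the function name distillation_loss and the temperature and alpha hyperparameters are illustrative choices.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend soft-target KL loss (teacher -> student) with hard-label
    cross-entropy, the standard distillation objective."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # T^2 rescaling keeps the soft-loss gradient magnitude comparable
    # to the hard cross-entropy term.
    soft_loss = F.kl_div(student_log_probs, soft_targets,
                         reduction="batchmean") * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1 - alpha) * hard_loss

# Toy usage: a batch of 4 examples over a 10-class output.
teacher_logits = torch.randn(4, 10)
student_logits = torch.randn(4, 10, requires_grad=True)
labels = torch.randint(0, 10, (4,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
```

Higher temperatures expose more of the teacher's relative probabilities across non-target classes, which carries much of the transferable signal.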
Papers
AttackQA: Development and Adoption of a Dataset for Assisting Cybersecurity Operations using Fine-tuned and Open-Source LLMs
Varun Badrinath Krishna
DARD: A Multi-Agent Approach for Task-Oriented Dialog Systems
Aman Gupta, Anirudh Ravichandran, Ziji Zhang, Swair Shah, Anurag Beniwal, Narayanan Sadagopan
Asynchronous Tool Usage for Real-Time Agents
Antonio A. Ginart, Naveen Kodali, Jason Lee, Caiming Xiong, Silvio Savarese, John Emmons
The Universal PDDL Domain
Patrik Haslum, Augusto B. Corrêa
Evaluating LLMs for Targeted Concept Simplification for Domain-Specific Texts
Sumit Asthana, Hannah Rashkin, Elizabeth Clark, Fantine Huot, Mirella Lapata