Domain-Specific
Domain-specific adaptation of large language models (LLMs) focuses on improving their performance and reliability in specialized fields by overcoming limitations that stem from data scarcity and domain-specific terminology. Current research emphasizes effective data curation, including synthetic data generation and knowledge distillation to transfer knowledge from domain-specific to general-purpose models, alongside novel system designs such as graph-oriented databases that improve performance and maintainability. This work is crucial for broadening the applicability of LLMs to diverse sectors, improving efficiency in areas such as finance, healthcare, and scientific research, and addressing concerns about bias and hallucination in sensitive domains.
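To make the knowledge-distillation idea mentioned above concrete, here is a minimal PyTorch sketch of a soft-label distillation loss between a teacher and a student language model. The tensor shapes, temperature value, and random toy logits are illustrative assumptions for this summary, not details taken from any of the listed papers.

```python
# Minimal sketch of soft-label knowledge distillation between two language models.
# Shapes, temperature, and the random "teacher" logits are illustrative assumptions.
import torch
import torch.nn.functional as F


def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between temperature-softened teacher and student token distributions."""
    t = temperature
    s = F.log_softmax(student_logits / t, dim=-1)   # student log-probabilities
    p = F.softmax(teacher_logits / t, dim=-1)       # softened teacher targets
    # Flatten (batch, seq, vocab) -> (batch*seq, vocab) so "batchmean" averages per token.
    kl = F.kl_div(s.flatten(0, -2), p.flatten(0, -2), reduction="batchmean")
    return kl * (t * t)  # standard T^2 rescaling of the soft-label gradient


if __name__ == "__main__":
    batch, seq_len, vocab = 2, 16, 1000
    student_logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
    teacher_logits = torch.randn(batch, seq_len, vocab)  # stand-in for a frozen teacher model
    loss = distillation_loss(student_logits, teacher_logits)
    loss.backward()
    print(f"distillation loss: {loss.item():.4f}")
```

In practice the teacher and student logits would come from forward passes of the respective models over domain text, and this loss would typically be combined with the usual next-token cross-entropy term.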
Papers
RIRO: Reshaping Inputs, Refining Outputs Unlocking the Potential of Large Language Models in Data-Scarce Contexts
Ali Hamdi, Hozaifa Kassab, Mohamed Bahaa, Marwa Mohamed
LAW: Legal Agentic Workflows for Custody and Fund Services Contracts
William Watson, Nicole Cho, Nishan Srishankar, Zhen Zeng, Lucas Cecchi, Daniel Scott, Suchetha Siddagangappa, Rachneet Kaur, Tucker Balch, Manuela Veloso